var/home/core/zuul-output/
var/home/core/zuul-output/logs/
var/home/core/zuul-output/logs/kubelet.log.gz

[kubelet.log.gz: gzip-compressed binary kubelet log; contents not recoverable as text]
A\ϯŠ0FW 4H ('0E2GٳrSkQt%"4me+(H&x ds^llM6gqi2e.&0vZޫɡ9‰Ļ߮.6A29׷=7莳џ/L<4sc_K9ЦG`Z"j!`EVQLҹJND%(1.]|M_r.)`0`>TʡRZA526g72ng)/fƁXdc, ^an8_\w"̾p~9<\^^}8b i32 e9كA(2sm$PTRjs]!Vn[]]T?|V6Lr&{/l2LtH+qv#v婠vq(jdG`22;K"c* 2!%m|0:;mXSҰ̐)$k2 tT@đ1?٨&cRG~WT8McD;"n +/= P$bSبZ0у6j`)lk `mcFg,Ařߗ\Mulj-ief5ƈL=?ĸ8[U3;fP\l\Bَx!'Oxُ9c({ dk*S0^_ޑ1{+1(R"[3` ,Eevȧd8klQz+b.$ vi |=;@gs0#':FgL>'gsnWů Ffo 8߽ڰ/L>NN6P*r=窨۹*|a~:Ί3^' NxQ;o,B.RfγDL?Es$n⇤2c .v{0W@5㿥1%6BtJ92aNRM7<**-.7?x }V*\fwt\Fw!64`]^?O,R9ư[z|zX=du݅W!Ptn ~72=h}>L=ggM{K=8}M[u3;O%zjt߄|q9">GgxSrNDcT{%v |a ϗ7Wi>i` 3!@C&b2@YAGݻ~{ MId1rmԔLΈ\0dR)ՃZICցRtvFyc Eː2AGm+REŅI,2@L=miW;%=zWŗ{|a>4Bi<>?,ϚU~Ar"P2Ra⋪1N,6͎0CB6Ih-)$5At1E D2*RuX A!٭7Wګկn9n)f[/dY*rco?ԷS~O}o?/f33Uvډ%OwRJթWR٫|UƆҷVcjN*I;t'ҝTJwRN*}#`ZF3=͇sT<},bZ0xr`.1o#ӃiכsP=Pۧ]uJoWbۙ 1-GcZya.y\h:ˉm"LT)HCQgr(.OX70ȍ'k9q$BܗM7RWW׹H=,pXjD*$e[W=$%J")JI=F,GawMOĸlr' 0-|֏fJ(0G!]H *J7QGkW - }Uیǚy/!(:堭1NY02LZNyFb}0m "L Emҵ<ym ^o]ߗbVZ;X c?QacZzh8M.wBsfY`ZJiPg9bDw(%ug{SR>hŞ(%A, }9TS}\CeqML( ],AmO'ػEfNtSa^]$l*2HtF݈=B}X:e CQXBAAEhA>c쿥d1YNKxJѝx\ˆ qCw,[ OaMdsiolnsO.K7枴@f,dJ6qhTu VZ|1U+F})UǙZ-ܾWTʾXuj:ύ[&քU^|98cQV FcFrr׳}&whۦfӧ= q_z|y^vF#AW.&+$S|. ` ZDPPY^8bpErȪ`BWXfgl%07k#s6o2yƛ|e+j|މ\Q&F^YĀE'LYz2$Wx?hPUh_IZCZ%)۪dubtX ,dPW ]j/< UOЈh-89zMXwa 1)*aI LeQ[Ow*bN6U0{^Rev$CČB 1Y*#K FF+J 0eCԭ =dwxmFjf3U763_\@8j#|xi;9`VQ .|WA҇zS??(ttY^i]%0r<+:Pwd Ok(KcwFӅ:<8G 0i<8X Љoז:#((歒|HxM:ZO1/ =*=:z_FJ>w8dʃ0ѧvk] BJ+x.ᇫ㋏Üǯϸ rOϼb qζMM_Ao}7Dچ~_|v*8M%9 r2<:,] 1~;^ҞG%Ħ晔^?iiKN*oY 3ƭX-xtr\՜lw/*Yi֍AY?-%N}~OR|ګ !`y׈2C;L w? i_,6G~e35"^QWlp:H!Z~AinUc0)._y]K #]xU0ՌKo*em ~wEEMf: P'0=u<>+S_|N]yBp>X뙍Ekj+TJ3Di l/;uOY};ոU+4;_K ua+m))~@l SOS)JPlBx+9^񐭨/>Cפ5O}P%||Ppe`A[6(Ԓ cĺ} ,Xf*=Xe:_%YD) %!(ּ񆝡hel#5Geb(>" !h=K? d"焆SFt,]s7[WgaӼ U0 (S`6`BP !0E/TA.K6%e$%ؒfB$K2 3̌t2rvՎ[->[PoxHm]LJH5&etz"RN:[B(^REg!zA9|\kѷ.c,Df Y*t>U`9f/SU)'푌Mh_g}=v A[Q>B@Wk=}4vϧFY 6hR4>8(%tɅ&dd`1Rs;>[x}Pdʸkl - BE/N^ o"Ya&ۚ1t Wj`$$_b c@IL:[%茜HGMiKw}5y=uJ go$?vח~4=:ztGfYqBזw0sS"Fڧ`WIt=oWn6#M,ʳKn|ak=.%QB` 3E*Ds33RɈ.)i. Bl=*i HodPu*^5I18!5g(6~`hq)'ø.o2?͖m$}stKغVߚ>~ dzHLl /mIYyV|?4/]O|4Q4{oY)'&aይZ--|z*1tOdԭw@7>1Ii4}b7,⮻=y7]X̧s6qvk--}ńn+o% We<4XVG ~?:Nc^;03޵6v+E>r$E қ/mWw#ɻq#ɲײdY C rkYv>p:b@\9 Z$I̎˅hÊT]+XЌD焒&$lSmNh1`j*3LW9^oGx^A£ǪMbJ%P>tBAdU_iqʭYdBRJK-O/G5 P$=B1wGǓV^/9: 0ŋo=D.-y} *8^DOOb?)Fn@LDVi]Qi͜L|Re~HY+.n*ol픩o }Ъ$AF"(U>]4mnۉz$囥g7G Gߵ2nuvp|rOcj:u^0մ^VwS.@ h7 W^N'^BJ[xԚd6*X~R1˴m$zhh΄Щ[~}f͗ͬJؚ aP[mJ$qAY#іȹ WLOh:'*2jV[ͮWa}M}>PaFzY;юy9qC?fBa| cⴵ{¼ c+ɆMTgÍz绾`=;--=mB1f7i<;o(4G VH MԁhlF̗JJY&hM'KQ`f㦥l_G4t>z!Sf謈AdK@24 B̀a$CƖNu If`N)[WB/"(Z2JfcB|΢]VfJZ,a8mYc O G4O[ UpV ړg.&5h{jf˭R_**PxE άTV%ٲ,l1h eW Υ"3UQ[S!0< 2pAЃFL.1ٜ74Ė\I j#c5r#Zʋ`a' ^ ]Ohwf~UN7\wۋeήgqQ$4~4| gW)qks2Y":P-\  tFMV[* ,dz5'zMDM&jLFd2{̺,CtY4H-r#vӚqy,]mwڢ.j/@ڽa;ͅv~-7;H 68␴u0+m| VqI2iR$dQ 6jR RVE2x_[~WXeeD{D|hS*# 7y\qUUx KU)#8AFI#%Uƺ6126F%^ q! ΄轨QsHDEe!ʈX#-u \-svYg]/.V΂$T=.> %R6Kc‰AFT)k!Rx4xXkw싇2Y0 lK".v+#5Q>v.^q;˸C: JBkiN&J!SIBq P Y0(i遐Yrn c]-[i%5xn!l`v[E?37o~3N]DIQ  7V7 $N}ƭc/ ӏ$#UEaj"0H*q;Lwm{khw#Qˬ]̏S2"&-y5Noh:պcσ+->d\̋: Y{r">b0b0ߠQkyǂG)Qq,Bm> :w@Y5Dn؇,d_'^[ ZYdN7ΧܔT;>(4c9~B*! ` R ΜJHJǮ**+T 'WDdઐ+UcBU=\BI(͔<"rTWD+,^!\IJCV \r \j k7=\R/O+ )ʉOr}w6Pߝa{6{kVwO'rZؤ/A>%~:`L M10'm =Έ˫Q[4 >7?r<|L[ax$~i{13)pn#Χ8k0͝Q'$IAfNFr;y^5G/ N5sm4SՓjn7+ҔL٤dC2Й 0Ȭe1 4ؐ Jǔ~Ϩ%A3ڏeJJrax>mwަoc3ֳebR[asV}V'Mk1oK5 zzDxD({y{``]f>dR%l %B2&2xږ| 9U C DŽ!y#=mʒ~&EN?̪EZw"z,^<%Oxdlي0 c}} HA0򨸐@wy}g2O=ly61xB/<#Rݠ 6ZOq1P{䩂aCb&M2/j|m]a߹#}vMXmJwOh7=,Ǖ\_~8tU)['~N7;dsKGH4+6p_2zoeƲo⇣dw9e|҆i4ȼ,B[ײ]ځӁP#9nHX$QHbv\.4GVDJH_t!Eo/еzlxxtfL'B?'46!a"h(=@tZEEAlN0Ճ^+;Cp~uy,k=VeL[Ϋ^xXu~Q _採T*|Jy yDpZdabTx45]a,Šh mBT x&tAVy-cGD詂YesDw=SRBm&CGdhRsp"&/'2L)9Ƥ}I瞃. 
1'('O K_]o9ەЂGdy/d/umX0 ؐL^)Nk_iWmx!(2~Cnu˙d+/UGEI;P9ʄK$0Ɍq 7.Yk"mF1eN=V# @=+@?%5$Y>i2Kt<1%mRm᝞eFЎqȭ4AhQE:Ffhy9;YWlyF-K;y0gĂ GiRD>Jf$'#i $o99xi7,P>}-; hE@l6XW,YOWU;nWZq>NZ1q_YQ8xOB Ą,yMRGgV Y4In$i] 6Q8jڞjުW{Rkl!XT`<EŵTdI@Ff"(2v|R/+mWSl/ -Xٗle[#{%O&3%;..zzB*1UB8ϝIBQ=VX'vȾ*s{W"X/\숚4m;K7:Dq;Y:$&/l"jVr@u!?{ϢƑ;RΞKA5~TK(R&)+V-DDÖfwuwUuUu= )+&h!T)#Jm=ʼ{Ń3/bWYft2q,aZiSd:2 uE8$! I3J6&GEME,+aRkhksqO63>gsSt)ZEy]l^zGcj^ޅ~tŶn}J‚\YuV#myi ,{X\D K9g!J)*$8 "tn(r-H3ٍm*0%OAZ R hTF"!!2Sjs /i}FV0enuY ϫFT߽^OW D|j-Q޲8Ձl".599׊sÉ wNCdW'бQ@Jz!ԁhUʲTpA 잟e^]>*bn2|Iئ8 gNѤo#gSQ?f4jݪg3#4RPx{3=x7+FQ\?i_T+teg Eho2#pvFj$z$!tjzW3* #t{%ҝv =پjMvtTF֏:QWRb'r+)$,}Wje)=^ŪNٺNoUgng߽{۳|뇳}<̜˻'\!<NV f ?~~hA͍fhQG ͸)w X*09YKSǛ?v wk893Q+ng+nlWZUZЪq{ .JvXnb@mI霧Zi^xXR]k4E- SE/%ywN33<7'P^_Uc$I0 x$1+#1ˠ-v?zAV꒦9c)(hh(Q-}Dp6;NuJb &4CM!%!KZ._6JwzU ^VKd0VKjj*Fx^M- @$Q8cj} ,{XR#Jd@+ WK $K\$4O xʐ,j<= 7eaGP$$5*X\[~"%)C$ 61BItY BXuH eYwZXФNxp7 LD"yK獂T}S% P?Dюh^L_bjK4_xU|&o^M¥h׿ `S}{ps.xӇ y$atQBGʚqֲW+K? k1=7e:j:o[ݶ[-dY< wypFn:׭Gߔ7yף7?wnm];Û~wP18@O߷d7UL,ƶ2I u~K;$~ݷ^Q!Zo/gX;]|PHK )χ !Ra1D+R SKo3|~K&3[[g.wge.ٲ௣usLٔd t$lDG+ZhOPXb344qZ0Pm UVul"oo#fmlV嵤moM\)7|j5f'C7^N*VNj6wpl)s6hQl..1D@G !(NBZ"Eb I,0,N6B>BlJNSC$,G',4P9gQ2wщnz tӉ%CYpt[Cl[dJOғVk۸%.mWm}7u/ `W+&_[3[4`Ss͉}P3j6R38(mDÈT&Rmҧŗ>D)HRpم!/Gґpz ,VVKInD"&AUI !#zj1!=hx[F 2"rbCGFXL0(SK^"gPch]F<ɬz%ЕnWWn?4KC d0O^}p'=JKLDkiWo~?OH[qNɹjuO%2[ L%( p8a5Sc >Ϡׅ"j\;/}brDkqn}v!t2* Y쭷rH/bk~7K }nQ$=~/n+[D6>~.Vf)B]TAmU0/--PkuaC IZϩ3P GgV[KUϰ3_9Ajdbo^Pcp<@Y"1Z(OJgN]\L(x c+ Xk)į ^'EB>$h&DGͥT-Ae UHp,i@t"^kbB[uE!~ӻəF|GTν.<)s=l(ߕ+r S]ؾՖL2@zeu|9l ƨ ;2Ez>D%b"x2$ c2"ER!NjZ929Z01qʐ@0H,:&%4DE%%dRus"Z) 6ؑ2Z3/ /|T^8(mxVxU/ɪw0-^Uu:N{09626Yj4򜨝"Qf3VZP&p粔o̭p@M> O84dE{Yf\頄W"jfNcU)@"A]g3nQ.k׆rmV/ n$oep:PNSjm",{Fd& H*PCFE" C.ŀr\8$R ڹaml懥R?|̌X8bm+G5sDpĆ#x R3Mu &g$$͡4%edLv\P"09Zzuzk(!>n Rp^i,d>qAjLh$*EV{䋃bD\d Kv勺f(ŵD 6`ȕ7`YtZdYiP$%s _| x4.ؕښM2.kp7UyFn>k\$-x H*uY ws3r+/3Q<#K)x Yvs{~քFvFD,#I yDML $Nږ eV1'8ÁHT ƹDlmN-8Gd=ly}>ųt.liUuNMyCjU:|cy>m ;]egiR)n?A!ёRߛKo F;꧴0U#C%ϦYW/CBH3džTEӇ4"2?DI=o&7\{]:ټ>~.-{z;/ZZud?T%~ͩos?霠]NzSvhZi]:{gai_y-F]4@V|5bR*ld|[}tȹH -ѠJ%(IeWI҂ BR"@e5lG\hZ^[^3ֈ%cDa=۝ЂiTGR}18 i`.oMkjTXI0r|@ZWUĽIӆ)M{_S;K7KZ6Bthm"JeLD9YD+VcfjtbeW(VQb 4r^x+{֐H'cQ5LIo4>r؜Mp-(s4I9i3$ˮEX"n($u?N\׍vBR7BRKYR)c, 8Xu9Aj$pv#\Y |0M3x.TttREN:mn ~SC>6sOfN,i?] %q&L'.QFXLy%WK_L\"d-8 OO u_&3Qxx~nXʬKs.S]ݠ"HPN"O:Evkc0ø5K,?*@UїoVeuwR;s<)கHlĮ$ʨ;ʶqtgsupz.$ &ME7a$M;FfكxaGjGt~$l<4W Ÿ/'$XSvAϗui|)z1_m4U 1ipW})\5;_@98p ߮nmK֖f-©\NP_˟tGjDaQ_͕jz9\Eru\${Z(ίu`͞їǨ7Em> 10c! n%zasol;Q5ު$S̠mx}I}7nlx&mzq7f h fD7ŒU"Z#Ԯ 3* oFGL!u|8P}hMQn.T'G˹khzSk.+M-dS{|8|4npQ5bHhځ6F !6ʀxt6}?ckAkA;fs=4tlzGZ=gL5At\5}h ce g~F4uӘ)|hf WJڜsA)ͫHH.}Eui[ԼOPr[J"aW\]huvUԲcWO] 2 m$/ҜϪ^=F3wkىpS!UH?>gʹއcY @ϩZ]Z&>{p> dEBjiۯ7F˦zMd1x%[K!\y= ĔNN|DqvWc;+CP Rb⏭ V$n+i3Y]H,361d@,@<$PF%j$3mRڑ6 '1$x%t+H/]5pΕ,E'Ũ~@H=U]xy]bHZmeel;jTt~-\} x- 5VV{{D]M4gMj?F 2\K騘ɂ. -p6KuȆ tƘh KEkOp)Ak ih$OwY/;&А@ ,ʶ[;_q5YzOBxrXFt$|0x@utmyCDb.U50uXA2Jfb3F=c&#!3e>2h,KE_K4CH:1.JQ0BOBCDyNsoX.$3!deb+n:Bj6}v OԚ$Y4?@XTJ6h*ljRGDNҸ~jrRNxI}z rR8jRL~t̞4ާ&'|_]3'lTX2{q(C3vpb ҩ9 >gV$VƒwflZ֓ڇ mH/O@DF0}ٙWhr>^@"?S&I}駫Nj^6O;0YW;;("Dz1nzt[O璱U33I|ULKG7D1 *u%Mp2՜MRj֕kE`x94J3 )+B?ϛЋq\!@5t&-GM &;@_~ǿ/O/H|G;0nJpAȭAv-1_^ښM-\gj檧μ%w +p)Y}7 %Ɨ5hsS.d椝X*An8 Md~zY0Lh5  a!4ԀU~&s|\n!ViQ_Ciآu(S0 X9f g73e hnn񪋱gTIb`"V`YFJ?iuـrb>Y 0Y(ٲ%Vѓ ND\Ψޑ޵40ݎWPd#KR(J&Od <79tĐlQtI5M9K]Br Sʭ7UV][tZvVoejya,% c}_GӪmu&j'1M_AoZbuT?w׳_=;z^!ԉ$}-05jY`|~|vRum^H'!HN)OVHMPୈіJ!1(6Eƈ_3^ncjq[iڵֶn,dmx嶬x A@IגeY;2@.yy A AsUU]:(ٓ9D"\ 9)dΜd{dPcvI`#&FXO!D`9dK@陕ZhKCvIkm<g~/|ăx>]Q)$Z SY'_+ydҋΆo/ QP39FU! 
&IIc !#Z(gl6lMIŶ.P{SisXfso9|rksy.yT-z&ծ=u#"% H䐵č\j  eQe COHw:!;O+.ho?DۺtT/wuGX[JQa%cy+վt)jg k$;ϐ~ڿɐwxZY`:./q*IktR2Ud]TҪd՗AhNvN{6 kou_Ezfm/\F,L+[9MD'6脐چX\N+)KdPrٻF$W}Y{gJhX^Lc={F-TT*JKdRej+^ˌC#k:P!f'BRU-f^>pB5Džn=g?e2{xηgqy<3RWxs ڥlѲ4nؑ!s IqhL=GF~!IcPͮDұʩT*p&$#n'Sa3fTlUX]zecP^/v)(<✦IU"$'z}ݔ}E_hx;Oήq^nױqNIm] %%nȨ@m#,_?%/2\?Ex +@k_a;&lwMޱN덪_8_{?_B`ʗUnmgݒھtK\VmvFT3qS[ֵVw7]ZڬfEf P nV,?,ѬߏoԖa˷)wQ9sZ.6ChQͪySյcxMe?JbsrdBLݥ P[j,[S o[c637Č*2׼p+^4 *vʞ;0܈T;()kKӓOSOZvNqkC9cJd[呇:+ *:*u %8$I˄`ÂT]<@E#,Q2)&yJ^fSMdzE@/NPlml `g\wmD\P/lGḋ(WI0 hgbL0捥pib\'|r9|FL ?X>izpx7o _>I!ﺜP 1HDq"W+v$$ ;!D "joo$Vkۛdu}l\`}9-v,o;~BFi"eq~zBrQ],EJ4n UPYLЦrV@Ŵ4t, تϐ\x7ljh 28C^)s$ g'RIPDRBHZcLH,giJŚ^GDr qs5^rk_7H "H!"'C<&^!y!BQ*$nI[RP)=1&`b!@0H,:&%4DE%%X9q"XXlelBacEE\%"|eyqhS.@..7#62T6Yj4bNN+-(sYK9*B&r'|5d"{YMtP+5Q'{ĄaU HЦHGl?ry(]lulڬ,j@ڝd[EJ()6 =mY[।<*aJ2jќuS#D A1`&(T c$ՠ+ŚkR?5x(Xl}l0";D\f@*)MHH2Q-KȘ[*F`#e95ȍA W%Kc 2@Q 4|Xs6>j8F~p9`JawC#neZN vj%n"[Vgk97~ڴ]cw  h=" m[./l ˟>E56hw߯wWՔ^S5N_0^h0nƉ =,[3%6ޓzt캏]>5ڜx $ AziN?-Ψ$-k<&LNrT*氹p&m\"0w/֜ ȻP]oν:p٪u'Y.y5>]OΚ2eLCFy3:1TH.IJ6wd0IF0o)LCeKe;Qep 8Xzt^t4mѼv--*qWGGƇG{rT'p@)c3(;nk"KY/FzσkǬ8e!;لT*]uw1t+bLpF4qh͌F;kIY99?gZ3B naq)JtwAMwc{ԑ8jO.]ui1DlD[a:ǿMѿU/Br'oup|0̎>`]9$AMc# Q'~Ƿoձ2ʕ}[7nɓ-}F`ty,| x'Gm$hXsЇ( ttbߝpܡF8p^!$WQNڸWBD*_ gUe]LUoq)&Nml-uB6')kk8!4ᣱ&A wu=%Iy[]%jJ;g7jG'mc \ B.;4k),!:ԣO?ӣ+\7'D+-{}8:i(yMSStztyېI+6ǥן^(89x6nz>پg^( 0g&{9sB`t_gZ/.rrh1UKRv9F k" odyo1$޻-'ŇI%O*m-lϜ+\j裇RUȈ 9w]"yۨ4g |<[RZ5ݹzPܔ|E )Icsss%48?@BC&t=Ӷ=^(=HKoVv|䞏\k޶>o<ƑtC0GaUo2\4bt]Eg7zs-* a1hsǶ.dÒ}W?_'s-:xhdxC_s(m"$杯_Uk}Cg _dC\oL2HKįL!%ddr/??TWrkYxJ P?ZHNqejIECm)Kz|1Ї/[z| 8:)8*jJڂ 'w^GpxY)GO.Đ .# 1q?!݅0n?ttW-ƾLcV_nM ^h]l4oW[4mT2g'G~< "k;ON@bVQW_zy8-q1,o q.6Uڹtd޹7ł]z=޵ñ*۵e{O%g睥Ǧ1οgb^?76Ui(0ݘr>^Vsvv(CYsw?9[62m 4{7ﮓޏ7:p N βrz;Oӫ0 \?l̏AKB չE,.OǣP})M6{3g+ǐ~#K;ܟh Jp=bM]hp(1rrMY[Lc7%0-=d'7s՝-4 lRH|)g)$+8^,Ͽթ{+ܻ^۠3?%w[I 3 /7mFAZϛG ",..o{fo1vs5ߴ'/<'{9ZVc\}GOu {b5o=0cm>Z( S]'_cW=I|E:UQʲ~ebN f5v L=!1r[}x-K]|59goMn]1/[]Wk}ݑVL?6-OkZ\I{bRUŞ)&krkKQ2 9e#TFyk9(甅bjR*֚T |;v!\TJ҃>CsGqJbnef[_jк03v`,Aon2LNZ-s('d8SZj s$mbl1P4b jd{sRe2IihU<֐Ǧ_NAlwJl}؜T &Zu$5c*$Lk4 JaKPb4vfOfpW½C~@c̥aMzbLL?0 iP+B `ĩQ9N?T!b*YO`V1gxB?nN@cU .k*L+)*fttr])'6TbpO!c;&saQrMvkRlJIlj7t͌o&U95>ͅ!l~)-]25_; >PTv#dgŖbR n|3!)詧nCB+7% !+2>Ia 0fPgw4!xL~[0MM фjvJī)VBo1 x&WFG x7ZX:mqDǢƆY Jܪ˃:}"@c{ϲı&6V7V}Sb!.)h0T+X ͉%s]pgDim#*!eO kF[-f=a=g3*X4X@Qkj JRh BHK PCO-]Zc̕a$@Ʉ5` Lhppu ͎s H$W %6i2~+,ӛXm1r6Np&’ m &ڥTT qWl%W'f0a.ҜZ dA eEڅ|E\aS(X:gĭ9sN 5 .0 kDm )$2"D`gNTD3-Wy0y=XgɃ?VAbo L:3.pe 7),BY5+&V 4}![m(ةdGQDW>`$t9kǶ*5d6 %٧VOc``+/Ռn J%*pBYSZ/k+l $Fq{0pì[ jE|lb}Ѥi"833(#n-Fviё1pNoD UD[0eP;,5Q(֟O0`eE9e l:9Z򃳽,. 
AmzTV& o dy@X8piL`(dV%P|1YZ#T2lx𘊐'z8;u*,B ^4N"iUy`!|Vbfi 3[w~4\P쩌9c+xm 7`= vI!euc @/fs 6MCQI9vL$`lk@)@`*2{y0Ӭ^eۢ!%DmI Xaq)Ctu *+f)aenVÌ޲FKgqyBdLgD0@d$e#k0q3r6L o0YAd&!SHf_42ʚAJQGD$qE *="8+ 1S7 |0j;b A0 'V+Łkq?aidHg,I& jh@dV B@*xVMeEV'-RopVc @5 WFKKH`f<rU7\viazwy#|: wMRaD&ɍ2j` .@u6yYpdli`6& ݗ}(dJvpu55ZST^DpY=p4h&f cfI毦g)fs(>!,`g]r%5n <XЁ!0e .@'Yb!` 4*'s+ 'IjZ; ƠpOr\IkmFEȗm=lnؠI@6ZRwãc+q,)r9$gg<° |١`E| BrᓐGgÐͫV0TmSX-s yB0$=96Zqr=H@JZ]DI|0g4u!‰!peR?tͽӴgη29ff,+!DueMF)aѦ)9*| O@fa m[5baZHWcX/GJʾO=`Ja90#FdPK[=zrsx,K 8H -0?x < bZY*E.5l:za&>].o{QHJr*2/߁*QvW8hw"aFVEazU_ p4 w+წ,K,vA2 _]=Ȕ]bfU"Q]\V{]o_?[:|(j00%`a%cgδIGsrKՋi0cR _&zbew^Gc(l<w#c [V8S2*%>^Z=I{NݬϵAf4|pn`ܩs7@߸ =T T3z,o`UbL!1@`E 9nHǭjd=!&&d!@B&2 L d!@B&2 L d!@B&2 L d!@B&2 L d!@B&2 L d!@B&2.`U*=&0 {)@&SdT*d!@B&2 L d!@B&2 L d!@B&2 L d!@B&2 L d!@B&2 L d!@B&2^&цCb@ 8&H RjLrL d!@B&2 L d!@B&2 L d!@B&2 L d!@B&2 L d!@B&2 L d!@B&Гe8 &P \b-bu(L"-c*Rr)28n2 L d!@B&2 L d!@B&2 L d!@B&2 L d!@B&2 L d!@B&2 L d!@_fj2zu:(uz}ZM%|)4$_:$Հ y0%V'.KOt}FrvO˰S7`=pY"=.{1P10m9!z3x"j׍ :P+kM.9疓,BEs5? o^?=Yb@hgM lN &g4Ԁ^utwn -t*XM"c4%p!!č(NhYG_deKY5tr= o :x?~YhVdk:̢n^tl45k+Z﵏GCƳM4%9ð?9O?jWh< P>(`3E\J%>Sej3Eʕ{1>t3"Ń9n>eT>L Jԅ(hP I@sOW VFWŹnBp!7ŷ|˥;wߛmٞOcnmx5ͼgn}658KM5>kC.ßٷ[ۼH`L>\LѰpj v9i^dN{DED *# 7C4.)5)pYrtMP< ,M(bΙJy$h#FۉԸ ^͠yMZ`@]H}~_n[ o#{$' uWO՜/T'VUb<>+ @$F+IY<; 50r,ls` ; N= i>,Gt"" RH!Y<5Ø(ƅhx(ʣ9jԯ `QĹ;7(n<ҞƮax0qt#}_:DQ4c-;;LQj՚wqohpʌviYb[*^A[VD#]0S0 b>^5ք*},i}*Km]2UeSIg'7~,~7B Q?m'Y}tJS NdNgr$3LI'-Aor+1 ˣ\E],* %3,qOI6G''QO.9m4_mJ~":aG6m:ߚ/$ʻoLr|THvI/[@t9,yZňtmr*Ѿ_k*񙖍̌o )P#NШ3SKR4:y#ZS*7J4cƪ0hMCv۲uا\W Oj2y_sOۥ'5('L6ydIѢd[.j'Eq*wUBPYh%˕J+ $&&&W|6qv!A`xXH79x8ܟjQL `ґ1ad! Y36fW͖Z[Mtޖ8wu=ߛP6݆aY+M`k*rؖ6y p@]I2,#-[-p/\{Axr6늱nFf1 $f{@P)1%>"kZs֑(?/4mcH*e]GNXy`TפxHDsW.QFd ]C%>[F?I Xǡ_^Pydc^?N-\44Y3tƝʔeME G5\oKf}ϳA:c-ޫi"7ljx׹NX#wHvtAN}#7s߿:a,!M5暄\yS5`YU7`1 =XzAjѣe)X+#:odSmEBMs ^p2'#rDĻvS~YlZ(P:봰`zg{ӛ_~7/N'/sB9y'o~|ߠ)D"r?=Q?_}ՂU57MբV:|zUMnwX;XYk]ɧ]oj ZM9%]?ϧCF0+kRijW&p) 4Ņ/qopkHXݢ8v:f+`ܡN]h ;s0`7;ic,<lL& I H2h˾m _Y  r~N]C9gl4>ExڂJ'eK7)+3(OġS#yhBiZL'°/}tc\l!d&IiY܄jej[Ӊ$_yϭeb[Qa4|eLmxRgI3*EVTG{ GG<*2q^ .ls,9W y͢F/o*@NPܷd]T%x)1%c#dH"-Ĺݫa4A?^V+{J4m,]PӲ_D9:He݄ h)c)NeY&@)H CvDêAxE61&`Ց z Q!'BIy cv?)ÌeRtYF8ڗrg׼nc`DdPXr[bE&N I),X,)(eFwmX43j~i L,La vlKnI=U%[VfJj XXK:МXkpR-Kփp1787 ƣBأ)ŕX ƣRڮR KN/A%l<\; B:c*!<Wy%04/jj׮_v P:΅2!fSS)  Vk\:Qk:P<;Ǎ4ᣱfih`W@5*TYy "$l*Eދ٢bEՎW%5ޱaVrҵ+ ^|*Wy7n2w eNiUqv4Vsd;2S,@D'UB]cTTL$IFNROSmv\5U/Ȏ3[?#tdcPuPa qNS J)p}S$QxL0ϣR 2 H%傩DU ʁ"@s¼8[ xޔčw< sLp>^Og֟{\_ߓm]Ml_oK~9GlL[O ׻$9O>4%4&{3(Idߓ&%uжE \'v.bVCdنΰMn=f7I3M̤"뚗~͍fm{úgϝv97 QBM2ڑimd4ϓ6S'!1JR%rXWyƭ:+ nR\B NzDĤ[.4!ѩ9Rox`]E#,Q2)&yJ^SMdzE@/NN%Z/67vJ4) /%UvfOPk`rY c <Ⱦb;U~@s=G/ce62րap9ʀwg0L] #`85wwUy7/ e_VE ߭z¬NvR w9{PsTyQҰmO2J{Ju JdU䥜, P)HRpe*W3W)JƿS>UaS)&cB&4(Q27LE{osJ~^^8›HQW>u<Wm/Ne)Z-,MD+KW)R?"iLB`] BDAmI/>Mbx띇}`9DYo{@ܚMNNzd]bXDRQ T \^FsL+C}ljh 28!s$̎3P`rE9"2Υ :Ij9mT%20EQg89""t_G=|xuO\ⵙ>tpZ7TkSPLE%CK|\lXr8j|mU֩0/-Pʆ@*m}ou25YwXNurN*QKP  1(#M"a֊I L } {d(K$F)R9,+*:e^ߵ!ɿ: Mg߆.8&b>&` ֪cW'.L@H%4Z QJVl*Ae*Odd8-i+cf\rca^kbBRUټPi)g>tu'ΛkE&Hw99kޥ]n {'\ǒ(*oQ@2#'*\{@DO$yL"CB4UM(THܥWAQdTFHS#uLֳ U,.!HFblFr\;b!X78Yo.'۳lOB[Fq7 *wy9p3xXjv슇0XRe!Uk~#7Z]7q '@ޏ cN6y0o f@M.6UI?G &߿y&$sj4ɹW?p=^ n2q|LZN+,ˤc.j(Y]C@YMkϾwwߵNA?4pՐזжSmW3,^{L݃;U -=>ׇ$fts ^af<4o0O&ff rp{\mg~t3~5G#.-{7D=;k>4QU*^ t'zfF+c; r40AߔKk4.pc!xT+F^٥B`q+m{i}fUf7tw](Zњ Yems 3yb!et}]YܺjZ,]Uc;% fzk~>G5ͮn..-V~lOgC{f:A6MϚi%ٷT x.]L} c3R]8C~bW.0vdOtw1*Ul8#{ݽݻ@:oB0" I)#@ ^IƼωS HBвKL0cg`ZȄw.-L%ʬbu5hubl xK=k[&zJCiUM=$hbx 1 ic*:H:]b &ɨ 歱2%byl=`Lxgs/u΀g $pTkƲJv.&g;\V Ѯ^s\9.)Ur^k, 4˅seu}phaF$BOt!`-TFz]e嵵K Gzr?e4g/u[`Evy3ɩ[Edvsl{ӈkꓑ'˺y3>K/ltKs,.GYմ5>1Wˑ S?Y_bI7ۯR{ȉiHS/ٻ}/?} 
(z>p5U|@j^ۓG)z4XǩjRڹpN"?4"mN#:DL)e@Q[&!JgbyI|:w|_]F[A# sF3m5=k фXgq9,]R*чX!ֆS)#D\=D \ei:\%HW \Y΍:&vh*&Gî lj)MWo'\1{xz\ճ=8z\'2ȟW_`Ia0A\xxڸm柳¼gHZ|-K@C)! G$ @Q)"f+|VS62}|yeuJprL*q}rzGs0?XN˽hs5[y[-߽jwO`IJ_w6v\}g̣u_sY&@- l]gφ?i1sѐerب l',x`q :1+X"61Dو¿M"]ۍ Yuڨ=L΀("-0_/dʮ"1f E1FSD"HdT촔2R&$̆bahE,4 zS1:+uBq)ȹ?u-MU$p*KtgǛs (福8~;j\܋*;ܾ3Amшq_ Ɨ?eZJ۸70cKm Ѭ3>*Mznv=%eRY*kHE:t)`tQk`M :rWMk,_VL]֟,!oo#~:lNVeLzm&.ڹI90眊JiP`hQBYƈUqD vxbb=n[F;E#T'AAxVZ9P$L$hl* J3<(& 1 D f0 xB[ Jeru&$ :X4RPtI9+ٟ-ڛbG?ן.ܾazO87&bDosl"yëτW*87"hFV*4QĤBK7objB-mxʮSKL/*[0 $ +gD"0TrE$Ȁ A#:lRi lm0V'#B$ .9neg܏("8)քvx}|IJh?54v?{Wyhg42Iu^_oj?K"uQ|Ѭ.]4sAJԼ!+swI$-O vۙ6׫}4YBI,qEEKa\ >Y b: JPN*iQ32.Jd*Pzb MI9om$%3rVVMbV3|)di@δ4RDJF4x9xPbܰ.Ez=B PIA j"׈8JR# YpNZwP3;;ig'] {ͺޕ@H"/I҅@v9X^.mBo^ Cu#kXSqh|~X~u[ۓB-veyiD$mG,ӥh"$(4<^`aL%t.)ƜJRλeCv~F Oh[3YW|·xhJK"hȮ# hkS[1fd-%荔]&ģ;cM^ltt!Jv,):AGuaM E/iG#W{r]45d 6y` wl2`tvBc dH (Zo5{]m:$a==&/j<>@2"o*1!ȐUB_)6'M`.,b:F6qTҎNIz3IB(wE|Ϸ7,4WI:K9JܶA(Eޱ1A2Ug *-C(02{' IɘW GF8,2YUBTĜ\21`C). ;wTK*dIX YCJ"#) %+D*p*leb!vԭ l|Ҙgmna?=sLYy8O-؞GتOƽ6k:#% V+Kԇ8N_^QMN}jcyNSBֻfNVبt)!R#ghD'b4f5xOLVP[%aH%Ugo1/ ۙWlrFZ:;RBO&W7#lu?N||nz<8I-9 n2 FvQČ6|f2y. .Z[N5#7cfY-aQ_ztf?sV/mU[urYK0)Rܶ:gi&!aY[|ګG7&/w2OS$~n4VoZCWMs{׫4tWiv%">jA\Y[R~9_Giȗ/f۠OtZ3'nAWlIh5K'U!z0`_V.iy%v 6Hu\`7pEGC"'& IKj}F]qyB},ɊVCfFi+$SP 3;2vLh>Lk ӒV&z' ])5O1/>['֣TֻMN6*^3HيzU:]nx牷e#[] \( 0؊QlQ8IK(j,3E`G`٭K(ߥZR*"B4"P*!h ޲1̎W&#mUs ppe,C J&^HSqPULjg *n&8RIEt/"ew%r(& M~rmָO]X|V^rZ.dm0`2  (SM&񕝌mdn '^w ں ڹ( _m>L1Gҁ7P`H`Z&Mho)5tOU;=uxd]Ơ: JiS` ֿ&&ew ÌK InhI= @Q80[ƀ*'MW=7rNm茜ʲ\EmR`N?qH۩Fnx}BIwVHq ڡst4c!v;=w4[&ܔH&~o(,-~6&f]io#ɑ+}SRއۘFgF#O#TIՆFVID&%R]ʌ^FdFYGHqdm_/6 l,L#d(nIp4P0E{XC$x<м #z!Uxd!R&R/5eDDL T0pHάggWVotH[qݳI!$۫^Gw6Q[+iůI_G]Gґ78?WE,wFXWDa"W* L'vE52)´h!^!ËaÏ ZEQ'_4L!KNUD0d)Ot)PqL%T8A<$ћ*O䤀`XZ6sDðq$30dg-4!!> o$HoKɻ( Ź)VgfrmR%ߕ,0|CRzZ`3VD}9< X*:H3r2[._#)=Н +`J>D3}i`K)RAUa3λ3m2Ϛ_.]W^/zaYj{l5! _@mUF4>=ֺʆz8pg 6X45Qa%AT\" tƹ1m7w1}70Qm4^.hIS o_K8%hisB[8\R8mW~C>I1~SP$N|qE_UDd.`,/6 Ǥ\ȳID 6TA5zV/+3%etq402s[yy0Ŭ-=4ms,L;P>d[)>잭hE)㯓vC%^,R{O˔a`_Yvpb\!QcX> :wHqt2n{-&ܳHI籽VjSސ0_M6_faE6+ 츶Yx ۏMB؀6ط`o[nbVvziu^QrGc .Y%T K=5#P S`uY;x}8vfna 4Ci™T>q6i2\,Oi=ۈwt<(>lcc+>wRwӟEF` p C)[<'8-ފB5j\7 \vIIC埒|8IgdoMуG7a'){էt9ŽWlOy7iگݤhFF0/j Y}. k7@t=%8SY NZxPs(m05»E3Mayfky,?MN1TH[56L6:/>6?ԊFd~޾TEN tmќT7q6{?hv*U$ ifNwN `\MEwUt3\ ;>=7;it &c:I!;kH)EԒ)E 0;^]H ?3٤^L$86,2*ojfC|c*kSa/_zͱ;T_~1tIy6N5Oe]0ɾiptp*s{u̖Ԗ9˱5\s#:ÕKig(#|~)g w6*"٥BB,ZZJL)0" NߖYӶ5+$ BIb P$ *ZSMw3>`b5 9dkv 3_祳7>$W9$ L4:$3$DqDRhdK)F@x8h"U9ln噼N8ToKN擼 9i,mg߼}# S*}K 572Cٞ,v0tm,ō$(7sF{F + a9h Aw*2)1DXO+%$rNLŃ\c[<ԙp˸{5 [vF }&JњI$Eh=Ť㭎3 RM6_5Y۶ٮjg95y.P +]\hmݘ1l3_q ":3 i1#ђ7wd-6\ ͛ym&`3jZP&RG.!Q#omǔ~{b16[PBVpBMQYἾK}#P&R/5eDDL 5<D3vqm5pHڞVձPʖzmvQ =V/xt٤(T0!J !"]2 bÊT^J3\2 rhWg9)ܮ,7, ݲ|}.`\dKP+E{xι ZRq+Khg729hY+oiMI^zߙ~odQ4ݗo&k_k߿ uUi-6%Sa\rFF:CoO>tCwgO≎}vޱKg/3KK r+3N\L,c05%a0쨢+&9NxQ>8^$G>|\ĨARh;LS>K';Fv?T>@K¶F郓(I#< Gh<;y\]lW3 ++_WډZ0yQK&R) gHmߵ2zrި'?+՗'L:jh7q6{?hv&ǕI5*R@>0w㴀X< T]p*ڕO'ٺ%R#+X1q4pX*IH1+ŅJq>J⒣Ѯ:\%)9+- * @\UC$eW؆]Na1essngwmmJyc`^v<,̼ŀ"92N8}lKn)K;@MuY޹2.W7ݧz<ߋL7??h854r1LH&lړ@BJ]꣛ fE*c_&<13_`ug~d#6O28 r~8S@:\uztJWnؘƗc5[6nke5}\%e3``H?=s ?rTM7g~5w:%`_>AvFY(V<3 x˒2yKV\%%/Mȇ7 gJt ª}(9IHX+%$$U\%Vi8uHXA }^'/K~6s6 V:?w+~vG?yP+VI{护4 '8J Nre}r8/yԻ qUK8S ezxY]bSK:]{w51Tft2_"3/&$46}]t\yD2Ow3ο/ÿ!So!/ M)1<- 0R DRN&k'9X;1$[ w'c"Kh_Ga|?tL#}dRz{z{ytd^ӡEav͌xM]n S|;[bA?vA+wN@kϙ. O}c9@!\E f=p@.(*Ӓ>-YS6e!0uJ) VS|e~6…4SHuȽ+&ԮR8[6=䩝>npc_+b˂n|o\!_˃B-dg5+ѥ(čKi)Q0i KX[-ib1Zf[4ŢYO&wP5Pj^yRZRUhhW"! 
zG(NQc#=ylO@6qTsRro vyq~R#le2_ґQ5Fl&vH-YP,b+U炷F^]W-jUZϠsA#AL?'gE]:g*RX5' BRj9+~HgE͂j\Ca yh`l|lt Klh@0BIVAJ9(.H$N<, J Mh_D!QD#Wٹ:ԃ֩V}@=Hqњ"= @Q s 11(d9+B,BT6&ٱF}BO|zUCHA\g8 jz5HhbbtULE"wL0t$PxE2DBi8,L49:}jy}7t'L}kmk[zc8W5ǸϯKVy/4Ҫ-u:,Ur-KԺ$+BRmh.͂zWr.1ro->%$L>yr(sKaҴFBFel&v44cO[(Ec[-<-|m"5@SGGI;\lٜN:% y0'BE)@B\قU'^5V2&Fd5ʺdu:; vCP- Oη>m%v@1Oj7ӎ}lkv`7iGiه/9HA!@DSwxkٌb ]<]<{J;L؃W?2]#1{f/] s>6x͡Ә_f5p{Ig1GbZKXġ^c=W[=sbHv4㰩yx)QekIKMX1c(Aii [(B3 [4_ZV2纴+Z]gGWYmr =Pε  ȥ UY& Ay&Z:e9B >Ֆ5ξ;0%vf2mGaCap|l \d@XŴ+E9Q-tKgҥ39t&I)I> eKDJJv^Ř3&L)B g(.JEjzLmwߧ}V`:_ft=_wݱãE Nt6T &1o*Mbè5lzs8->AqӜvH-wQؽMYϏw>6C -7h8 )q|۾4=ox]$`<u;]a|,|\,71onR|;[=,N +Fش"/&[L^F`Jq#Rt'DtPL >@(bIA8'e0 4JC}Y`A>^;1Mq5QcGbJI׾Pߚ*H *1y)  UVreէ-NɓUPO}2B%t6Pj~Zird9ςx)B-9g݋[\OڲcO9w?vLE$9dbꜴFrOXF d!d<sTwJyegdbs]vuZ8<ѩC5uL nGF ~":\ӪI*0ҵKdj@ZUAL)`\X]BIj: PA{-)÷kW4l7sDCǛ"qȉiIRȖmC. [ˣ₨$O_oW{gDؤ6>b?k@]7vuѼ!4^oVt_g`x7XP feAgqjH_ n޶T^XhÞ/:Ҳ;zp2UJ2 Bh>ϋ_PVǫZJ`^p*k/BMVnet;ǟ+o9I9g6u*BiN".*{ \XS%wmvşk;_ﳵ~Mz^g??WXFz-]?c[܄a鑘)(-oO.dbnR`z24s"RHz K QKTÁ<8o85؁<1pFBL 9ƒ噅>UDAm ! _?IGEDL䘸A FrVE[ZCݙ]> ;|ߣ_rJ˛AX f0qm#7_ǝoTǻ `f`vtx8x A9^IlZsF*`*gZ,Ť8BKٜx~ržPT~:(}Q \Q)@@q)mO6\Id D t2M%w%ZXmKB#pNIl. 9%4M4g;6$z.r]H5H׉њ[rЅJfmeͼ[lY/V&s;CGz&Mh@E"TZq͝[NA%m|\Of.{Zmn]kmH_e69H~0^'{.q Z">l+W=C")JJ=!3zZ+N g(OQP# 1碲ɓ `D%NT0A1N#Eq,@e5aHet?Q!$ &me9k嬒?I?^lcT@4 :Si Y!<)JhQN_ z_HJRnFi@0k#8D\|:)a: ~'u~\x z7|h20WSh.UF* ) ڍ( w(w㑊c jccպ'ቕ ;.n}qL*nC۶Gd7JIvI/D[@t9,y/ؙ#"Hte.߄=>kE+|ek 9:8T(Empx yd:jued';hC+e_ct:Ά5IY5rwv'ҍ''MBBlL&L0C[O6irNW?Kz_RNY)4 %@K JPZ (8vrH0 {tj"P\>@Y|3?iEe]GNXB a8Jb^\h,ce蝗LwԮ =&>;F?W7)ϪFC??})B剱RT0П[rL&cU uIME s,k|o_(̸WݠKs>%X4DNtrx0NɅVSNx Ws2:-gFpy+ w@R 'W)^8.3`Gsx6.B,\1U=׮R%Q‘}|=E/_.6c. 9O|rIإv<&\~DFbfy3*\k}mutr9f%FQ\OA..v1K"dOIUͥ̇WM== Mݰԥv*P # ̣-pпizw՛l{9%6^wy} KdG!?z:F핊?'koilC"/ۏοwoo_2s/o߽}VO8T&؛_#AOw-h[]cM܈]mlwW]#>ja+斵 ϊ^wqȧ`4ˠO^tZz5 7Eld_5{"QQ[TijX 1 6Ņϫq/"q+[}U,[t[1[S0<;C2lV0j T`w|7Ix8 $be$pen ֏/Hu;uIS圱 hl(Q[6 Ix@99uOimZN:|eJ%:%x'y'O$x$8E>)Y#p)dTj ) -BuTʭ>K,5˷z|+neɷZĄ#O7\o)&`3PF(2./lZ?S= ,ut%2] '_'8RI()!W y͢FDKQPʷ{U F'( Ds.*\[~"%)C$ 8$F: Z#~׸W~/i^WzY.BN(i"}ۘUӼ^Sd"osһ A:I9Iq:(@ AFAxDF61IKZIACU؍ @j 9&7L[+C$KAtIF8SRh:ּ]PLVhr[bE&N I),ceux#u]F3W\{5ug Ja1huPa qNS N!pD)|L( `G7A0-d85>K=6 ȏד@^Gw6[ʼm4{1-$Q%ɕ4r`0KҘ<$lȣ%Ä F8Fȼ||gtQ {õ\4u@*8$ &2#N+ RZ@hU1P-WIАHZs&Q89j0P<՞Jg kY1F!41i{DL|SaH>\V i}3Exu6CעG/\HHQ+QjҒkKXZZ:t9 yu3M.cz6N|iGGP6qsG/ H*>0<'Q94O+'u/Sr:g8}fK=JNhsDJLZ>h:>FTծ_AOѵWwՄc n?54iʌ***dJW^ \&M&pu=fbmDn As%m"wȬu=: NdP*yV5(ބ0yn k^o2㚅Y?> sԅGY}>0wpEuVV5 PŬH:b<*5seg?gy JΐK}7&w^|45r_ά˃Mv=H"q@se ,3bX,g#mfñqZEBN7V$T!Ktx7|S7$(aB0ْ}az2ҏ ۔AV iʯe`.L̵\NB,W3? wQ[u8jͩQ),墢L^ ݨ+9L 9Meşg[eBI/dP zp&{$ttuW涵]|XD J74zY\_E;TMD5wz3Z7?Pԑi}rQ@mo[GNO f8I ?Y38/ԣoZ/ܰm L}JPw \i#$sl8LFf= Nx-1*b D  1+r2j\\FvmrFjL0OHe22:p@r)+xj#o]Uz"qKsC1'[]ȕN=;fnWL]|l%sǽVէ[ k칋q/,C.`R~q JhH9pVTK10!:W VGTke\rj-#ͿSs{j]V+>4A5wuKsv ,EkڏuZ.vT|A1v6~DZ8)o WZ{ozn6x?~lXȴ{KAlPGNw{3~yb{?jydy뒡]zvHyF$B 'ƥlbE-1!w$Ovkeʽ}*ɪHef2u DHkFe`{Cɝ.8aŢ=7 o%=Ra'} H'O#Z9FšeMҺ6axo!SH{a e(L CF&29 }U<җfd3*9g1YT Ygb..j ckTީ}e7Irzq[.~X ݬKų^-f )t贎uZNi;cu촎-ёCu)ֱ:vZNi{!i;cu송:vZέi;cu촎ֱ[Ni;cu촎ֱ:vZNi;cu촎ֱ:vZNi;cu촎ֱ:vZNi;cu촎ֱ:vZNi;cu촎q2 }&4+`:}Ed *RqH!D$6.'X=5{, ۩/[ɔM6B}Qق(DDaSڂ@r @TITd6Azlږd=)E!OA 03:;] jBunZmhQio׎Nq9eN} ~Cs|v*NߧHSwz˛oT'f 4A%RySZ - Bat!{P_ȖGxkn3|F:@(5Xe "1P@]( ȕң *mU(Q9 X vIZ'$"HOl/yk)jgͺz>f+-Zk?.ms 'MfZG@J0F,r4@#4th|-4[G3hpր HBJm֬>yB4PJ}dĕ! 
.VA-<=뤾NZ+~oHI#(ȓ@l9Z.-o^J y#kZ}n8v4{L&vSOMӭFLӥ&`$(4<^`ՙ 2KBiŜ ꡜ4#B5K/z/LDvU>XV[&TbQCvE F>RN䍔!Bp_c <~/LBvՎX; z6+ƣ4!ȗ %(@Yon @D ;Y$`G 6l:d,>*ɓ9eJ?=YėK=gbʇ~w'밳DF"_3O{z~^1#3+2IlEG=gKYG=1bE4˓?-;XT%s7i%]̙|?REGF:mXҮšF|1YOcQU:;Y <"-$'6{5qꔸN ;<4Nyf]w/lO&KoO[ƭEkd)^QU 䠝!(Zo5=^Y0^r4'G\?;?24} !S[iB6:&ϡ!ʘUBE_)ʢ-!q3&TKD"pJ;8SѧZKf&8^D֭ eq/r'OoCP6C u+s"`d *,x#I&U! 7~AGgsFW̺JCQ6tsFt# 5m0xX#("Ujܧ(Phv>(VTM7&%c^-Bh}V) JuNEФR\?gإsMM(%aCRVBH)tTVV&'JH I8 9dVvJn/mHcOe}6tc >0>R&lz 譢B'ĶdDztzgza,w`~faynf*DR$l2tswj|2K汴 Nr|`_GǕdǂՊLq4%Q}"ӧ_,K>+yns7:󣘜?xxS\h){5D˫':8b<Xcg_r4}v1*| "gc-ol=G4{Ԓܒ67nS3rs3 \T+/ΎG#XGdz@?9}7=تzMn|"[״|ZiHmذ [zI;/XMm7,[l:[[?/_՟/y7߽| ҿ~_| :8%`Swr{tZ^Tn4jZyMӺS/|vmG c?NvuT ^M~8Yv:ZA|1nV=s38˓ wboh:^9 ˣ v ƂX0vP<Mc,:I>UE0Yx W+W>d ZQe\`1B6 5Q*VcTD싺Y}jU_ &L50 >싊 X])k~@l [3&TMJh&_Iz.[Q_|(蕇A7`(^xZdW 7ol%7(IvТ!2m % 8HXa#U,{Tej,"&( -"a Vh˳y) -/܌)4}(JQ@ᔓb2P.f%y/ g^[#7ƠYw})i\ Ӂ0 (֚0Eam V%RP )`!JA]u5e4%וׇ YĈ2Es3g#>^uUq3ID|B%bI%t:"Q^zWb,ARdp19{^k]X|Vrx$]Rh h)XZR0#]HNh_#ZA-;n@[k/Π|$w j~=%vy'xo T? [-z*+a:ExQtRxJV@.h!&,]\pM-՟^RD|. $B޸\B9R([`t,lZ[fb_Vif IƎPBc[-|P[Q}ӳ7Tc_|xz4< xtth:?-ޱ.jԈ"3fU0P+!&)-dymF( 7Mu37TMeZN E@:؂T:b7;%XfұՖmn;MS!jAE,Hg>@t^S2Tb+dF PQ:I$=ں]EL,bͺs=uod}cW-bI|r!qҁC%(*@(Qg2REU$xHAc5} /ymLF+n:,=v&풭gxx%-SeͺsEU'a\l&%E.n]U)i?䌾l2&UL㽶Jb.>]{J:v=1Lغ()ȭj6ކA^я9^#$>%hW_zd(=ѤqC)_Wl3`2mwN^ml}۽ʽ{{0=e1[@]dC0jCGTZpK9oL ^~Ր!] m &w(u(uuS. +O>yJtd>g#ʣlq|J) R37.bQjǂ;q(iM6 JV7yD]ioKv+ }{R#ȼq0xL^L VD$%/ RluWWNS}Y8SIg?,VR>hCP2&\\rKbFh$zM3H2CBG$F豄R o 6)sho#^5G"k;0h9sZYv@i>~ΊI,nR[Ca>*!6hl^빨u>7㫲䋞+kg_#R(76-H2 ;rP{"VQz<iF`1VYEBkM@[( F刊`) NP*:Դggc_ W@"8. ;ձUf9EiWt@2qh鑛0HyQ^J€^wpO#h1 'w'e&Tg3}1~I`0E1EջoMZ+mm͜1B2(Hޒ`C(YΉ3Q󒲝R*P؋Å9.` l{VhUt-/[<_jj75 :>YYyucXTnV3aT9 oY 2w:͠7 fc}p?Yq??05~6x3ͩ ݍ]+mssVy[un2L@=Q~(;*1)gGQ5Qp"]jlOs&`rX2L{RD-WSPc^QXOYGaw׽8 aύ&y -mV iK*/wHH#H 4;F'H2x+TVH=QൕCm#&JulYa1?3@جkPpI3#]ombAvK*Դݠ cx$V`s oӂoAu!(i~N~܇~γRWծueG+e`=?nd<>˶exd$3#u?V,g'γ8{ҐOT pK$FtKσ 'өWʠ/<*fKCS% hQ&^If φu4hWuXbtf詄&sawW(0~}3oa]&,h5Ҫvڸ\f2%cD= b1Qg#vmB6&5侟&B(2k6臥yWGPRi+9]1:d1 5Qx-6E14X;lX,B_u~՚Bŋ**? μTw'<)Y̙C:<A+i5(2ù&;))V'-;q9=,NZCR| ;ܡZPIZW 0#5pĥ-p*I)qWo(Lx{*+q[*IHIgNX:z;pgZ@`CA\EZ]=d`Itp#e#ME{ X@ٯG W M•Zuvc*[WI\.WICJJ:2JUXUWJuwJRR+V@\[sv;\%)ue-•&6iW 5p9҄c&@JWbá(vU2dJJqDWz]ybaS, ݢݚ")^;e0ԕ]wCЉ߼&d|ƌ`s.`9w7qvҮYMOjah/9v3|%gS@ uAR+R3Ey;cw1|XH2`Z7v{BΕeNFql )0?J!۠LșAwM7}Ng*k4E?wnYA<׆! <2ȴrR@QR9ͥ$( 5*CzNHm 2G1yq>nzvuǻq]nuQ{PS??^. 5QE,w*QDa"WpSF1i#$^#"L;"\~^J)XL!YaA`hñSH 2RX\X]~FDŽ9[ " 4ywA2enqmv@)YK2^T/XѨÒ0'xa!z)XcN3V*.iu,ՁÎc¸9O33|=KC׆v9$LRs4?,첃A!gT1YkIJL ^Aևe&4ˬ:EYE}`/~0Q77ku>1hUoJS61wO.lZ2^,ziş0 ??>6J2}b2}7Y35j"TrwM>kbu9쯥f! 
YP^?J5AjL{è&jTTE5U^XTie'!8Nf{2S\s{z) u2w/)ۋBhRca6P-,Ԥ~^&E K#"Y5YOoxJqRkrJaO^UYTuϖ+͐~p&mMf(uZp WA˓rqODF@/@/Q~KGٍ͍lQ3;osR,£Ǫ+3Tn݁_dEmqۖF?"J1yR } Oh)W- 3Z21E[B&&i)?Iʙ]ķ23lD:Dri; IR3g pDPJ F/z" ,|{CyœL Ɂ+c8DOBH%,F+9؄Xrt/c ƚ89tTC*K|AY1я;nވ.iK֛:/^.ݙ/%JHs(7Xn'EKf)NY)HpE, 0䍖NrcF ({?:w q&C2_`?/x>D`"hgQ2 r>I#.+MVHM=7<]@ZykKeiOUλPjiB%jZY 5oxzނrgf"H&y`I1c)%DDu"AĤIh ^Mr]C3@+,Mjmq#8K(Iz4R ܽ}}< UjM]CaݫƄ ̫E,dK2lRKm!ȹ96\KG4#HI2IBD@)%2"GoD,gQFdVzLH-l/Hu µاz-·gIXßsȁ֪-H GilV(ɧvkf"2 Vꝕ8Qy\{v3:F2I`QoAezykdʾ)u%</Gg5Rrlatq4<=4s0x+GG0xd`Y5F5&dO ҂8yMJדo .-r/gU%Y}kӳ+=CGϸ<(s 3GBy&Sl{2!8VR(vp剖W0#ن~Yyݞxy>aa9&Ryqq4C;k o?rb tZmWʢӥ\T`ƲiV_!IOA\FB尡ijU#q V0coMcov感ZsVA~v^C%KN9Ed.[hw\Ѱ\Qnْ+ fa"|=vtdMh3Mʜkr^K JNKN|nsst%csV_թ|j<+V]{eVOo܃[]Ƅ w3'l#FO8K0(O2+:T%v "33pTY97 +%ܴmuy  "ː,2+&O@c,'D摦30iIʠ9w{u6 EG7+ V`d<Lg=/e0D we݄ tGR),Z$dU1L[Kv1-z1օ]ĤNRݕ~01'g|։[@W=&UWu#!r%[!oIX2^{ Q's2Xt֜D,::Xdl> d_zo ZGpӄSB[b؇ =6jTYc!kۀbw_f8~45M8=8[#Y{7b0c@I5rUӓٺR宗S*)!!(z56 I01lXL2,3Yl$&UTT̂(ܐdY)GT saNZbeAFUޕI͔`)0PmZ|rh=בz\;Ew{wyhg:ygNmsيwjyaw\+֣I2xcUbIXVA0TJ,IAUh\ӞKYrAN۔!1 7$\+II֌c]V qƁPʺz]pMQyyA;/N|.fq>&F+، l,d*F9ЀE}A)2 4 Vn&t2X=hEmJcQd$ɷcP LYƮF;Li֮jmQWk k`Pܡ75x9@f"1T.mS2ì18yȄ qKՙ,ES>rC:HN5Z_o5rևS_vÄE1vE#VC5e{xh+,y\sӘu搽Jm)eY D!8c>oZIX,K=RqєFBiȓJQE7sFUҋf)\]uVCu{x+#,#)EUI&vJx"p9uvN)DZ^/>^>CLf|VDhG lܫVn;"# rFf=g?>Q#7W/% 1M0ΎW{ʺtR6(WHd5!4YhdU^#u %rG%g>B wf2-O qwb%q d,BO1$\)Iesk^V\mjPzm/ѧQjzAiE sOh0mVw]~n>0|rmUMޣۮyE'VWv q&ҧQe FikdCm1-˔ dTW^֣ P 3ynjkz)|c86N=]A䐁OV:%䊐h.%-ҘT;mb B3j4[))%^Dj*-^(E (*G㪑sw4ttҥ.PD?n Ajiz_|iq܂r.jw;B7.>}(L|C?f $aVy`^ְG1s_*aĘm0V%zdwOhߑ9y{ePϹbx:[m<#k|e2' hY+99`LSyLP٢fųk?{si*tG?>0R-M&BӧzϮRB(8kA):(LOŲ1NbтUIQ Y)^wӖP"d.*a4.$Y&ri0 0IM$m39{֒.{'&}|6~\qۇme6Bm4Ɵ/X/ye!G\e=f˓”*fQ*de F'Z& XU u.x(.jIܣ$AG<6<묉ITlY< 1KCDūD̖{.C'yMAS gbx%tn :G԰Fh{ s! c%W̹ǟ>4ސaRL :k⹧p%"nCp%hI&5.cL!ୣVe97-1%;Wub:fwT=߅@n64-ȣ+Dh{ե{3B76-F4UKi2+zdG25ÝޛUX`Eֽ/j~*A{\+}>-n~j>s~+s^6qdP8e%)*N=r: gU/o'ho"酹83>?<;LBLklLky̰ q/iLa(Z:H1); UH(}K.WK#s: Sb̤ i'Z94h&S Q!fAR2Jnf)(s,Bzfv2zH 8NůȬ9d9&ޫ&8 t$@x-\r{L/ޔk&u+-IAJβ$霙,KZEW⛏0ۼ'fe⽾> -\'+∟w[ +(S35 j:L \.L ZRƖ*ӂR^Ce*_sp]1WV;֪7W\)LAv\!\U; jKn7W\iЯAV3WXv1%U\ {s͕됹*5댹*p銹BwUP^ Gse0 :dn3 Bw|WVֳ+Dij7W]7WzϡWSg#TJϧ2B7ԞVOAg-̟tra@Ggo4{;P_yUR@i<}vK-1X]LW_U> %23{3}[nj_w]հ&n2b'm|_ ;jR8m0&#wOjgn>Yo/Z`ko}(wL,ʰ)VNP i0{|ͣf54ot!43UySFY2(1{%[ >97;LQCˋ; +hj;+(;\ Va3sUJvsUP^Gs%钹*e;e+檠vsUPJޛwh@Y]2WX*sUΰ+DhUAzs͕HdytYBgUA{nz*(M/ߣRZ蒹*錹BҮ\7W\iC9+,쌹*p芹*hA\J+cX!s*p_Uld ZnЛwhQ\+,*pyg Zh}UAY?'7W7Wfϡ7WEnkvj>2+Kkv63\Z^ȃJvXC[*ff衷8QsN}!)(@LJv˷k%yc)7hNxO9|F-Yd8]>%` w2YN4/ÝyW/Fv"kI@՘2]mF_ZŸ?DB棠<3OBT<4gj,{󒟀Y&EY7Dd<Ӊ,TDYne",Aw\ bxm7W%zJX#ZpAAΘWꮘֈ+D));4W $UOatSWINn<)/V?$?/KI-"xyГ ?2x 3N&@D2ϿSI4trͮNh/OL1 8_һ~zhL>~|Q=ajw7ux5CN$4#@'gANAL+8 o)9kO_>>às[3|y6װD)X5S`[mQOۻ&m?\[&]fs<(a٘6Oop]\OwdڥhpAῙp#Xh9(#1VmeY{WTwja6 2x1ZȑJژTp2^ͷ@fyMZg{˲ϲr~fx >8/Ođ ac{dZ1d1jk]GKɏ4}"lQ'v|U|޿㴪;?An64-24UZ aߐKZߵ K,]&۫NvT,JYU% $ٺۺEOwW>'{\+}>G--nω}6fDs*6~yҬmbGk9D9DTJl20Tx+>E "mp&61[OϠ}D ;|%̅$v~KS OV3|ث5jfHey.r4k]HoVwcz O;WYࣱ&' QQE QJtLyf/U-xF'hbBWSaND{>ݦbe}<np0re;B,Q")^ N ~rpmX`-m;‰O燇q+_5l6Ы5 zLBL z6|qZf؄4&0MtXqcR,wӫ@='D=F'ܲwbdr.R2B'aQR#0 R{/E*0% Il !]֥H^gmLe <Ƙr4Gzn=&mm8|->Lo;-rsod@+塞{s!꣌7L}p;E—`l@ZrM9 /%KpЛr8X55Ӥ.\R5)Z)CUYb53E4`)SRXkȱw35fz9ހis=!nYmlͻcޅdT2ho&jl#70N sZ Rr  F/EL+۶PȣzLGa9V&GfrK!&*yH3)83.D#B9YmJ+B~yk#@pKK^Ѽ=CGW[r%9sZI%[׶o.Epgةg>q".p!q}<(Bd΀`{:g(/O9f'ke>:^ K%r&nL>51Mw{H4O_Pd)/:l|J>[ɖSF?@I'N*sJ]YA + *%%TAdW(^2YtlrpCMGw┹NmvY&5,c4@vo&=ɒ4S+M,Ľ\EU"N `fY'C܀6`J> 1/XqؠU J_:kglC/cwU"H 3AR_t+)ەcYy嫴4$5:% qMu9\QUa2hA*jF ]s6(hEL})3ʭ.]j(}ͣ53)F&StXTY fShIfS:AZ!rSПe!1&M"k:cJ^vLg%tyȌKIk^?-mCG˼ 49*H1Ze's = tI3-㿅FsD$cJ{$\p{,9 +H[ϭ0VgVvRo'-]+P 8 OQX$0$.MN-@8#ɍW(㎌c{Ѣbu'.u뽎+1Bt^*6b:;:BriA 
MōdI/IR1gRˣz1ݡG`;`U2FOw:Cz[C`xi!#ЫRbpZۄX2=<#2VPGgrWSNгi% QnR͛,/^jtc\H6 {8i>cIZVqcG(FBv>K.*eYe;M m-D^RYZ8E*dr'[!Ys,2wռb=m#Èk&xD[Ac\2RaDD$:q"7$4 ӃEMK B|%Mj6[(RY2F"H](*ssX#T #lB 7=^ mL;&鼊Y,qIG=e!Eu*rn gccmIEV1mT ?_2*@(Ǹ5:i吔KHw}dCR<$D&.'J-1F#r#e9ӎʈJB| }B-/)\ DP5NJυ=n>g8[dϕ'rjf 7y / 9LOi߾,^_͟W.S ޏWLO2u0dNشܛR{ܑ9PSLҥ;e{GHe܃OtFld}5v!(­}ȼ Ff!sZa8WVUi_͋^\? amq^ܦ ~ubfOh3h2?HV| å*U㍲ 34͋>ߚ7fެ*gW1\%0'i8ãJ\G!Yx-GcFRGRzH'v00h\(ZEGBOcO&הX;`׏NrӨ9Q?Rք>?'t}6%#`ܤŚNu‰j:׃8<@߽-?߿>ǏwC.᛿9|5I 0f@ v?=2yjhZ:ЪY|qMS01KXø5Iy0Q~8y{ ZClrD`9Q+6Isy8y5T?\rIY# nj&rQ.w!Z.q,aF?wC)[t'r4,m=mM  f"ă0KdJqhyޮ]Y }`5%%u`vLbe ۜS]vX\Ĝv:9٩p+ti%ri^0=M ?P R;𼣨&9)[VR)ijUh@%'j;dv.ۡ:OTAS 0fTY9o V$K?r<'BMAm5ȲO8J[JyHt i=Q?cGZ"LwuJ_k^|Gc3w ڃU`d<X@m*I[,D$y9(9!QLHE]UAzJJSELRhCR%Bd"DJ8\lCfvP/h.D2 "8aWΕ#t01' Lu;ծ+foQ\ypR**h%}ѪT U>v=vhmXR*b*0Z"_bZRi@ML<H8L<e@/ 2E3ml\to@glQrnP)N k9W{C7O7\}ϓj}ޮ8±׎H;".캯>8VXccG:PYdr@T*+$Nr =LO5U۹_{-,-/0ŐW1 8F:#"faÐ2P:Qv[T(d2FIJgYrBz\x#@ 9UwwΖIXNƟkK]-}z}m0%|,w[ub[$~-6}:'y+ysX1kVSǺ"ڹrK``M0E=qtUYH X,$JP'JBQkD8ml(E!CglB );;[ҏSDQw{yB!|k&5a>U9 Nrƣ1'kTMVc/GK!Bu `KTPʨO7g<}H=zzq,_!nBJB(cV[%NeTGab D:=uqՐY8s|TOl%(kd`1VL}J$栝DQE; !jf?P2tPRfaY[ّ/*)jrɕ_ `̹`?D: O- 츎| PU5&5P6f3<9}3vDˆY )q_o"GKxTDnfoZJ6,שg/-dmd\{2\0dv9QS ޖeS&zp4(5'[YnQ fKJ`F $eFtNIh.pug\]"&Xpo_|^+˃90ZF#6m˱2p={,˟]ܹ]f?}6?a8`b\2%Mr5yYɻXwZP 1me竫@1W#7E=~Mc&0x|W?s!K8&r9-|JL?oݴFKy%LA!@475NFf"dq] PᨽNanFojXV{/I" iE1ZS}>`ŭq[7ʊ!_s{fꙙ\+9Iݿ^4㱛R2B;e͍*JD0$xH;zO)jFI{X:qOw+"H/"c&9剛1yu ibmHсvh0r_Et23N^Jn)LOļNw; wWl昕-p5T.F_'7&e4&uX5X u0x<h^z^?_hr@&^BOYңd<9>8 XlWqڦA^N~%9Έ.j Ye@XtMgȾx\},=/҃(CnΛ^m#<ߚE^,w-4r>Rtc1_G_ՊƠͯ9o;@QAs LߙMFӆo~qx*ΚŃ#nm6ga7Wcx IC8D)ZSiNkRy ;ڮ'EpeEnC?ڂeI+W=(-LQCοSOE!NbUA +#)R猢.Z^etSNRc#6nR}% }&uuG<56M~Ś|1 rʝf?͕s?ymŻˏbêI'ƓOAT'4l9|HsK j-r\4#`=FG(D ͥ(?no?\\Ifvge/ w~J+5z؃B5T_'&kkMI!!RYC֍)\ ˴g}f/8 tۡ*l/Ko$<a^OEgT@v81q0m.opei36h_rG ދto@ ;x|.=B5GyM~~|v(.QL4rIkbeSs') w}Z'5.c6ExY:UKvωwDBYge4#JT>Js݁B=]h"QDଵ`,CyGdBw=&o ˞įlBwu=/珡 /b^_:y SAïJ~^re ^4Qth=OID76Sxν<)%;UG8e+1)Em)zuDQ:GÎL}Z .dHч.e"Rr, | ]\Az`s09[Ud #YȠloc{9=R";9]O87玙9""&b$'&: (q(WJ_+R!f I J%e1ơ%\-e9]>էlNcGSkCřc*&o%˓GO)迩߆45 "_Ry0 LR"jJ",V 2r DR)]3 %TBVy~T$ 'G2dK!orҠ["_RCq09q8_e0f{a`~ 30c?}{x毝eۇ+qA/?oqb P)с5̜d'c-koiY/X}:z5ݏg?l1(>_ߦg_WSRܢ~v,n #ݚ3WSzſH?_e*y?jj|ӓ@"%Q9ܐe[ŽzkTj߷G{?_ UoWV,kL_F[Zs}f'6z~wi?e.w^ߟ^/дN;xu;ϺP=m^}\n~?{I}ZP@`3!"ɱu)š9#"?L J7*VpjPcUUrJRϽ@aP\R4*63jJh&\!8$E fpr4 BUL:F\))_bmpmWU5*1z着 W/+]W VN5+ UUK8v\Uz1HlW,Xv*WC+j+Vi`Z |v[vL3b\UʎWU2#5, &hWUlf쪪գUUioW$kz'HV+v?#NɞQHBM;v+Ń(xBt7NvƎjavTInT&=Nޫ靤Ŧϻ147x(]_kr_=0j-;TaX1wJFP_k}y \>B4WD' y;SP8KHdԲԲ۲:䪑]2 b]I:guZ*;$(r `'q Y.UHQک~Kx<.!*5+,U+`Z=z\UkM:\IP]`)T3r[UUK0v\UL:B\լ4U.5O}:ɻ:J\)MVpU \[u}qUUpu<"4vU[ X WhƎrZ}q2@pł \ٌwU՚/* #!p!\` WAV+aUU9G+gij VપudZlPwa(= SfO Osݜjr[zCQSs@*Iҷ4L޻TΎ.Jʹdv2HµvNzdv|(ېǂQfܿ*Q+U '?VX0(jWUપuVY%,t@?4n7JfpUiWU㪪41⊜V6+ Nֹ^+VBWU% WG+,iW,\U\Un'\!@-U;+-vpU{WUWV(*Yv[UUqUUiq% +ʥf2Tf J7nx",~}73;=xäQ+pw+A F;odP3S)rQ;ۡ"JNrqw_.vP봸V:.+#4g{3 %W@f\aCwù¬V+\Uj;G # "X0{E CmM WG+6*XfpUnWUrcUUiĄc' qxX.fpj轫MQQIǓ_*Vkg+|7xv~^K$ZSE`M|pӫOKNmy~Fk.+Z\Uղ_nRw_^Mp쒾???Ƈ|Wxa+>_]n~syƲh&huhK$ -W>|pwV>|c5]zu}nV/v{|٢43 mK_Ǘx੺Ě=V,GTGArsGl4~Vܣnρ|R>ҘBhϟg﹠댼`i{/ܺf~2^uЉXv}?o=*}\HDeKme "dRCԪhAGO֚z煾oOnփ>>S{ߜ~r[S/^-PEʿ%xB8(:9ʶúɈ༒ P8GAJZsuq| YBQJge>1I,/L@ň}v`7U6(X+{?CE\Dbd !o<"r\fO\!Q-9 -!՚}lh \Ή[I1e@.Fr, | IT&ިJ$jRmR)Ԧ( %,*dP2'!2BOiI6wwfnƂ}̢v\72D,%(RٺBdt} h`c&G{/VS崡Ȑ tp5cC#M]]"$dk!#P0{6E~DֆW)S)֔d4Y(Q\yX+  uQN8|suc_da0*%-QaXV.xmΒ7'D2>kF P{U#r(1s1n"Q=$nmQh͐+G :}<TNggIf8?\׈)QMj SC:D?R dPH "l*Q9C2$'m 4AQ(-duH(YPZ i z/gZ{:_!~hyA )@/KyBdѱ$Ǿ](شE΄;lS"Y{sfx(.խ` :`=+ )D!; D{j%x!,;v4#H2P4@ZE lE'X3rVb{UPQAm> h-͡]Ǐ:mV(k(QLnByz]ė ڢY cMm̭.tt%o{eR\Z:Yec]Q?s%B`T^# L=u#)7fa=gAbUhU%`}5 eo LI , RlG?5mB7BJJJ#L2hveBC[w 
d> A2&d]g(M DBΦVRȠuJ+j+1w\ 7CA a ȸA+Z|k('X !@YP6{iFn3%!2֜An9 q{`&b1S|bDJpPgSO vrP`I. pqT AP{ TtgB@Q"Ł.0͑㶂EUPf=kyGD)J6NҿjA!NMFAꕶebEJ]VkD5ڽWQR@}JuV 1r 9 VDlZ"5ȲU"ZJ( 6A j"b C$CuH*мGwU+czh2&m236~" kBwv݋q)tǬ䤡bLbb󢐴t8!bEfCغ3񃩜]]\֜ҫSm-V]Z01ս.0AABZI|txḰKy@xc#T~:]t4CҕjIJ2Ba1%OpHv9ok=/W(39՜ HDNW2^i C.nk{P&>yt_V 5ȤuՅ:@vdm~1^,TGW>/AXżۆjbPb$Sa)F<;#{. O&!z*4ʘ}t6A@F4= R=^ /ѡB-10 zv5ԘVnPJ5K q[O κ$!H;V@@]BZx _36D[v&$˽e;VE̓ (D td!."(^Ib&#Et4$?<(Bi!7qVUp*Zl?gѝ4MUJPD[vTjj5]ZzVE*,e4{-w$a:J` ئQ}AoAKCj)M.o1ڹ{y^6O].N,|Φcڋy[s$Y2U P]!lJ''f= ]Z`#\;-,F ݚg)DzI(ΓDCkBm1&%Ѱ-*3bO*Vj7LJȀ-ɡmM1WdsC<ܠ=[us"NJdt  J`Q Q%fdi SRC|Eշ߬GŰ%6vł*'7;"($'Mm2X4knk!g0m&כDy!Q 2=Pʢ#6:XLnzށ U l?yĎsZjJ^-ooԿ V//M V @%Zr$0%1./ 7. ٸtƥ? ^CWa? ]?qPJtutB DW؍CWk0j?w"N0] ]Pj"ލBW6ƹPz"]Exhuq"qePLWӕcGN֏LWuqj7]R8/;Еf졧5wCo7K) u?/Pπ=Y|v7fդB֋oY^^^7087 OT8Q8XTjӟ{tuyzjX"׼|{/1) by ~{t!;byܼ֞Fwh)lL/y=a.^ü+o7o.ޝ/^Xy~33gFz}{d{3䙝4}2E)hQٷp_8|_^o[?lOTd[8q>$N3@E'Wdri2)0LcHqnKp}ErPJDzeJtn*5 ])څ͝2LWHWZ[@ta9 ]jtE(g:@2$"q\QWK?{8]tutei8]`o+ _5<s+Biv1idRtSLOtz(cst`[7$$Qnh$$, R*ta"^BW6̞)pJ{HtލCW7 sM{;]sBW@teXi9 ]j":̝bzgpP:1"ΝN<`*t傘d;}npC&NWs1xt僶~bg rt{1]]\+0tQS ZNWgu?pn6s"z eЕcax43>#ҝH{r`ڡQ `%}G^oZH쉨݁-l,ߜyR3kg4diYm94pZ'y8G<#^$da fi-Ղ@WsMvR6)\i\BO!FuKҢU)Z36Z("[в;{8< n|p7n(M!;8=Vō}dF[ 2'ZHulJ+ᷚvhwQq\KjxRk9Bfk'xN?9a*0!L+['t*$^Uo 7v}%eyqώ3˄NONq^ y}+ g%(>Yk7u.M/˲"4"[`:;{޲_}G\?jt[_='}J4s/Lg|p|7piH x^A?Pu\flW;x .Tۛ'jzcń8ŋ;i2C?{֍쿊^אEnڽ-m  >Yr%9 ~GKd+-'J;:7!G蘝(*jJl*'ٛIr<-1?|$gyJ<] 'N4% |4WiIM %v`A"u8@lLݛLHʢN]LH;R1"hQ"bg&*k 0bZ^CX>{ D:乍?Jlҭo&̙Ufӧl77W4n ?P?K (݉?<QBzA뭕`/fKYoZ}_oTJ<~뭠ԏ;oup޾MznF%eR6u@*3LFQ;T%Dor^fnMZ㸠OGl!oK ~>mA,3}&kePU'qnRhpCT1Q)<Z,UcD.Q6j,es`{w w@߽b_s&\tH$(`+ P$L$ΫցRv*RQt— };.A`OQ*YmrL٠F V):#E^ޣ )61=;_WƳ6ص۷JlEffWA>^E4 :ǫ,J~nfTS4RFrV*4QĤ%:1n|-}xޏ.Bl,*%HVV>;I$-K }:HngIAΎ $ ,OYUY%B֗(.Z [TpJYT΂(ATzNcd*t @(,i#Bgm$%WYg(g\Ob{5m0zw]Dr;TuETUBH%R2xMfC憃t)+3? P>)B5N{ RRdqEez@i;@ΠV~OZ G}+(l($F U,K3H'IE =tcAqSqLhzq@5jSۓZؔ_9ꈿmO;;oރ x3]Λ 6)/$N2YAi^`;0mc_~ѓ}X:>7CSXBAC"TA[ >c쿥*!Egd 1+! k9I,)XQoT%s H\ߤp?k6%} #xYE|\Vc"h}lOmBcb][GbHCU&<:QIٟ ,ѧ EK7Ca6~ӻNؤ餏;PSBBmRwhdBgqpy4<;o4VI>BHBP $@GɵV%'Ἥ|D`}BgmzkRbz:V^'u,-$vl}1E].&JFďzkZl֨Q m?--k+@Jx0۟_\,?uf6]S] ˋ0,gˊmiDK?QWX:˭ q ŅۚO/2qkHXݒ.vG D0糋%YiW@(R݁Zoߡbu *C6"SG:o߶ެ>w0akIQ+E_vEXw_1+.[dM*EN' ;{O|!9YyJ]36Cӳ0Itgi oQV`Ql=Q7wȺ)yVݻcul&d!EQh(!hlKao(3ŏ nVJ4$Pa-C%z~HDD 3 ٙ@YBǠ3r} |4=<( +=`P590EmוTHA]+(z:8He݄ II Xr6L0S!C;lU1^K'c@T8JluxplBnl" jL*$zK"RN:,!/)b2`tG{J*-W  &%!Km 'ΐ ,M1e u| c{$ccDA־⃲w m@[(`!L@0Z/~Bc\l|jy7-DCZi"DvP8񀎞 )eAu+igrQRR+aW*=i K ]ðYlgkR#wiŒ肹 .b2r'y (p{8qf!?Wx Ph/D68%%EJb`=3"ms3)g n+J-j咖MT)+-m ZNUWm."|n|g@2!5lH?]Fi\{,cЏ=Vㅗ5}h]O];(L݈zN]tRqf3XƢUN;}?hGA1|=!"_SgeƜ99 ">H.K_TmN$cv7Vw7S};E|wpˠg3~x Vb:|] )DtodKZ'JA&ImP%AtxFi>y4̓&A*@T .?{׶FdSbŀ؝loyAKM$eY̿odoe%Uʌ<'2#s (ȜoTSdB? 
99^ZĜREQDaoH H-!TG}Qy'4 -6}G .J3K@<1#,J8<'7+|n-2%LrZ88*$2iB\){{ S3T7~N"C| 0篨]T¨^nė{ZzMSAGxSB7ݯiD^M\6]θ][Dag1fnU5 f'݊mj:yל ܥ#Y2L,bqOo.\`mF{Gfd6G^}SSZ)&KA7Dz9ыtޟs |/ryK|}s[PN:ytkZE~% Щ4}z8=%t(̝ ndbܟd{3LGIm/,#~6w}ht0IPqcȍ/~-N(~ȼe3ЛnN?N7izNs3'[ަ[f0&Lr TYx`㚫Igzti٥=lRr0gTs~wz<7nb%2u-7Q K< ,lDElDzӝTv)LxL$U"[yJ[5%8"ie1eZ+8֋Wތ2)MkS *x`h"+r 9)b} LO׍_ } ]- JRqxO $ ThVҝ/",ݕg݅BLFV8Ho R4$X&;n%=`g>J}74|>rM'-tՕ|no X_7:t-z0M<ޢbW귱[?߽?ozݻj0l[uROǟ$a8]ᑚM_&^>q4߄@h,1w΋ſ ?|՘]}_oP 1HDq"S+v$$ '!D 5oDw4Ɔ)_o=v~WP Yo+y&~c28qtksaHGi0m9q2 |w ]F81MsA&G8B>GH8C= A+W#r))\.}VIPDRBHZcLH,xit4<ƈٍC7+rϖq6owz ح;|T<ETynSa^ZڀQ'׍ 4&i=Gw:=5΃l9 UXe '@%7$h&DG#YŲіFdѤ=c5ͱt㷰8~mt4 y]p3'k(z\ڙӬbO 2E8Z㬥+̃oѳ7)>Ô0~_>0-z%HRDW1P| QIZ6u yULL .%4 eV1:ÁHT@oC38.48|;p9\Z6&Z:ظEw^5 (SISY 4gd0IF0o)LCe1Ke;Q\8VgMO&0QU t:v1sv8X6ƴ!8bW tm t]k3g ILa/LhaF$BGt!`-4Fz݈e㵵K G*<ⴹTԺ u%$:VRU ŕ`\&(&edH3K(FXW7-9;[^_]eϋ7YK;߿8RIb<0b#JD ɈaL^Ff45A:FH(K/b sտg-);|6x>QBNt~u.yK])} w3 g] 4SFR!N *S=SǪ|}|'T^?zHqe6vy"GcEy:hg;K:HűqY[GrT'p@x2dV:.yR𠢷X$/;x:>ڇнηJa'~Ay`M_Lz6׼g4f(-5X7T UjT햪R[vKn-UjTg[ndCܽrݫBݰRBCbU !T1,fhcXCܲ!T1*PeCbU !T1*P sWj"U 2*PBCbU !T1*P^U^Jtlb|s>X;\5سa|FW 6sj-b`BfְS(u#ͿuZ ixzr& Dh,@{Ђ_[1 B2*Z驎7}v}h?ϗAcwnJ,Lsvrt"~l}^ػAo)gw*A|~M{7\ ׿϶k t[~T-xf]Z'ۊ i ,sS`rgs|)SZeO]v*[\N}7SR&G [.'yB7nFd y 3##$γ$U ^T'1B֩ѓd c2! 7Y;JB)Kx%Bje|0Tq$TRRƹt[YLzśRDRBi$PO1&$L+Mx*T̜^R6>o3آ0xq;%ۡKY? VLLMeIx8GM!R6Jؗ@hTFb/mS7ر^|h#\VyY ("NJx4$g{8W!2Rqn +2/cpg8$%%$ڞ tMO=~U]]%sDorW|dMzc'%1>52} ϗm53A=Ҫǵ%=2&PhV9W^b(IJd*';rd |H N!2g@Z0LF掣~Cj('}6>W:Rčg_(bEtݫ ǫτW~>OUCPR ZH_I@W)) "*Dɐ ]Ƞ6ǃoljpGGMGNy܏6.B.deHfRUV,9T .!yٔ~S9AZ!##wtβL^8$Qds!ZgL@ˎ3rvZ>nZ鯺QS.")y;LcsUZq29X zR>~-?W;g +nGHƔJH-F!Xr@+H[ϭ0P3gI }hp4otJGT<: @F ]̥i(gz$^qSqLqvq@1j]ۓĕ ;.n}qSՍL:nC۶=ߎw1ׁK RhZ/n&LzLl9ct_3𹎅mXx( !Z1Z-Mҳţ;XE?}8;-A/ŰF8ܮTE}=Yln=D8idI61q:O6iJNQ⒋JY+ϐWZ:ɍq'tF#&\MUCQ:||6q<{0JFR::SI$t61[!M4٧ pTtr@%-S?8pA(wB%<'ϒwEΎu|7On,Wd|0⚉4$-V>c) ȐH8Z'2N#HB:]X4P۲ Ҥ(Aa9*۪hu\"tΒ3J@F"VKqwǚJz9)xv9NgbBU̢fgK2l>Je,}SRQ"J[mgga(TiAR)DL@\*_2*@,c\]({AV4ȌxHLZN2ZRFFr/*#2+ C)@®} w~^=p?:k􏕘 kbij-Š#9`G]3Nty +8LUDa5tiѧLj~Am^'O2u4NشTTܑ;PSLHqpNx)˃j~rCd  &hu\c\.iA>{8:9.|Y!bu6iQ9ԪdJڟϫu˗ØK!Σg\izwDOb@h3l2/V| %F٣74K~u6w7^_>/Yg71\%1gi8ʧÓJ\ٕ@1觋Y]-GcfRgRzLgv40h\(ZŒGBOsφm/gY>kf3uZb+!ebMxˣ/D״XPx⿚ GA v߽-}\7duD+0XJm"Ap ݧVivj.S.zk| 4C; Z G.>X^۠OtZ5 Aj&1?kC8DMרܯ(xjAl -qoqW2-HWn;Jp>ai"g&XNFw0A&\"QzCj~sɱYbPRVN$F] 9;E.Д㛘RN';{O|%9󕮽t\Za*('igL@P?~W*Bc,w 4*+UJ1]tPr~ۧnplMAJe\L%6F 6cVW[Pjp` tTH3_YJE)I^nKdFG:#gV'Qc|ҏv[fn^n"#ݎg5ϻ[[c )m_9qY.d"+dT~,O:\P~EgtL_dZ!] Ɠˇ̣_H/bX#W'\_~aԶF]RI䢦0IY#)Zj{wN|5zV/1PW5 оC2i: 6mrw_E^,w "lK sm|Ko_R7 m'h?T{w'ߙ]LFF o~T\4'vY&{=GÃqr|1Yz>S's`/2}}Q:cmiG~gC[WP>)!avOu")|-/5}F:3&~GcN֨ZHp_vC`]~5]diIA0$0QE+nx2zճQk<6Y BQƠK>A2'8S:'Nq>⥞$JP,b@#?*EQT vDm!F{L0/|֫b? Fԝ@Qמ0N ,";K]("AM7}j<@l2>%`j%EeB!%s*0{j^v1$s{%[iDUO,ƒ1sr +WJyXN]4ʙA y˵֚2ijzQs'arr򏣦:--Lğ#xyK[aN:RkqYS 9wH/aydvuNM ŋp*d7 L,w_^Er-V1X;vY~{(٧BSX~\U7|G&0'7,|Ha4&&';mMG.LԺ Gܙw Q8Ʌ.}Qʵv 귞n^2#Oܱ(SphnT9A 7A& ^3ғ!,ad} ٻ6$ X`N^6${^ղβRN}gHzPȖDZc$<Ϛꮮڭh!>hJKN͟v.vB3fKHئ */eڡyA$ez*W40vrWӍr_(}N6JP֯7+ԥnS>Tn rP%*w K9nJJ42)|0JuS^xu/¬$n3UeLX AJ I86 :y +b¢qֻPB*teJJ2(LgH:eY$߬8>X?TM? 
"'H xf-m Kz3G|H3o㝱$qk+/yg w'ҽb/y:7y"Ktu&NR}>{[3KHM /&ZzP Ry(P:JnI,@#Ð%@d/cȟc1vちA eWRk.>JXPCVdR: yY!K@#smXM9"!t{ w-nGP"|nꞶ%c3/{ǰ?N{@jScOD1+SwV`x+bҶ#k-d ACg:bwXF<"F# 0"NHKd6hJ\@hbL3#c.8)(.C>b ,%˜C TfY3𭐽C4TH v1} |B YHO 1K'`1,YIu)fA00 %巄^DQ2+EjsѤFs8>]Iv%vl;v$vlZQ!ߤ}&<ζ;vOs^.r$+kT1$ɬ)P%!cY`R}.8WFlTEFlN)4%=]tڦ A$嬼d!ZL Us72*հflWB{,23fbe?٭lϳA~4 ?9<9}͸N $UT1D́4.J K)3e<&!*խ;#j,ΞO6Ѩ`T1Y(@&G,ZW{8w#v%sWPvlڢ.j >`ox0\(e 3*6) Zsu0kc} NsE21CD 5 Y4#a!eX$8wac 0 "VӏMQUFD9  i|r5;Yg%WlN%X/˚1WvXק#|cJ%˲$A n%R;'-d*UeD&o:.NYvil=​jMrB_$4}пf{Z3(gj5<ꇯ|=Uj"WUAt6 ]Ee:f76[P!!*قpѿRjѿR}]fH4<пC^L*NHi)=gQ)A__Ɠ.l]\NY3p?/~8=!=:l :jtn<,^_Y޻+CQOX샕(SjuYsgs@/;l15>djt1O 0mI`.y( PgoPE{=괥ힷέuW1ic0.v=9FzV:+J_d)Ңk}d:"?{08@Ef%C;~0YvjFY/ϫg`Z؈jE\iuV=G&. \q!\iUҩ!\)ca{W8n+ް+VLI9•li?us_]>惟LZNQIu=W*|z ^6*ԂˡEJJd#ϋ;&6 lo>o,p!iq증d/Wyh6 u`,{=J(( K&"LS6Rh}yƳ {ͷi{= s }4m{ݲڎsn6- ǿ|nU:&v9__$7V=7z!"0oZ,ŠVztҷAd׆("!{Uz+0s՟2*Sf3ZQc.d62+,Jg3RuY"1 T@f*(OrEyѪFs*?U;_3,U=-)wgt[KI/}\.:(NC7Tv[ŃSi )H%bւG [HY⯒&.bEVg b 9$%]m<9/iϿ1,*-eiR֠V۪hu\yE,EΒ3J 2N kBv{FJ⃱L=&q^,jVF 1,Fg!Eu*rn|6%W|PTb1$1yr'Ў~˨,zR 7*!1<Ic !m2iI$PN2ZF#rFr 7} .oϺ㰠??Ҝ sބO+~"ԟ+s"%ac3N"fʞrQ1ef]tftx__2}x+d];rGmRlj_8 ) cX+ӭ)NʓlGu*܌g{_yYx2΂EbfdJ̅nY#;(F'E;{znzcЪ_Ūff*ffYЇ^Q0iJu.mv.:uleoVWU"} I s`ay t\Y1iRRU0`a8XEe[~;,4o6o:F^4UlY|vE]^N0YYK${3u ׇW6h6k4cfz>'@6WWQRU.|`*py^6H˵6;-IiT9W.l[Rix`,(3xn23ŃGFϻᏡå[6phmb߹[lfwN˻4/p-DCd`˝fHKFIa\:L&C!fH'jp6HF78 ;^sel,q+97# ia|ZeI uzTs݆e\ӏg"]ϸAk5"{; `kflu8HBQC"3 W£Kq omrѶ`hkW`>6} Tj6)AX|gc>.6 l,L#d(nIp4P0E=!Z>P%xU\L r`!3Y+냉KM-at* pHnFΆǡ5. Ei]|t;v9l QSn#qΦ6u5M+~/-m2F`\Y ɱDTs!N (N^#܋FH{ϣJ)NhKBD0d)Ot)PqGKp@ yIJHJ(*%%3bi1Q ƑCklB;y-MeHޝo& [^[;juk:ջ\џe\+1X%m)J Z\ K#{ӎG~zV4Ҁ^mbuK.ӠX4Ɨg)R"h {ӊIpݸnِLq>$XRäl o3b)f KuNĽԷ&;hCB_ <\yNEׄkEc*A8;;eHqx7GEQykxuXg36D(\ Fr/C~p.;M@$vΦמ{.|Dw1^}Pjb(?ɞ^7:Z67qK8gq^̓dӝ/^IuY%sh)\iJicRdyx+JNPτez4G 9]۝&B.XYFg7--@8ۣ,g51t,f2p,-;&\>Y#wZ6pC&">g-Wώ?U$`93b#pO w'`*ճ rqRu:RI)g;UȆ:Gm T.`TSNQqTC+Ȟ:pI_'1*ũ1 > H76xY0l暖n(:!'=3{R ͣ7fJJTc 1_x0!f׿&dGYצY$n2Eװ|_i^=<]<>:%:UջI,U3 ct_ʿtMng`RS6SrzY(OqXL %dIVsӓx >_dcL5-\ 45&vM߱3FkN ]/q?AIɵV\w^$e1iJ_30ELB=.-ϟ[_;79_z0`عo]IJ5GW P]fi A5&UTl횫5 5n[N.s" /~ᮻ5i_xBPjAmD`*+䨏(%8;h6P-,h@Pm0KQ heSحUB~ՖB8ZC n_ b&I6Mf(uZp WASn4"^鵢xԥƝ. ?xE\jgL4 c1>keiNu:^`v=naW[6S9GǏ RǏ“8Gw -nB)K<~3p*` j.=| yPXqf:̵s(:rm)Xa!O:5I;rRTAp 8z)wB1RQsE֬]Ϥ x$ )[X HL"^ˈiDkt:eD:[#g@֪{Gr8ôq06QЊ8ECFB fH uH"$IKm -, Q~|0a!rh˽)msayi$J7\QG}Ԏ鍆WerURi*'Ger`i7f)(k*! kY|UԪV3N-e#OhT(2K}T mCL0fV()hU3 %ZE@H1rK%bdF/ĜJm97kJk5θ.$e]MɌ Y^;MW/\_>UtON156;h$/ s9)uQcJIQʷ{$U4B!{C3JڤB:`^e) Ӂi]IVG[ܬmgL]ڭq]6iWk{a #qX눈G)kR~F*,z]0r!c2 CDٻ6%W1nFCn&/ޠղbI\o I_5%Jp\#\@bLPG$# HA;Wg?lH[vGW( {D{#nxs)a:ǀJJ3L棿%edLv8;~8x:iIWX8_Fk}c0IhM1eo[ y~j;cU|0IjH㌎$zgJ{oB0"rT J $c)pF$!hٳL0cg`ZȄo.-L%ʬbu5%b sbGޅb4vyHtO3ZMchvU-kyg{x&?罉IS9x8q]gd0IF0o)LCe1Ke;Pep 8qnYwny8x{eksuyLQ!sǸŤ-x9.:%3>k:ϒvכyK]*+2><?] 
0US{ja>PZK B IATTJ_efWRudhcʻ7v]n(LK0mS]Z׹@bǏO,˲zmN!:DDƫxLy&E yMwC zړp ?څНrn=y :9=wyfy@bfG-'p_?'3gFzw >S;ĕLcwV=k *IJM E#"%o<tč5ϒ8sn_ =:iS4?s?Ǚ$BB.^϶ 9pj|]z~|U3cVOW~ig/ jw&Gq|O7?U-rE r\6Vϣ<2Q5CoCg/=5~ "P`{%MZ8jbZK10!tr r_+,I"ͿՅ\V/ ,kcX4i GcBM܃ ڨ<QJOu&[,e\3_Q>jrAs)Cu>̐5V_à}EIs\UNn ~v˺o`u&O Lsݎq8çQL@M~-l㶥DD oN=1| bhvoHkwv169 jq>%2D-1!w$Os'ۭ>U[VaZ8ɹ ,Dr DHLD*4.|lk<_Bӱ&mlx=Fͼ!ƆcnsgL"t"Փ`}h$"N9EG<9v$֧:-@_5PjNLLk0=zR N7ѯ5icXfr0J"B8K{E锨G#,Jrk)LenjXaZv~N4w}k罧5bCYٚv3| )=2l0q8hJh8 u Aۘlsэ} X}x1QC !^;:٧j 㳆|9=4 1`2S59SwP+<ޭemq-q߈';pxc8,:0@o U93@υyW죦׏{Ӆ{y;vAs_GS9 n Er'Yq7"^1.Wuߌ8ZK-bwqG}n288~l77ppVהv:;UٝTp:n>-{dzOU8gnN89fTI%|zG)2^aB8=ˁr΍ic'oe:|L<էb2`:Yj \m#$E?F<;=N=PN!;pELT D  1+8QB4RlJ3.> 'dSV2QT"U)qv8`NlE=)/of1'xo9ڳ { .O>κD09$aL}d8fQ,BX?X,Q%U 4!KIm c j^\;Drm+ +^.^[/>xxujN;y5>]JKůۆ 56jIsjI0'2dFpz~v*R %rez28O ;7H5/"eg7 bw4-ozVh'clϿlfkޱGlny/1+8f-1uۤY o/.uR }jb\oGS2[2ЮnJ<>_ O׻;Ҳ UQf5­Y-yǘ*dJ9 hb1rCKA ތ\Jt5̲23]L1=,=춗:"z,f)5<#rW(0_Xp9twe,ừ,>wevzsL8Uj`8^ w9PQ~TƞQ>G79^PgE 7AWb;(q_!@ogGAdzoF"\ieo!Iejמ3W+Jƿ؇n%ɈkS{K!ڕiPF;D?.`x3|+/Y8$Q.jxHTҮ 7*͝U%nxaKcOy] e)Z8-,MD+KW)R?"iLB`)voEwl5 5u\o*lϡbH}Tߦ>s˨[)Zzp]1%ozަ-yF0"Dh8ior80)~T[I-Swh|8)7пϿ YDY命mjg#4goIÁ'ЇxS)F'Y[N@@GnmI%zǒ3*PO@xċG]/yCBɺ;Jy"e뼊}߈S~%fY3PRe,Z?֔ ڒ/_-quMх Nyii D\6RkjGuKNurN*QKP  1(#ɷ+DP!9¬eWd5P21C7D/(18 ,NH]rgW3Uug;lF}@0wW?Jǃg UX1okщq Ȉ 8 hBth*5TD[FT$@&oI_ 4h$qɍAeZ j1q*rCK?u;ں:܋, GlE%7J |hC)P&ˋ:W]\2 ur U$?rzm˓2gf?RW޾$Sz|/(PTs~B&lm q^vpF{WȃɞؼYCI$R2~Hc$vWw׽$Uhk M[l(~=Q5%Cu%E2__ZO)}cH謕@9w2ieYk__<~vl];`a nGtYG" S'iuIp]㞲LYᄧw8y~2FmYdgPЍ9su]S_dޗw3EoKik{YUAmB6?[XQpsqYD"+xae *d0imL=F]IklߖT+.*jPz4;KGzOaaG-y_T]Gik>G߽/ޛ~r6/EUxBeSrs ,&46-=y`ps-mP : Fnh?ҩSRiޫ> `Ғ\i=Y}SiPZUxBBn}N腂7E]K ׹x 4/(y3(UӄgpR{_̆;auN>xR(Z2NZonzz65g3Sl$L,_جsN ޝnO4;<-IPz7w/?QxbGrndl΃q+aӕ962p!ͥv @bBǘB ;drk ]&n2[(^zrSR^6@\YS1S)E\ hI%+,CQatD`!_cᢊޮzji;G>wެYî#߽S#/J3fO_7_-tG.[wvojΖ?{uwNCiqypt5S[n]f?Nls gOyW5i@9>ÍZݕD1HU)?Q-m$ Ȣ]KgwkR"0lTFl Nh!/49HUJvo%( A tY$fJ̅3 \8Dfq̵lΆP]]%}|us e׋5fWвb7; zS`U S<6r)& hR89cRpp\g])Q͍s1IOgrBnaDT9oRz~b ٫r\g9)&d]Y{,!Z,DFE&]/ $4EK U<ƔZkKho+u M2ٖ-:lrZL*2¶h!YGRP2(!*yd.m9Cpxζ|Vy/I\hoe{jw^K?ȞKKʲA0ɪFŠ҂1lEC۠Q05/ޝsfTJPOC"-;-̜[eni灒wm{4X6KmnftlWa6XLO󷯶K&pX)HJp6Y;T}VRz2_άV5ء(6b FYȣv68RRW6'LLH1;,< i)٧ &zbxrt>p ڽ1Q7N\v&̞L J3:Wズ /h_u.jTJJ!G/*'YUwY enel3ѳQ^YQsBȔ@$\:m.7 3:;]V !ix]dPRy\Kυ)i~YKZjDy55{FgNr{X%[J?Z-;gOffO۪"ܸliQߒ"T)xnNH?IU*W*0Wy].eÅ :mmumD[%$&v!3QF$*&霹VYFPqs`e~+o@fEIr\ #:*}``PqZmwj b|@>s/fN,Z|^ DMl14h=>6Jef.*1'}ZN-w~IUN}SeD$[\W׃sLnl@%=1|~e-miIpV>TXUW҇@*PPqǭ Ù4-_ ?438+.er,JK֬;_|&OgT%$}y m5xK U F%1(HQ`F2&d+ &.պ0 3>H B[1%.RY8%=<R'Cv u %T[9810<&60 i5L56NQ|?S/\[wU4&$ڂ2ƒ:)2`3bv"E:rh/ tu<;A$rZP5jt*9'Ng c']I }Em6tж^Π'w Tv Aa>*Oǹz7uR*W*|QZNVrջyNRHK7#/֌F*PJe @I6^V%,a.̦U [[DP3ӆwXnݻA! L\WiIv'Hg|Xn5>oe_ϱeHũQ*6VBiINMj^.kQaF?w%}s0=1hA^ $HF*p$0务cII[{gͩ Γ[bBt-]3Ik{Wr7xvҽ+|k9xEDűBdY\+et AP.ua'6c16&Q!TEJbPQ,5 ZD*cVei`SfR^.Z+&Pyn=d)2*~ GzwWVoH&6+E54i+X?Lfi5`͢6ggUbFo蔣5&+1ܕВ-rN$-q/:X %B(mePAt3Ŏ45cE+MPH\dj(\ L޵q+?=|? i7 aiFT-w}aY,c9].9$g7ZByZ<\7Zn8Ys5S$T`4{9(_A۰;%j๛h)\oicRfyt'JN* 缗z4'+9؝&\fyuf]ܴdW hgϟ,|,2pXv&\>GZȝ"Ԛݲz6DDշ|~{q󧲊T,K*,ʎ=ǵ]\1Tsd/azˤfer=5;})~m>>e)\WNj5-N8tٳhmbFI4N5)ҳ£IXrXBuOMs 'l8{-,㯔sOsS~7tSR0m 3F+w^e%jfJn, e)+H D5Yb\rzLQY`df ϫFhj3'7}YV7^akISPӔa`). x]dE[4_v(6(nsaѸqQV|u&)9?lo\_3@u{ʠNx**6QEYErՆr`Yŝy]9ц;Efk{GPqZPlrGc>h6P-,h`5`KFQwʡn_ѫ|}ѫ-p (N b&I6Mf(uZp WArqODSz(ā.{45)i' շ~Gv몽E9[506J cšCx$ 3Ԏu^`v̏AA yv24P/r8 ~;1.fp?r5Ob6@ ̩99<}e nPѥK93@gP P5<(A2^Ђ6R»$.`lxy6,?4Ƿnf $~_QȪʏ7ʘfYLAQ)ch {uVy**s`^y3j8H{)Wlmc9u!O}-ZAY T3JMnITukr9B{#(׊rs"%`'I!) 
1)k rj1&N()ya`"ZA۠FH݁i T,Kδ?-rn3'GBZ׆7΁GݷENXrM-6QFCRj8.;JI*qBw&{UӒwrvıbD ȓgXPO c""#HDH { cP4T)/4+&-B@ *#+TT!H% M[R,2_ nIZiYH:SȀ Q 1.83 Kc)!Gk nDMӏx̰Sh 1 ep\0 X\6RcIS՚&n퓾`D$( Dh0Hc`䍀R8D@-FNgJQqU$BkLLj9DÂSwnvLx2Tu횴e@[i }~_<[-zX;SL=N8nkS9)u!\ ɵ'4[%v9ƀsl'gكb4b9-{d@ %@*ڀMH 9q!B4+M  :(+ǀC^? +`,z̝vX-"ro0Kp* ^زׯ5rnzyxw}r9j^ vvf#̧ܕXlJZ]tZ|yI}ak;Le.;DZWUwdlAvdlAvAvrAvdlAvdl:2 c;N:2 c;2M 6)†K+d\S=}džKT2a}pIVl\Vhϖ5QiksL1əD.(SEKx{xh;s]%RѰ^vqOj? 7WMFyXkf7 P\]EIרX/(0wK X.my-o#-wC $NRr'Q+@,majF".̧pAoB%a6J_})W0氱0-2-J<*RC1Q+S HCu;Օx+Γhy]Յ\?{Ƒl OEwU 0.6kl1 ~ʺD(F>D=(TS5S:}QB.b_1L~f㕢xD<4R$SԪ%$ld;d: ; ,`IZ&s)d˝SReeh6ϙH!cȃ@3'r-'E^_T`F8 $kEC\B%!rcJL@3TFڥ(xeWMoi^GpXP{NgctttmOYi E$/kp9IN[ zo)H)9F] ,@"Uh`0i X%Px#ƔAc)Ьٽg70ԫ* 9 BIyQ ,3"CV貎!Y)NSYK\8 :jyМӄVgi0XJGF:!X% ݁T"F.h}4NJ:F)5(P'\ES;eHó Ox`4e *d4jϡx6W8wtJv걛԰zt-74>xq^Ϗ+ξur##`euv4zwڬ!o;wp{@Yd\2Ħ~h$dh?&e+0 kS0U&NjC2yq5qqxdy;)uZ\ 4dzeeon&eo> ?GY#a="xwJvXG!qY#A6lI*e eMe,wU _08Q@%MDQ!!TE$B1gY:x,hm)GU`࠲4DYQ (Jc)4)$ss\mNqDe`&\h>.!5ʒ[yÛnjq{7E7KDg nR6|wMY4́GVc-KK>8X XMʹ:^vJTPdPASxFٛCFsCWMq!CkА(\lc0K']"hQy O. ˸?Zퟛz`NY8H>iw6򋩒e@EfQsR!ZC0jZ@)}+A &Gږݱ. 5,#ėjxQEMGm}0HFA46FX| Ef5FƅL[mz\JϽٸ{9Qeb27_}_(rO[Ѝr7slF5tť/'(ga|q?FϿ ML|XSyZ|b^{ӕmk^ 멃.wƳO)M?g?~C4][.oev^:d=xx~Y6%%niyI*p=#Bφiz)VZuy_n\Zj}~iGgХc3-X m"Ц& D Ka Bͫ/]_{e2t㏾j\]:KΞT=F١RrMPeMҴiGvOuPS6{njl4ʼn|` le5.$icRڿ_嫛.x4B""&! 1jh9HR#eKQ{[LgmueѻY_:V|yC;V9ZhaHX˜49r`Xt)d])B$[Ey3I6m .&EX%K~7cDV"uyԲ9 9?0R ,<) (96'XI(IN 5`^r.Na:I=N)"RE<*zHXF)ܣ1*biV`BvLX.@J.%@M)zťyOLW%.2B2Gw0Yg5qNV ›/c{.8baɢ֚3EdX҇,geԼ+Mt U  Ic\5l.EY9HpCjɵr@62Vg;2*հ ½#Z%f!h~ Y&|{o~Gg،l$a9aN ElhMt/);49bfe[bH^HQ`SFy"!mflR2 :j4]AjcWuQ{1 3 #.u2xJDdI3XUZsu0k.x\L̐(/g&&H!HEYGȨN\e<&v>@n6ll#9Wjz09)uH]),6lVTr^%$eΕa2moyS Zu{?7f4|5a=(Ni/nEZTǘ?]/ݶaGK+NIK% 33M.\/nFlXvg˴P!K;Vd:r yk;ijeURjH:"|-|=䥄" AUcqtcBEa% ^[!z[z]iڵԙz8BL\l>wN'6랴r-/X\k! yֈbM˭n]eu^P{ojގ +· d gaFðWK`uX84]*vO_Ѷ쓿v( &Ԫ`΂ )~Nj.0!͛j0oŇeo)"/EF `?v/jN)qל/itòy ,̝bҙVgt" 2&Mp%jۛ*t7Ԫs{O%XZDWt+8i ]ZOWR1 %)B"ǻJp9k ]|Y*Լ+FѬEt)UɶUB+H*wutʼn*V^JW 歡+m ]%X7J)ҕ:R50{~\ٚUBOW ҕJ `.hk*U+@+DC S+E V'vքZNW &2"]iFP+Lh{`[jOh? 
PR$:V芣횞s@}1nǥ=-;DQfŮ8ڞi;ګԕP{VG )_sbnq.֔We?"?0lVUepWՅs4cl2g/),v;V O`_&n|0 ѭc]͖mUV0#(6cFLKZj̻=lˮc;kHE>Ӓ)E 0"=KO8ղ-0e/Jy§  -+ bWZ͛NWWxttI*V5tpm4>ΘP2 Jʻ\FBW RN Nv,5i ]\[]qAM+@OGWCW("UKY[*tPVAGWCWRhA]`3G.ŭ '6B+%m +I_cں*-t tJ(J[y S1ydDoM錕mg`>"r PnEX۫ =FkGH_DwC;4hzUh1V0,dTn5MڭVna;V4wpe%5_B%u DqAh*5t Jhj:](uGW'HWT)$\'UBKq*dwut4fB0?G &G:NjGNVќ3mh4;ֆ8M E?-xqQ.噙 } f|G@TV!8ߚP1Pyp>"C#ģaVi 5g2f2ɴB02iAXklfrg|,qfIW~pm:̘$L_ί,oݧ H5~4W>5ُ)c, mq9Y6sFQ\.z.4O;]4t>P<߳ vJr;svt= KZM.|r`|5]O/hʟƯՓϤQd: N?7<L.j.4?+qnYk< R}VMXBOEPGAh"w2NnV(\{$Vk+7!jXH>4yb΂wDðqN\;qu]ϧ G«kRgu;*>%ȁhs ۧ3Ђo aꮻ/Ge9qqEjC )_BX*+X_'mdzrǵl-!^hAL 5u L| r4<_νHq 6^ 2~dt9s~ZBx%|iƾjn`҃xWd}HzР *sf[yI^ x*1g1ghW ak`3k$mϤᄖ Jƻ@  jjߋkk ᵡ?Y *8cR˚}.{aNFEw~y+D~:XOҒhCE^/E 5,JUZ[juMn,S+li-\M[t87V{L,c05%a4UTsc%bDdDHGD6%NsMV!:˘Մ= (WKM)PqZFe)A@0YqyVlJ6mgWH^B2i4P&|\^V6+±f#!\H#40Fi9%E!NHAU ;-Q1䄥 W)m@ $UH9HR;mTR];HYZ>u&uKaLwZEVq B U `6H0pGP:!: 4Gna^^)f i*Nka[”T& N7IDX@gpW`H,Y؃ywеY f $|MVCp˝^>-nס(ɛmFlIQ<\{IQ3]`H9*R[+鲆\/2͘2"=MWBws 3f/i!Fܠ#VXcWRvp'8EQދPZ YmX p '#QHYu2LwZ*"﵌FMԭj+5gN?{J-w`Q7d\b6j4]ښoa9]wYg_gb\ d]g=M(U3/&Bos%f{F[Xy vlţd\L*X-: )MsYemHk1\v?kNqWKI4$r9'`޿qܕ& 7#v6A3jeZ1oi#P(鑷wڽvov^RԌuDxШ"lCQ'mXK;R߇|p%D/]]-qMH9V[=%QL 5GwMw]*11B|$0JfcW`?⨴^t?NVfm+ClFik#U`aly B9d lN9d\R1_B%] ޠxx@C!sD2na207|p|-}]6Ĕ\oWg,V6nY6%ui r2yCEvS򢉌 IURX)bޕ)W4R R5*o1*7=2ȷN"^A[$kmJ{]!)tTL3eD 0dّ# E =ɤV"~ ~fK5 H?ϰsEdI!av鬾M\sAz_?&άwdގv˨/?.U% ԷɾQn+7mi}&Mi!4; UGu1wꠀ+$*(*g4jcby~~5N{/DtҢt%Tkl.Ae1&ř6F=Y9n$r@-F][Rk.X:l=MMV(E1{d5tߵu WoZIw~x5"nsGLjrs>{5OBU}>CXgTe6\=*\lhw+PZVYÔؙF_;p']&T:)Ai_洉FӁWU0_E}]k(Mzf>9xewRC I2)h jOM|8/}jZT'jّœO?FmoQ(Uf.+1U-rOk#gOHM}b :T 6U_d}տԴϽ `o嵞]*9JTe0Y+& F6)rP<@AǍ?/w5ʖ~&gAO_Xq=Dnt;>q?VE7=}'f<sFdE0:xm_ΛoJhvU.2*?_IXw?tzn;<J암Jۻû˯=8`]) 5{?Y2®\rtiI~[?0NƝ^,׼'ɻ׿܃wBѯ'}\]]_Crv@ǁ:ni#\_ T~uA 4456M $s:9X.:}.Q~Z`8, bt¸&2ǰE$1&I=O:ջˤ\K/V:W]bwnio$><1L{PthjS/j醻Kgpg{4/7kf6BAW`Lڌx#0ɴw_ՆfC<w-y0 MlZԶ.$g-x6i Ҟ 13)c S9QEƇ(7QҮv&}{nIQ3Jֵ hBxyP 60gk-SEL]~GR'f/.!6M͠ R7\ F];J',-k |Zp\;W֚7+>|9F.ٛ W=N }ӟԦpcBR}l7K5qJt2TQdWE H}}dh.Q\Pnv8VJ2@Xc2Z40g3 (J\ĀQ%h8km8}D=c^+PyNZVQIhgcܯG7c۫kKny/L@PL;yUZx]LQ4}rg>Z^>P6" ^Fmc9yJ2Ŭ53LB>9g.J2'CCKQ&7D-jHΤ zJ$C2>G*+Ly*u!y8xi x ;e0 4"k]vQet8YMWlýQK\&$)y;LcDrD*{8E('lhIIRʨ7ӟmH\qk4$ dLiNyE%HRs+r՘^k'vҜxĀ“x \j6nN46Z82r#ɍm[Ʊ%ثpb=y=ڛ(~ x1yi!A MōdI/fAj2LiDRJn#B]^`UbIhGB|ȶ&Yx!xi!!ЭS ,ymB,%cU0^ u4!Ro: wk"鑪~d(ҵ("QNf mg 6I:7y6 \s\b;7(n;7YtnENWl(^Ȑ⒋JYg+-8mnorz:i˭y&a"6)4@B0٧!buF Zy9~) .&ޒ<'ϒjڈn Ϲޜ-9dmY1\y_k&{iIo!rHZ9$"1։\:mDI (ɗp.TӚ+פ~΢4 5(2*(JwsKԅ2MpJ#@D“@OՙvLYqIȘރTҞQ&l[?CO%6!$hSL\*?_2*PB'JH゗&mNs=r%VȌI$(%XbFdFDr ٧# 橱f}yڗ1s| \8?^W3L'RSw~ xK:_婳*r S>*އE鶛ΆٝɓӍq,ӷ'z!; >TOAFqT GpG@"3ː`NK8hLl{ޛvaP[|yf1BMW ]8sy'?q~:oI'tgR?Wa^,VV_j^tth=T<>J!pʇ\qID^,To}Yӝ4TNPxu9>^xp%O~uO&++be$h. #> blLjLJ>ii];2c8먤Bꪐ>. }9";`~Mw j'~ Onw;<#tǻ'o^Ûɿ|ɫ7'\߾>9I H0AGp|S+4lj&S&zk5B>f!k sZ;?]~zOuGP*\͢Cr6dwUu?u|r&u`:čr*TGvj@cR9[[(:0ă$ӠnI|r1aA3 JxctQdQYhv8X޾%-(w>JO]ko[G+6w0d /3~TۂPRg}ODɼD$2"lX2y[]uTꪜ2I3lEe_UT=wľb`YsZtyUtOӝ*5_V3 %ГwRwYyS2UJnPECdK8PɹgJ [cߢW~3Rv:*b2"rbJWmk'YkZ*I,j(Q)'! .\S.$ja^ʘ$Fo"II_7q&}էÃyKS G0 ֚0UضA[nF=JI~o_RlDWK1ʈ!p*P !heΗ Wg<gx#6>[PooU66ט۞TK m]$.Kj5H.B.`:=kնtBp9EjM4!btl|0! :-)D[~7V]՛AgTZ};rQ~a0><v T.ֺ!z?h-̐!Ym7;峿ῧoVn>xh_Ѻ=|Vw{[lZhGj7As%Dօ`{>7]dr"8Yhq$ڷ|Wَ $F8wsD p-n2Lp: nOͼ^/2sI5F'zN5${L̛obx8aޝt~ G2@{` !g1PMH :s5_(aX:גXRAJ2l%sJ^Jۥ>hmQtߝIò5Uzᓧ\&J){LRjXS *l<ߙuv [DC$2Ꮫ!(uiBXqUdDDVRIP ኃ%7\ dQLPCfSo̔Y'(0ɔwy/qtf Z u2'kJ6/ݵA{|6<9 h|o3w٬5hoQyTKy.O޿e|FZUUh.HU:R='+(qMZl)&DKW+2Zj Ԕ7NHz8+5$Y"_% $cHFnܣM7 iƚX(3=>*^Ivvޘȓ'wV?|gVO!\[)de 9Eu΢`[UVJcʫ[s bmRJ6TeP\El'|RUd{?%݈NR-M;Em^`{;9Ym#f Ykx~w@yRŚ. 
67KPkRpx]ytk kZ=?9^| ]{ %}] V`% µlFi@FlM4'B1ʥu~rqNGU-fo,;]tډG@L4RjnIFG ym}ONJT\v?R ҟj5 R=%@PmQ~/'IE0T|mN9ʈa~ka_U+[=r{ȑT%~ywƊJ)C,BAhC8ba>*Яl@45CKV?h|po07L+z@1ZpOd` z`b͛)]9G8A_bS,9v181B|]^@a@|̟8~S~YTh_&ӡjՍ* 㝷?ofo?fh ݴEwkkYKämN73qÍ~v^;33Q{5>V1]D֒YI0&=S-l?׹lw j;ݛꨫmq>BEl6mXW`~ա k9zsk~cDhc`^dp߳Ӹ쮨ݯ?4_҈Ӭ}B[v+}vcþژ]G鵽ź+dl'6սXZ;l& 6y>CZ܄/یbZvlp|r|B^TuL'l܍F)+en:YF}#޾'8\{ qkaZ-dZnT @׷Jz*[5=&mo\ag_V,nSSŽV)6i5=ҹH ,m |<z>]hp#i&k7^6_$Ι{l9 UJQ7[Ci4W}m{N4NBB}#˷={sڞ7~:) _W&/HMzFc"iDFNMثY=έY59u/XVt~ U@3[^>OG3GqϘMx=v>!D2]Yl&p#V8d"ONR%u qHUP EB@ei&gU(XQq0G"*2.N3L[T8dˑcɖ",C.y۔zݻ 0j9o[^S=O=xMאo8ms|ts:w5:Q̨$,<8^=<$nx n~(Tǿc^0wFZNہk*@CE!/n  aqrf[:v> ֡ :ބ׎eU yt ;NhB̾XdC砏1dȊ@k9>XKvJ JtONrkTk|0:֔ZtTA2)UwL?nվUCd3Б%hȉcNO=jLN5!2"q6L&P k(ԡl4Q܄4GLulɂLJ'cƏgN!Q[aMyAûkOg{Fg{?28pxØ]nĉ 6 `iph`'UJ(r0ek`$?ީOmgpNi2@b%rNk` 0GRnr,`%5|˿Ovu5jIs&k$dS@N:@&M9BoMrBPcZA:Z~R@No!-iX쩆-'YS~ܣ6Tvf< {63 g639hx.DdH޺jI~~rܿ )`$&%39@A;.i}6EG!ۚ٪ |e[KER/x+6!؛YNX goO?O{檥dΙ>=Cc٫?oEJDzwO B`[Nn*)f hznތ~F2M}Vqlگc5eI bl6~/-jxױ0c⫊⫊⫊:ślL%YB.:xɩո.YB*)Jvj4gwч*'S7^`\ h(K[`t#j ⿖6GO(FM(Wh]U=&ɢpk +#ᦒ^]\¼w ^V,_> jc<+=B؉Ƃ,l*W%yoR~WX$`p߯??~okVn=?wѤO/X}peQTp*K[0×f*$J#(SJ6%_'o#!Z&TչeZ8>6e c{7iXo?x o.*+ OIu{4KB=1j ~=T_ٽŒK rW]9.h/raG Uj"꼞 tT5m*y\=]5rY[&n ]@k`ѯK:R:#Q>PD MvVY$)Jsda%EV)Έ/4( APA'-%\' V=j6$jaCA\k,Q L J)KpaN:^+|}XX;hrD #S'cAQ2/ 9k+9Je`Y9|@0~pEEIZxku;(JRؒ',"0A@B2mVBF1E.`u4H:E( 0#1ؿjXPO+E\βHsoN<ᩏe NsnkU - ZX.*Gb`$elYQX\9C d"'0)hI4CFfHAcN[V^zuX#B]ucHҁ쟄 vnõ"/3 ï `* ?:x cOq&\ &S$ObqW/Շړb>`Фs'e±DK t1 Q0\"*{R ZZ8?3hjEJXcO<_+pZ%=fQ$;@b׏_ $z(_\?un~1lb"ICw/ICiAρAgS/T_L=[/ICib*w/R_ŵ0S(FkDcZ%+RMJ;g%Ԕ9)8a _ h_Qlb;< L8< &/Rjaj]eA0 MDG2[}"<{뒖O~hMAH %C?~뇋bU ݅\ߣ}ɁOq{*%B<2\_VN5{-vZWZ#Z `^3!ĩOB%$ԟ>נ&Th={'JH{zaJ<^{T۽Yn׵\쪺v|y4.'Us?6Uwb]:ʱ9(bvwF٦)cϋۅ~x㫢<BXnŲ7Nũn_-&W? Vn6S>EFb-~q$E4D(kDm3u Š脶quK.8mɺZr"$S:\kZ7-D[(MD'c[,D;n-ߓ۱n~MBA mnTά[xICCB\DdJZ7!Y[(MD'c[x pZnۺ'5n1$E4H(}o׺deA m.n[qeg-["; uz3ftcJ;F9TrWf彙_LKx.7m>xµM@ߤX]*@ ?1dL2H 2fN1k9+^eHK^XWAT҂M4`#.\(F̈́eu p @a+;˴q6}i%j9]|~:^fkx rq~؜i7ќ\7K# *9޾p@??^n0^tG;1֠c&K9"[J m}I/BƥkwŦW1mIB [S^CH 1VB {Rs0~ K,;@Fɹ҃SըiN.Kfa)+Kmy5c’sX`e)(x)W95 C+EQS0ye+R8DORY)qim0BabU!(*,F\6%J Va!M!uQT*5JB!X03'j@DHKІvYbRXi XB .V- !-p;+`V?{䶍l~j*e6]q|8Q^~Q4C8Suэ~$ƺkGJPRW C C+?Yꬸ4J,'*)# , r$فV 뗐$ddՎH)֋ ,"!o($ٯ#/X@B, ~,BݫWh9[-8Coyxڎ Z̥K7 ӍWx9pS,ZigmsGzdUT4Ԯ8 fLn^X7f}=z̍wj(dG82?y~LiP.f46jhtFh]9%37}` zk=,lZ8vauR'Q%"5nogӵBS΃Px6l*W/A-f4.5۽ɧzN>_Abe].[}-Aͻ_+/֗x8YI+oCK#I%ǀg|>#WUփ. =9Yi?nf4;dAYP 1.Lݪy5.l1H ^FmZeq⬂ﳾ9:)*%Mp+Pv\n-W~,pp|c;8LhWSOuWD5PbrjV7]x'M96Y%#>rOŇ{|>1?+CC"*8*qmyCVrX$fuZaMB/,zQMVBxljFE3a2Fїjq?B?1><&ՈkfEeKxf$xo?d eƔ|]=y.{<2)vgra/ϢL57(gߑ\i8M}ǕsKGuN}{ļߟKxhor+ <xE9ASt\!]e[enSOkP(8E:n_M+cY\WFt\F=4؋ SftJ/4 CK1jQMjfm_*N|"0{ajwƷxb=K\4Y*7B<7$XS(** %6&Ʊt%xi!3w^]&i =g} Qb @@MݛWrVRp.M-'Oej#0!1`:넡;ⴄ&1" DSٯ#VkB;q15e)#+t\4#8b-}9b',OL%Vi I̴D`cI! 
NG @Q4!%h ovd{Zk/s@΁x8->3D%UH&/WBWu05뫣d2 /9wIz%|upA~l-_#Obb1f wKxL׳j2k L2+֫0NM}ABW_GD0N$~ T=޾Dx'SV9$BcO#&%;i +Re\x ; & H"EeQ[BF\C&6Ƹ됫`ԼQYٔr26mb'=MߙWL fU0ff1 !F;izBIXoiyk C{4-i@D`=[ )oۼ%#_"?s{ +u?[^gB:q>WċXMXVUV4WFYd2 %L6lU\*k [6Q{Ӛ \97#86x8 9G 1*3+]0h$vCb(C|gA&"_3eBTa~7=,jلKiw|2 ;2gOr.M[j*l!MSnGT=W]|;&YZqiQv“ћz0ߣ'_لU2He z$6]LgȖ W6{7@"DiƔ?(]PߣҮPK) ]I;͐|,^;gng22 Qofdg(ac jT =KZ Q7Z7+ط_en5\a'>Qv PP='!әb`&+ҿX@Z+.PB[lpzXtY|{ob྇TMRᓭ9}kjh<٫[{IΛ8'7Z/6zzk  OlL.l F!e7nLJQ[n0#Co?ݘd#(iGU+/)ErnIb:&NXkHX1xQC [KU)h2kYTcK8ƢBu~ ؖY{w3s U꬝gvV=rcg/'9m֙rB5ioJA=nI'BB<ݻ#%m0Jd-樶sUIsLd:8m'Lbĉp5TQ!8xʹvM+&bEjO\ НN 0x4Qᄟ)4TXs)zb~zIUȫ WA"Y!FЀ)%/cp@%83DA$2-U!MԨwHӇb9&CJ dS%҄SL  q SX:pDBR  [ǶxC&BEp'&t6ZsLZ#L+9J+4tZ4sR3HYjizdc^'a:OYؘ 3 $B`ޢ %1Җ*, ~2 ތ ~@jAGcDذl޺V?w0-B!9o=lCmR秅`w;!iDz[iG!".(UD"̇FLWKʫTꭻ2*Ordgw@}bɻnSujFIO5Y?<}?O=&GoKӱxݷkCsVn_ݻ'&au8 EĴN};odmz5`:FJSp:45:z0w@E8i9*NԊi-2p(v3EK ƒv=:fʎzHsiƣk08 l`0mE;82wxBH8L7x6-ބ7&D%cLD8LNw 7rVCMW=)29廡!HAB=Sb! l093|Xz, h>,T<%pNT_6z3"fq*"}`|,,%mv .Ai.{|9i3 K=_=,(rsoE4VoNwƽ^&v\}-x8Hhn%&y0i:}lB;*& ֠=?G5goYMb+b6KKLN Q*?y:gk& ;NV qSyU(<.F(i-A!1D-m۪~=|b z31(c"),?[4db9HucUJ8C8ONYSTVX{2FјgN*ɑЏ"M4 ҼЯ$-5=rJZ~KHNM'b) !CF!A G`iĐ1R ,I,KcV) iց! iQ y'KRȉp@V=Pω*IF$Vy! xvb( A@IF/)3 !HCpF8,=Dr.OI WW2~"᲋J2Jۮ\xCȣ8c|YE~:?fͽz&^Kcvib^FYQyJW)X>C95!OU@Dj6Oaoo_iHV5S{tZS[SbGu0*pYJ]~|ʏR|-nI`U)OVիR{~2ZJQ (,Z+{sq_p+e/݁@89 ]Zu2mlQbXv!`)aܫ<Ֆ[UB ԯ,vw,P(9Q/0]Jgwh@4G_۰vHn2RcA2M|%tB$qwmmXi J:L^­-M(Q(g=&MD/:&[h88x~ Gs7;T+$Y0Je{Dm# ; zGWMyK҉#p9ݙD>I|yǗG=I!N6AeZD d`՚ٞwl|UKy]qZHM Ox_NYIY.El`ge.c; H;.\~07.ӸLJ>S$<:ͳ K7L9X>˂ W{D8}hRg帧ZKR%?%z\>U&t&ᦉ>Cŀ&`CZMMaqWjA@-B锰y*.L'>^Q1 {wz3좾o4ʝ D^/^Q=HcZG /vd$X4E6{EmѠH<cne=ٟ j SQ뀊ARpߗ)3g ?}5*/?l*{vB:ԍY$[8(˳)oFM%}!*jhv8֪cv.`&Qg"/+ i`x^8p,בUPݒLweU ue٬Һ|1'#.HgneU Rai \5%njoDvX8E-$@LXĤ+>.K7yPJ+U+VXs:XJZ;aT.YpX|SnAA"eEq?bɩk4(ħGO;V :n'as9 q4i8NEnv7XNr=whRR|LW\`C Q'͑HL6Gc.w:cm^鋮?J bcϳX&oy6lVV_~d^kb]-\ykp0MPG{ D'DZŋ Xh(BH複CD!paNg1N!YJS1B96_}RP)ﴱ FK?5SAB?XU5v/+Y0A;6a5$g-jo{;jRo"ƱR]7uxBuyCA|k5uqkq_+tM¿3o>>8g}<3'w8WOTLZa3}PljkrB[c}tY~Y$:M[OȾ,GAdRm|2D"j|U{7FW:`\&׎cr8&׎cr\[  Ԓp0)xn(8 -$Z3K ڀAc5 xԿ:j0 ~r!>RplЄb1)}v!cN^nyt-$)A;$Ul\pH(cHu!iД=)65 !'gO]lS˸bq+1W㊪) uRB)<iA1ͽRT* 2RNr%yGckd߳M_~ + 镏$qdMy$q^BuDK/pH W9HfLHKmw^[t4gF+P ՠ=/%Ƞ?=h8*Bx[\ xz,|tVnpRHq=ynS׳P|(>Rrẑ'7CgBL.7 Pt0(a%!>Cr+Ԙ ^wŇAq盌~ Fvz N!Y$kJAhU$h zwzkՁ`"vY\-?o 53a u:MI&dw҂Q9Vr¤6Qo߆|x}u!_+Ui^ quCp_⥞??EULlgv?4 +x9}1̅Mjf`k^5j6;p1;A|2-t%2Jz䭅odLlyX03샚&\P&+6`Hl3qozjW1FxeӼi5 @~_Z7˦`] 탷^aiASq/XZb0&ύʃfnG+b7/`Z\Y;YS@v7Ą K7 g񝜭YS(#]a6SN#!^{ó:-z*6\HQb7lpP`a\IY߹"F?B1 ؎@-Hs¿_O}i-nw%zk}$  s H2Kf&*͉2%B; |,<4sTh~ o3.ĉS !h",BTÔPgVzA'CČA y@AH+-7_q87!Ax1$H@E f#ˁvti)̨ mCGsfW*.ʠY3Es%clryz!Å)/yBHln]ł)WH!,$QiMΧVr2d0a\ ՚+(D<1%yPHYg%RKe#=N Ej4}2v5nQ,`sit!AL &RSE( (Jլ`j77R>? .hyH[ nsGD213̘yfBRKL D a9FS"V_P !ε1XAcebdįg`tb#"D0+'ӏ-.jrɰЦ_skl+JWH>yʲQK'e*ғUS7+Z[=Sa;D_O3q0=pfW;{xuW$E w/P:.s,{_*𥥎i$uI+!4&`a$^! Ot7|+$!Mш+=x/ewPk7 ]|N;?PETYHo'8+VF((Aݛ<ĚfIH\] pz@_񒞜Q>;RG F93v%ٻ޸#WZ]m@/kv NG,%s(9JCxBNp#}kwꇣk& }:B } h Ne7q1Ԣ> \Vdl5WcyXne<:ﮋv=VέeNe./luy߭zCI5b*60/y6h >7(ξճ~!ݐz.E{j^"-a?/k|jAetq%Ln[1_6T 2qrDۓb*鄿> ֍MDMiW{guV] .sfNF}sb6rfcVPou!AGTPc͗_ ݚot=MGhVz#? 
t`/hǛYpCL1(FPVI}sF#ܘ'v i١]cThM|AxtFʙQJϊӚʹlgu})6M5o4`d'Mr}OUoNr1"w@'usWHn7CM0THn&}:0+rRq 9-ǠLJs`YnU7HZ&N;x UvzCܗwvjfj~NX3eLyyQpjq7a'k>N'7VJe2P^a}ب,>IBT|NJkCo[֌X8V+؄Y*aUs@4e8b$9ۘӦ\c#.9TI20oc,0DbHBVbZ>loSory~p2~AtQtWEy"lg:JY|g)=Ҋ߈yy^'ӏ럟՞Ef%~ ˏ~Av->!'`όspNf;K2{~gK˭`oK##`٥[ެ?KZK۫ёʢ9?x_?_]JeWV+kIrv%1[g5ݷ&Jm FŒem3xI(2%攵| 7Tf KDI qGiJe@b3bvh SV|uX++kȖiQ(y" 253je 8ѥǫƥ4AO wF?5Tm7CI_7 KH+`p>90 sx1]ˏ.)Gd1,gJ;:,`fˮ+FQ.Yra_|1;lW_?ZL9Yfs%)J$s2Y@Ee$ws7q@aN C|[T /6ɹUo|2+J|~C)b KlR܁`|k* N}m~Dÿf^mI5 \Agӥ@R¨>l0bA˷hRc?|MxOp$y3q6ϴ ,<3cVs>l3`Ԍa*~uFQa.oؘ$VJ i]И9 $7!ହ ?{͂ޗwG6|O!vNSbH]Q`ajW]!*8^BfL!|T-Jҝs߀6b$S0v?w;|*Ɗq/blfuO-?>|ܚc6|Bʪ=N8&dC >|[8-UV@v5]Oe[hoLPOO$}6#hG-VF¾lA+Ha஢Sxt0 Ttj?w7]ye!=LNʍ?#~I#i)=܎1J8aΙL%K?b>y} oDžld>Ϻ'zӭHnx5}{5G H݄V A0Lxp;SbX:f4sL{pcTǞ֭E^zQtES6gũ< e!.rw*DNdyTQHH`Yq]?iV\׏?])j0Nn zۋE-W]l"M!I1*֘Wn\k–?ܛSn :j͟==1-/*sFGTOaJ25P{g#zSɦ`E BVZrd<1JrΞB^m^;lx,Tj騘!Oޑ$`` 䬳$RLŝЎ@NFAAeOs:sϬ {cvRy!mqtgֹ#u4=!4[&TB^m ;ɉ7;I5'WY1{5K:rK_OAV,_tE{1&hJTe"T4*kr%)n|؊^}jV3t[zWVV9e,{n]-7VxQ4j,ܽԑuZ[t[%*qiF-E18<" vrn[g2(ŜVV` >, s}JOR (pBYA2Ww!en&vŒ9WoI)b>Y4S'˜;b& blAH5>2d"/ɺrseC8p )3::Cw( <+nSV b8sƢi*6XvŘ䵱v ef 7 X`Vnݳ%۾ +k V ٗJ"RUE$v}|3[%YA^L,bǜDFE>G!S0x@^ϲ5Dq=VS˘}uػc, !}Y.t![qc>b|nbHK8:̓^9ۑxg^<3:qKp]bgK^IRHq|QZ=vɉ;Ń:tbQߋ|cb& Ny\Nr5cIEOz|Fc{=Q9L=~kҳ#q6]ށL_J9g=~f;1HuVBոqYpQE9 0\4CqJ3, sk/m2 Yz\v *Ҋ6guPY]?dGarFmmGR/%C1TܛeC >j"mjafմ C:cGI^0;zlS )mOКT uWy Zl໘ V^×vZkŎ9 k(}//\avz6wR~J Jҟ -( O+ )T.ƈ-l qhL]+Up'A`ݗZc/M͂wn^=! W7]'{93K)N@!y M'w? t4o:bw\*k3H śYL(omg3:6HDD@4DH$e \MR"Z&tNG%5mS//׏_UGl`jBE kWI+k£aF VSH#QJ.\JV7m(6#!M5Mw^p0eF/é *%' *IVD'.ƚ V"iQu%emy3EPP)9i(vO^屟x 0qG9-w5`$dGA0򊔴?7"V31hT?B^EwrnU@klc> UYZKx8[jpz;2#p0\{o]r`y r Ao-H^ ҹݨA-q_mK&Ѭ`d  85 wJ hT$Fh TH6p|]Jz3X:6(**$p|dL$E!') (*%ϗ$,|YS*'5S$IQgYNQ'3 [PSЈm CUVE RJ")V;n ޡ$ #Q= RQȄ&j :!g v)mMG$׏n.\(w?։%t&WٗNB+BYet4*L";qY ^t:L63"DP2T. 0b j78a6R*%@)gC: 4v\ atƎSd]ĭ_䶎w:zuTZqPiYCmEfЬL΃TTўzBQ@+MU.z4࣠:mG ~:gj(j\"p}ySt P[,~Ez>߭G^ǨS4s<{y&6?TF17<6?u6A8n3/uWcٿ뜞;!4]ήog_g(ߏ/)S#+~Ox,_>1p*c>ٽv\㪫Tu!Q ܋|v^Rv__(Ǡ>gptlC@U4Reeq }m $8'j;y~q%ś Db̋! }q@>Y9x̏gg

-in},[(n&1DR(+DЈ#3m "KXλ@5oLS/zsQ[/(#: Jo¡^hj cyй"&%(W)>)8g <_D)Mw1~^cHJyZ-^^}@idoƯ8bVl@k9~/CjΛOo߿#܌?L.: hAK3 ZFڀHh,N"9OZ 5VG߀(sgà ΧYA!&{{1 O>ޝh΀{'*aGn6![]s;-M倶PR<՛uLZA |'EZ(3xem %!DJX$yr U˟hs aEu:\)w]pM̀١)u<=A[3b.aUá\zӊ;l}v>ͽ^Oi<z g:PJ nfs_}h7K4{+j)fwꏻ 9bW ,%p,@)בSB TMtKD4\65rE~.H}cjARǰJu`#29H>A*oH<1HGVZ|mo'DEd#q~ U- jcۿw3V&3gxKءjeJ+xE!03yF>*͓RJdM>9 k4yhKRM+\4_5L3ѕ7?d8S@zq-IN=׭V#ÝgU`ȗ΄5dp36g\Z*)3tj#.w=ytJԂE0uiDK{)Kᬯ%gN,0|[{%rP]?XhmvסiX8H% ZO 7MXt8sFhO#EQuӲ1Ni@p&nK656HIFJM20$峚V(ZQS3W fpA =;Eh Q?º0TC{%\'}Lm:pm/k~ >{B$$ôbȆ^==w/c .][_ضKGk֡6\f#J |$x^7ҥ~O(E|-FsL*C#{v:YJOix~^ćzRy_9wb SA΀Ws U e0 b6Aze3N,ھJS)37di8㯢ҤK,6%o ibtUS7~kd,ᢑUg~l6XJl*+:T6lF*dJY+˯lr+ H}#օ[3ɁV6k[UftLt*mc_jYQ3U.SލS'Y\|֎5nQ)C\ϯo]>{߷h5 0Ac{5fZ+mO bv,ߐTt- ޺t,O+<X0ܬQa jP@i߁e.VA#^@==Ȭ :$yǃ[tj⮑U[+jR~m n]^Y,I72G&j|Ҕh6rӛG2WupM19Y8XlznDx1}t,Yne}mP2CHIm%񍔦Ϥ~F7!< d䃥eK_T.[| B+DF1RR:ryNQTIm<|IMZI"uvZ"n[ipބ{zV=nٹ<^X]H[/UM3RSޜk.{ݧ :i1׊п/ 6.2.h˜B o}tea8=[ng3]Rpz_}C93d/[H*פ{ ol {׈X,Y)j(D6vDNʱ0r>ʕ=|Օ dMlUE܌ͽѨ7u3 69Xw:yNtZzvNVq%pRܽA IP WM.aGbf2HUvL@=x+nИ1Eure4hVWnY!HEGYruQAFd3ʗJ Y 섀pL%r~&Bhg\'eLp%@ x˨T:{3ml8`=BmBJGUӯnb\ g:(j!m[P *jD6yh)cؚu٪5Tj+?mnX0,_AiXUuZE9xgv1:^o:*C$^ W%B-b‹]֋q;zNWTr|Uk#`^jl\\YrYUK28/*XUl;IWdV]}\[ӓųa6V2Wf(u7 (kR;- } ,@QH  n@.Yti*[nzj1DMMp,hn0䔥|®e4\h5D~ֆ%3OΔ5ye+9smp5WxYrwח?҇M=KVjlJ-Q'US4ۛI5]Fr?;Vߞ^]OU>]xr`` 7ěr~vliLf(u]RwK85pN?SE66Yxy^yn1R$U~=Ҏ59lh9if,o!<2 E.xKuUPl.,Nt}+M>A2zT8#Է/%X v%1wQ2}3 LU=ͳD QaxSHf2Їڴz\AУ Xz +\hJ?KP&~*Q9ڣV~Pq.GqQ `P1 ĶNYF4Ҡ\4ЪQbn?^Ỳl 5T>AŮo/\53F)bu3%TX7+ +=rT}PO.F PliuN2&@Iq Rq x!nl~u*;d*+T c-jMcdYQXJIh'raY ԎW1VAuUnebӨj;ckO;XO Q/B&F{4F54f՞w<@lB8ў,`!luexm+V!T#\k ׶~ǽ;;-' ,p:pb :*VVX lgkP}q/ݖxsVp@,zEAu9Ru`oK_m@OA[ bهyto?~XF޸OjyJ Ory{=ϊ@@[+'Q]|4X hnAd>OǠ4YP]7D-DԏLcXZhվd>ЃolղDH~>?/4 L+KM@kG0P1ڰ"@/?4=w;o$C ?MNazCDiX緋Ng A~0MtAk]PP&-QsJy*vn-?ҳ!># ,S>U0:R(o5D^%. mGXBvH⇜^9 FE[X#1:0o"CKi4ݱƜī%Wy%$nu*X7 i;J%uԥNPzi!I&зq)i™&G#)Urm'fǍOהT܄bV*4nF ?:.Ƙ>}epX YPG.)&hֿ3d*qpՅPp^Ũ)éļtXRɶm2a3aJX4V>b[~̘Zi\Sͅ!H`=l+)v2nm XnUup>@P f4%98Ac!x{[[ljIoyrs).ɝhNfk.] s`ՆKǭr4ⴉX5'j0];"Bh$)Oc,6Rny:>q9Cqʲ@ܙLޞ4[Jaʃ*,LYJV>b%  ?䙬L` DZўk;m s_WBL# *<2}q$<_38bCӂii݅5b~W||àrLnQ3 oȹl @ӌ'.s.iC Ô zéoN$ͩCXiBtr'~t1pDǺ{Y .?txԘfMWl" g^9^W\QLHsdpmu~{Vl_-i*D2:@6d3#cm$W(T8w08Y2$ ggk _ߓ,pXBV_ ^4u([q"PtLd}{0B`.{de^!Yu@l𞌹5B<g4FHV|dSqϓt-ob%ީ* /NQY:jEƪ'Ү<"ak!y钆EbLA('\ Oo@µ.{ QF2)JwwҶ4 fQ:3Y>!ySY´< q&q$`vf"`[fپ\rN(Ʀy΅]x` V1hP`a6 g%GN %}?Ȇ/09&m`.f[mWH 8) 98/SRĪ"}j, I@m%atZs:6PD|$F h2@ u:[*4:NxTSꇓNӱNa9E#lƃWo=v[=wA=GqIςܳ^Jyg7p *L2z1sטR&`HE!2cV('v6BW?wvщu.Mɸ1 ׼Հiokv&Yr_ǙNdV:)sKC*uЬH{43Ì8 g7ۉfnEyHN@kE4v:ZM\F: - I)3\pf\p\L.6mlKuM׷*R &︟F TJc'YA%~) *AY bĠ^ # jL>pφ(Q2Qb(iNThfTQ3ns1Թ#1K rMiẊFD$n,"x~G1b;*߫ iº%kȝWPXCJmXj\(<-DFza _&DC܌1@(<(kA`@Zn%o`m:!Ҵs;S8YΐR"6ieRbj/hkC3= <3;XQ@=L# l B  ΂L3Ԃ2!\z{.{ |9_K7'!oCXm,S~:Cl7EALR€w++:"|5^8 Q 1ߊ)ʟS?\8t?dPB̓w|Q!cs`%QLA2 Y33l<AF0!svz<{wOF_ |C[O" `rGG# a/|㟻BEWɅ/0GG`Gݣzw)| ;]HzdoO<rwutt yrmΓg/ܙ]C]z;z6B{'W7&d?`OzOw^ۯT*ITE w/ɥ;>v./ sp\̇< >GO/ B;_l)2> u]P9]<de.WSSYX+3~ Yj8ԝNvlr}|HlM ;:;Wl FG~a ;9?F h?W+ω>iŪCB}7o/͛W1ՋBSߏƓ;늻^yoS@&ł}:{ Ai/KPG'Ow߃TxshobYd?bo5,0,m_;FpOm):x_?()rxrO`{4O(b?ivClϻg =ݿW|*K;/ÓWE$Q|dd'HGGGS,~+i *}2]-3v`*5UŽ~?6rނi;:0~3>9>~2]WXv݋ׯn߼XB/T@)1"3G u/۟Oeqb~ .~Yxw }ݳ ?,ˆ}k ǠuS~K!.s90eKv;ov:|:;`L k|M2:  P| ǓΦ518 ? 
W]o$[E ]Z/B-X޺/Q, @Ճm(Mt 8"iM_K8}/ӗʍ[in\α^NQٻ8W ?`XNߗ53HIEXbO3(,~ff%~}tpss4θO  Q$+1m(Vٹ2!QMޕ2qw2~c _R m٬։~/oVx#3t;ﰄΐB[R1Y\vr4%m wZ&N*IC{HCpSbXɁ da( P f1&1#W1d+GY9C(+GY9QVr(z!mBR.%0h{we:z4眀#P DM`e)r$@h׊Al+p 1z&POـra'}ЬLhNa¸&Mϔk\&7Է]xz%9Uvݙ1};&2}Kρ~B5[٪U]!)3ޢYe_˞5 AL!P*?jѵë|м3>uXsg:O6`҅9tdT1JbT#愵1mR25ưR}<cXc k&c #ĹkĹkĹkĹ%η jX!{9򔿻;_?Kћ@\ 6dK A&Blo)+ۺ 1U?sWTUzE7wIYtĴ~ǯJ.B] )w-ܵrRP|AXPČm^YMyvM㚠wEf(il!wڥW9Ķ ZeP`ʒ!;(o!ZՅM0WS!~#]ج%WUOؓiYW7't%Ix&zwOoS.|뿟W>˳?lomx/KolP9 'g7Ol.SUO_|G}6sw@ V3c:P%w UW< 7f;/Clg NNAu^X~F H9lO?{jvƦҊ0(NcB1t+xU{fx|Ct1%w$ f 3.Rxf :Et1-mH]v3Ps/CG)1R2[7 蠐VRWeŠ&T>.߅W_'8aw2=QA7_- ~&BȘ[`[b7n!DAKx!q;#c7,z͕mw!q[=pp9{ݐzm\F;v{Q͞(yZH{oR7~P |zI)Ďޜn=C:rz莕gsC Hocmocgw"\BʛmOE6'j9NZ{#[Z :Ѷ^ē{M-_רM PB"[CLJi+ZȎOPs,b`gCY,h5or"-o'?ۗ{&{:BiZ}P;(Rփ6c@hwTPX>`Ju}TYb0dY6-vZCj s .wRW=j1DoץIPQ%A,"zŭP|IUkK6ȷpx\B<l6f$k뎪Wը  nEIʂ$ Ǚ*砌ʔEU@+D"-'CEYK]VAVbM /Lu4D*f b+,!f#XyaUE,غ#ߧEENVD-jmAm4Tf=]||>Yw(]ڝGU߻VPcB*tת-,el"K٤b+KxDa /BgMmtDY,e<{"pۨYEid֓dɘL-ht(4 *b%;%f IrISCfʤ$)1hL$Apk:q<(MMtF ~:v*ט)ĖqQS*9MA圇`mrZESjs"J+WJAY$HYh9S.vyV3/t}o :?`-7k=&G-5ZĘ9#\,C ,o7k8n)Dkgq Ƅ0O XVoX gNV )J{?郔ӳt tȘNА]:4#PM6SQ)X`0 \ q ɘX,yv 6aȳRcF A5z v'x8%wywq.٬~sJ א] Idçn1 Mk^AGuPSFAqzD%R6vv3pz ]щ:uXC vz֚nq_RC5yKŸqna(hf V,xh)ڰFS&M-;0-N-Z>Yb%x\. oa?G'ިϾc43Z&}Q}$fUҿH {&X̣}Ri<[W4hkmZA=J=x.g#&}#&M*>ȯ <&SW5lj;8jť'_ۓ?[ۓk{ 'P=rZ>#r\l$4//JT7p{NbP u|bXf:`WPts"e}ϯ+`uI)T5rU>!ct;թW\)$C,ďHSdK;hX1fx80{І sܮߏ7yWoX3%"KoOX*ڽaM vu ˨9o)AFc,%ineXo|;PΩ`^&Wn<=/9cV{jOsU,џ/#FM0+oՏry#'!|6m?a30߈j5.˛_sSElGXu5>Q@96_sAg/t;ш235|)jO4BǏ^_(Aܫ~V9tH=c>d2!t+ݾn$ 4*y>4'ڊ[RFf8͒"S](ȵJԣCzұ4 NgBd˭x{oF;g\6.u2E,\>+tP\d^tX-{Seh whpPiٕ+ΨBշ YgNE>T%1 QT8,H{ư){RǗ]0 | wg&m AqYs@젺? sI؅,JUlXVU%h,tBM}dc#Rz͸1$AK./d HBr)"t$_>,d_s!NEj$X|CWTQQ>!6c]+T8f'WCUخTC@튥R G2r@pt gٌwi1,6p LBL$=Gn> U̲ yOBX%A&$fL+er] &g"RPU9-Hci>4MӇ%F5g"Tљ!EUsX]w1L G"3@_ *Vĉ9Fd-1XXk{$\UZjZDAXeF:B++}0+Yr*Pi̓0,?/'{ƸlAcVZV5h(A*7WDMXfr^hƋz)JktPtvZ{gk+eT^VtbPP,1Fw"2/^-k^XE}sͦ5Hvt$d+&?h$ `5cIdڃӈ-^`ufx\a[#/i)bYzR5od6Eۅ1 RgijkgP5 5y#h_ : KK|QWAZ2YF#JGjL 9¨&Q J#mWL WQbmZ{ Hٮc+ث+t]qa(JFpvǏخd8ݔP-:.GDZi,QEf)#(~a',cBqP(]vzBKmƘCfOV7B fnMAmRfpxR+iJlCK/7`a (lTa-cd;Ćd>)d'ݔ6zaa sb@0 X" d+ w@#1Yte$~ ,>0\׆4*'ja^怜|]mo7+B@IYdn|8ͶȖ-v6WzHI 8hYUd0]{%nT D`Rؗ^-9] e l/*t *rܯ[i7F}J4KrM$Ry,լ]8j^Ik+ ؛j,+ DųA?(=pJ@)m ~,/R8QwŠQoC)N.\,UZ^J1NPXqNpDA)e8).٬kV(=ҥUoC6GۄZ,pv#H;q"k?`B ^';K/eb"ߍJ̐xJKSW+=@ %OJ'R_ 7<<>3~na遠< `8P/O^h&[[ϟ/Blm li~x3@}}uGjLjCh䴓D1zvt?57r1W`櫋W S~t)?K98/w[(Πԁy?lw[4^m‘[eԫ}p5aazn3@8:4?F L<`j3 W>m_lnl2.D &`D&k<ڃkxm_/Ѯ8vhmHd+^.QG]a㠲/``g m+`86ef״MI.S&gB`_?KϿ뷗GHgr~$=\6Fof4mTm,6≅UҞ`t2OĖayRLL̀jÆdD:Q᩽E}P>=\t q :bN_ASHQRv Djxrrkr]lc z^.FȡPr\/˽XٜA,Y0^@,)jv-e$]* T2vL] VniN1^9CDJTyCaF^2ɬhlIRlF. B)7TK5Gp<`mF I6 p,8Zj D _lS0@ѐjKlc"m2pvp)bY˩OaI1/M|*b+$)P;Nbe PYVh2頱6:}uTGp_SH|rL+ Y IN{/-mFA)0{X1n.JؖJTWS ֜ X`"/!EM4( Mbsl} !4*I:a>qR ;8~ 6wnv pteBO.=-l@`q hC^Κj_) @z#ĀP%Eg+p6hXIg0רP@/O (#[{ KVG֜ȚU;}ZLrh 9N3ҕ$q/ @QoC2Jq YJdl"CcavP2ҵ/Qpfmж5yіPkr](zBho86 }ךݨl(^\gyhӅB( ._WJ'cÏdEz(v{avk &{B< 'ұC},3jv:={CG.XCaV8ؙURZ'MN~F=5J#~ &9籮O.?~׋SS ҇/f/"NS^ٚ*_WM175gjPSG1'#t- $(Vo0$ DH;*%np}!ʐCJS1*WGO١ّd&X;F MMN aNp͇TA8!x ")4z/}{^PΫX uŠz0ɥXsvWߪ0R;=ԪH?*='*ϋ\Āp]Yi8UZYXJC9&6b(gXq6(GcE5`0F1M;̼ G8=3n~IB4K%.ҕ?9aHor29]Z~[X}׷`wm>@'!Z X[ 0HPA=7¾m} Nv +dwdA,ҁc/f:Hyd/mbetЍf&wk L6tɍC̺mjn[] T$O.@ElF7tX#rW,Od>|qq571[uԥ:o"$?VWvw>;?7Iɱ$%ǒKRr\? UW&ٔ{bS:96TCTG- S]D _I5Gi^RFuϕ "JM2ijc RBZ;E-15I\IAzkx,ia r󮅺} mqL[~) LXEJ:xNu}YLwZ{祁e3Q+zIY{1Q./ΐ(^d)b^v|r6@)Y>->d-uᱏW!s*ɫ;&xu]Kuɻ?F k7" NꓔWrp~$NvX1ˋ,?iiRhF ^-z+;˭Rt=xSoٍ<_mr^\aM8p,X"zXq$ېvCGmWG}u/EdsJ(8`}(}0f|X4Os6|htL! 
ː+c yDyNM{_{ChoV}0;wI~zo&NW Iy]'Os5P??< m2xa&l$VG127ƾЍ9^6!g3| .Y\ZR|eSU*gy:n:4G=t`3^vŃ׋2q ?||رŕ9*oz}5pd^\|0JVyONsTZhVw_+נK^tpb˝\ܳ{ut`TV[NcU4[ ,޷4baMKNR<.뜧jS83W/IaC7|p?-ɑ|3Eޕm$20M0} ? '5/6I09R˒fgIICZC6E6IQݥ_uWWWE+:iSXL)Ou{ؤ[_kE N ܕ0Z5&Fi쏔câ1AxQ2>4i-b"t>}[WX1' h KaZ 9u9i99uubA]1רw⌸)4s>;Mq6G&N-܉?s|vVW?)@V W{(jڄ^0sPZW/} pWhw| 3tM;(KBƦsҍI)ރ揕_N1c\*; cq :pe PW egAY*O@.YH1"AYf Pgm+̪ W,BT\%AөƘ?-XJ"`eB(6z$đbih[FmEn@/u[M! yLn})9 j`))DV\}#9KGc_z )&M*'PO'^G+O4_l$Z ў8++XA՘!{+VĢ7o(-aZ≥%%ѲV<&Vx*h S=Q$YӖo+ 7<_q1@"_ħ8 8A 4 D$>,+DG8T$c#-uݲ[!,GBl-ݩGR1~c7y1T%0{5O`c$#L"A`Ap2 0Cc5GGَJۏCR'r@ =iK.buЄQƱ{=`%HHGʉpN d>+EEߏ>OЬv^c 6 VXuZ )n/,At`ng71bG6$JxS{ZUȐWr?VҦ[*aJw7E؛PRGiwgO'Ő:C֓кp ^chx*ͷJטeA|07w"N֝ݶGI=/$X^aLZ*a^݉6",ƐaYK˄ b~B4I| ?FhcB'!H P>c"be:,S+$r?쌱ggO'[,xŔ (E2& \VvLr#N9PDW&&L-z=m1$ē$Ks/8qWǶvbNDZ怎S@*v˖@/lK՜'5%NLN68[8: H>f;[+QSDKX"AryefϞC60(8irBcRG[Egԡ8Yf(vzgqAg`rf{8ʚOq~6Ѳ-|J^CP,_s!?3ҏ(ִI>g` fjci uS$v{7qg3/4+t0=+FҒ8vn͓ZRN;GK9R%G+#UlXrHo%n0 DNf"1ye[ęb:E>Y )bAǞ8k`꽚h͹ٝC%2 З4N/Kwm*JM;/lPH76^$ͯ;o2\5Y\Q5uksH0u_[ 79<;_Ѭ(8~nA#-Xym~g $aO:-Z+X8+`ъNRN< 4ٶu(@um]II<1uq48pOsGA?mN);^x$!~uDb6C)p E*x,B^AK |b|vS, UY:.q㥽C}&bMX%QS'k(Xu;1t)(ߤ2Ğ@a2f~V\>Ҽef[Rjk"vw[r$,})r2Z%>jIYzS>\uRK`,5[R >IǨ=͉qĔ ^a+ r˲M}/+INlPי%ܛŠߟY [Bt;NwV|(= n>+l6rQѱlVo5tCl9D5#YHvDb&%Y隀UNZem5N=YZe4:+_ U"9NN;L3_{~BH ̚vۥ1cҘUj?N/XaT1Ɲƒ"+tp޺6D軵 삋?wś0SO5av9숪66kHA\Q0;{Uθq,rg+@ wvsmtig:\کo[SgኝEqj 팛{uF#pMbfzۙ6W&s#R|gىBҝF4<} -ɩ(JK﬊HKF>w`-HwM)֤1eKI!qЪF' ;X :u2$853#]=l߅c;*Egg8Ѱs{ (wxcf"tYIbnu5fCRט{d#Jʴ.جQr&,5˗ z͂[ W˺FF]TT 즞YIz,A$OPyLS^$%C:+cWz(f*''SU4%Sḧ́Έ'mNFA֖V8*h#9:]Tֽ~3jŒ>ZMisYJl̈́\79 󽩹:lQӋ+{s_^!Fog 4ͣpJ6kjQ -!QYGI-@+YǠ )8|oJb{4VQ%X15T;/Q P',Ŵ;w9CVR҈ |X,+Q9 \j܍$\zfQ0=;uvM%Қ8"w^m-)݊\ʯ Ʒ)p72װ~zi:5ćX"N_NBs1^_(xu1pųKt{_)8I^/Y6gbt<2 û_- gJ*0#3&Q7[t`Iԟ҇Upz՛C:Ǘ)79b} ~|=]h6g.<=L?_}w'Fm}2/o]EtmF@q<< axhJ8k"z|eПGDLJ޼5|4ØE#?ʃ5I";Gxpo:ͼɴhʹ rf/GחCõ/Sa_ʵ7?o]ף$ %ՓyOW\D=\I B( σ^{  {B ~*ɞJԧaJ^˔ ro~SG}?D?WS8}x ր{|~afW0ؠ/3Uc;/gGlg˫Owi^uEV)wO_Zhd*)%<e NW8?J{_x D`F|߿?~ffo&!:a<_&?x`&[KxƟ! =@H~-__{; 5`Q Gwjfop#Px~Mʓ%N Hl ƟW]}7E'C⬵0aʠϳÂ>Uh(28i/\^>'[(7 AQEʢ@7o7.U.8'e"zaszcL/@DutLTpЂA6zQЌ(<{i :`A 'p-{јAUUe|ݼ:n4񠧡iLKAigQavN/hJjb0KHw~WKE"ӟ}!b4(8QB{ q cC'Z+ N>6D{WXc0 TdkKOӿ-{CI1;3ژ m6^AC.6r_v}Q r%3ȏu x5 ̅3Pz=6b[[9X^/ +6 > 91 MDP?U!W?nJ?vqY?M0R,Pڽhǎ.> nio}qy}(5\('u8#qu|:b+dK.Y\ĥE`$ٻF#Wxg=;a sx@ْWE`F)xIU*<_R, Q[^So9XBWwxkz>|όi#%ݞ)ݝ AI]jkyd[?ی_IS- owQPC ډ?/z2,(wȠ咱%J㯅IL +UGJ(Wi*jMWIMB2BcyF]`Фv6h!)q_|A͝ɝ>G&p4{T0bc&2h5uZB>4$Ĩ#ehDQ+F)b!SVY9# J%p&>qW[9IF9I!|SR[>J>8[`M<ξx&Æ'5>^0+>9},~;<HSH ppJrB 4$nG耒hLp0~hN<46xb{[s߆PdH—gY>="4O8.dH{]@{k m+  ʯXI$J)i794XxКJQZ1DD8wk?/&>!m `Rc4 Yg=OWՂ%É2z[FUU\bhڹCӮ(,``AmԨ&⼠ӘxRZJ "3NHY ϲ7Ѝ`P\&U}Wd/ vkPd](7ۃ[donݢYPST[9GMrnW @ )VXhDB U-QR,Jke)lb &\ޥTjFvՄÅO ]`,:1d`H߆ۆ*= h^ _ 0LweQObmV;s,A@*N*Bf1v.1*km_@P&ф\2:F^D cP1βAl%D]OrkNۥAχM}Ӄp?=pqÇK8> '1΍m??g)6 xKC짋\$"Tǂ K; ~ݻdƉ|UGd>}ypugj(UA vi} =)$+ !Z". 
*%I 17z$ۆ㹄43Mٖ}^U)4g+Vg,;e arU?ZD(tЉ#z=y˞󀚏 )Qbt+hr 84hWH @#qaXý!Qt.xMqY1y \s9=mf}صshB#ʅZ3 =+IҴ^IԿ 1s{ǒzZ7ež!ٵIfR_B1`j}5JFD_T:!(YDḠuR`kL*l~]V.:Wq=T_|,l,ٸS-Y<>m O>Y`J 8"ql^'i{ 1^|wߜjE=0q&4Nf Z92J8idVT3FIb^*Hb!Nl9us= eN.s\NOn NS}#i Ni^/d粱^fNhuhpAb2UkN|?/;#Xg R%% Frev>Ti^Y_[>H&ȎJ/>C d I+<-n o,Ƶ5/vggG4ɠ%_ ǓЈORU_F(W}/j՗M1rJV4qƐcm"5\(ofL1AwV7Ȩi@S1C\5*NuNZ RS[aa&Fu~%zp0*UEFQ5ox2xB0peۧU"/&۩!_Ru7>Q[_8J&c͛v~9ėG_5ZOgzylݟ{:#tƄ eL0$jP6+(WoirSPS[itpOcɠXQrhMɡŧm֧&+4@QO)]h Atݴw{*=F`X0nj B0u{"vw:u)eN4LyJ ŧ6҃KtޭNPخ6(FzHa`UhuT6a+=L aC( 0FU-tuVrZY*t]♶a)<6.\<\V8HUB 򻋟>ϋa-DiLYiy6ʁp(Nxb%x8Ls I:bg]߉t&֑I$D"b"?$4xT|s3;A=0FlZ9HTYaphzJE-NO>QFG4#vL-T<٤ؔY6ev6Eh1T2o/?[\PfYM_ըKzBl^݅8[g@{r?Lp =׺k6'TAiV ﳯs/TDϊG", `,W['GZ;bU;.jIJ˙}oū>B +Bn 7RڂTU>Oh`VKې]BQʊgrnGwq0K#uϗ4'B^wo3oG2n>_SF>@`܅x^Gt֊lET6dJ"ySw0QnLGJhe.o};87wyHJk Ydu"; "p>]Gdm ݐqG[ՍdTO> ӫ\GCnvm( R TG'Id!2N ImGĨy J Vv5º ;Z9-tUl!}/2YяEsDf2߇_~z>zE uQfSЯx3BY)S"ZNןT@8zN(Bk{aCEi|6Z2[M,F@if<~Z*|T # yuF/Egr`0On'!e5e'k-9`zPYK<WVBv6e˾չDVr$uDLU1኎zȆ{ױsܢnqT`ﭢ-\=~<z-3u6H jiLxs$=(sʨ4`T @6 t.eE &2ȟ֤OF[i9x:PCrI8cU O7xtJy9Z~litB6@su*au)`Yƒr7tM nrrLEFvy^ΖgE \DY=>wd/GnH6s5;5-WNw+)'ɟ{< V~ҍlu6,xג&9#H 9B|nUF/7dsȆsz"&!CO1?,(hD"g( ~9 _#?W+џ(((j#pŮ})auuu4^{0Gp,vL~r?Ys`sx (XM4٘pt!s۩9toB0B kt pnHX{#NB#y:`R,pTT4D1S{G~~`i:OxFw9'۠Y]G-T<㴙$)ŖRd`P!C6ȨEw=]J?msdе zҜk,Y-fT3$ eiˢ vz<sNfͺR.Yt@pBG `Ɖ4!`5J1Z>8M1I/F]ߐ6P#O ~]jԇzu_.b\|o7Ƚ \ U{ mB㺸s 4+qR1jkQS c2&\8<ş6E5[i;G'铳G_&˛]ϭamPSbabͮ?Ns'jNhvxNTb?{naR'奤{4dĀ'xp'PSAlcAxcmK %>dʾ!~-GFvUtkrȱ4x-{@]c<}:2Ӿj%ayCN9]pk[fE/W8Z>&`k ssk9Z3Ik?;ps tS@Χ9t'0m״/bl 8i5#>f1i A5;ؠNf/ߧEϛӰinet%ab\dbg= z (!c(Rupwzչ)g]Ʈ zBa PZ/8q0xCA&X3T`͟>;F8n$i=z{oʣWx]]ٽz}+Y$.F/믳Y*2ƣӇT.FvNE1',\68 y[e#c*=8IȘ\~p6m|B>,ao3 3 w \2bf]Wѽ"ӗᡁb:q~cdJM?oҟBSH~ Oz7E¥yJV(8"EJ!c~V2)5R$)3 +:uNOZ탮F#)$ePϞa3G_,$~ 4 L_;_8nBLIX5j63Q58GX5B}k Gϙk>3FQSwt/I%:{Ijٝ$McHw=Y}VfuВ>6&>uF"LP yS Vy@Iwm=nJ e" gl:9Sv3g4O_R̴.=bݭV؈QI~U,VhZ {y:RӚ A0[ۉFk پbG)FrT1^CłAYc%Qh`45qFAE0$whM!t(,cf)$zrWU'4ਙ^Y&eLA\)]xpK}$zX4:k\ F4%C'(59[9Q}}et[*~/oq@|vA|q[痍W|KMח192i.2~J BYn~:ssO4]sK+U#" ryGwTDKQ;h]њXD=߿vzQZ0M*J~?5H OG:Ys2h] |rHwv5e.jˁ jkp1vkֲzBO"y$ڢ *"Tl}dt|jQvtKZhu]Kaux{~)o6x߽Y T\tأo]/~mdTDӆ0dO3W=&17kGcQ * uܪXrO^\fTz␯EKx 1FS[ҍF#JR rXt.lT`4V\|,z8O )(?\tO R.MYቤ\tZmTR.QnArE(*"a甋A+J^QoI?{Z k j5Ⱥ}3'uf^Sm4)D@W?.% ҢPEc'49kYj=ҙ1(֓TSjh+JDb_)DK{o8Ϩ?}4kD%OxT_=DbާXsȇLjZ݄ON?E07Q=wBQz(AԟAyzޯqw4v b،<\^T0}ג{k&j/j粪V9oCQ)7Pآfs1KYKɘ (_JsH<~iÆ3N[a]ɤ=^:忾oYjA0%rgtG1u"@s 3- >p9˳Jee>#j2@fN#7777Κج15B]+aA($1z K=PŒL)g5J*1P:`&+ca d9)&&H`34Q/PK6)2DžF^m]vBPpvr`pz4^ !BǍ vRi>#ȘXJ .]X[f%) yw_Ǚhgo<]L8e-(ɶnU6(9h8z)RP3P|Gv v:2@!Ӥ"j 5r!ڐ y%AwPLvHc@]!8I;s#;_cd;D#1V j-*u2'it!-KzK0?E[U.6URP$KZO`) t 'W]_K&W54Й=Eq,aw!HM.}aZ7; rpB…+z]QJu4J: NUEt/xAǚ3Nki$ۯ} V@;óh2iHptz~jD%xź߮Y#:BVVM]YK#(Hjc@%RƸ/Sl#^p=ȟeD!¡\W. q{` {I4YCIx3@TfShOƅޘn C9z`\'~E<Bκf#YQg%zIAt"ϻi:R2c Jh~P"Yь ߝeS9+A4Ck*{9`!cmCv I|DA[ZP(SY<:!*WU!ڈV:8sR*z5R;vG" -u8H2ƩHq)i_*e$\crw`g+ U2'-8؟ UqzsÈ豂ZCT(bL;괇%/>ݙ39:I%굔5~-c`rL9Q&r5LNǝH;]t6E/8SҶd)x@Ǎ*{Q & Ngݽä&nU sF:`ϊiiۀ{@[uo;8w9^y յ1m"x$nd'v+DDs!?m9tu@Q>e6  l|5VJ;p$dJw.g)r.g)r֌l!\|PɈP+Xk Nc!é(.S:PQ-uxy('["iBSyB2uΎ$Zy=9t`wлM2VsE10++"QBM0M.Kx~ȩ] @9K̗(I5P*)&Lez7m꬚Qhg=Vd.z L8}=e4t~UÖ f)kjf)kj̚j:lE܃jdH. 
rGwZZPp5lm@Ei 'aM̍)9Kad>"Pnzב˩UsOk&x<392g` A3ڂށGth5/Cyȕ Ǹhbw;uŌPu7P[M{"Bt*;QV\u+8z)PM} Hu޽7lux;MYSmY WA%Q]T6{wotu~FAI^m oc<mo?fR,eR̬4̹ 2ƭjmR!hCT|3 kÔE#20/0/`Vhg3ҌIyV7tc8K7tc8k6pN8uJ'ix2L Op3-kM}t".p iL./f""N&u-ϟ%,gɟ?kےi4*煤f#]^k:j(7F"mY:PѶ,|?_8+B-纖 dڟuLr` V'Tz?svEjE@}zˡ1aDkwS1?pX?d 4H؇iG'iD'Tz(e~d%|X].V)&ބQK_ !V Q~'\Ǯ7Od^kPGSJjclC`Ej/ $GH͢IkL<d׻(#g(O e];רe'(Onӗcۭxu%CStOtYH.32ڢR1 Z(QP" Aj`Bq'tGVxZxtcg瀵ogg hm+PԣY'u/-'ciDeZ)5WTřsR`0Q:!==X i/)FOȭ}ɣ- PjF k$ZQ *qJKrkh@{KeV8݆SsϾFDn=mvMD[p:999[ִLiHQr0AѺ&L2 u 2_dƉYz=S$Rݏ#,e*jTFi.|:養q7h͗?c M;1߫geUm],b7Ȑ)Tn}qʚG`Pk\ZDðbk219__6^!/WpV 9Qm 5 (ESh^(@WǣYE^.bi줙H+H}JPz|tYxX Xk0 }'H1@T.tVz.:3iJlQ3˚(J8N! ,NGEd.5k*Z3~.LG(./ʞx,+MvG7魏[9jn= V=`_,Ws3/28 6qۉ1ߟ,[Ƙ4AԞUjzH93;s*3Ϻ]Ebf=F_DFZF}Fw GVze+?\Nќ2M6d#oQ6`Dщox+VNո 2T[#;ЌY88b\xH8`1:$r_%ڙCJ3E86$aGsX^3J l̅MVz9qU缣yㅐG=rrjfmj@t@=^ ]m$ ux t,ncʼ-Ygc;oOXG=yi׽dVr9FB1 ޙy6+f=  ɾ[>={}u)ǣ|.<=PT2Ǎ+~1rP~r8 )N>$O}l_;K=Ia3=տz -|o' l@El6GOnՓ1l߰Eafhyˠ.>Huΐ^!f (f_rCBWH8= ${;e{DU>ajȞprY:r>d,&j;ڽLӃ3o t=8\$!ј=l .'ч_X0?~jFLga4˂«O&=uĘ./^6;.%_dVNY9IfjV.Gч"u{R{J؛IeBW p?uBd~0I 0I j.Ctێ$c$eOR$eO'ď_R5sߑPՈzOA5Y%r ̭g$}%=AB3a!HH,,#B6`N 䔀TH XɫCHZϮ>&!dk[6M[ZMNLvM_3 +mgbrKqLGʹe4!^QN" Ty,4Do0ilp"eL*lm֧2@FԞEE^ekQk19F\y&(rE"zv{OCk@&CҴuD#2Y&lQjPqg^ׂt,ك_ źdEÎ˹\ź5ڱEl9T6YĐEy[Ukjbò%:vUJJMov\\ ]~|U npޒ+P * ,PmY ñSrBDy ~:TjXD:U 8m˨N]ݞ3XQe{c.C~\pIù.+Qgrdf܊YWw8d%܎lVHv7/R5v,-aACO*-HfEa nӡB4zm@ȳ<(nG6%MT^XѹL(1nC%H#0sS`8#ݞ5( Qg>lMb_zk *7|{CeBVºKm5&vNbBpNM]BǏG+Kr  'NJ S/taǖoc<3Lr+h_<1θtuBDc1qL8Vt|&R>+eyfR1z>IF´EN1{ qv+! K,(x FSdEn$QT kʫL(tT͸f(|㩿\rYo*n&,#1/ [5)wG 590̴@^׹/,\Q]ů,ʸ -Fu ۼ5(.k QqU# "gѠ`h&[n>3* 3*mf7I^p#.E/nsӯqirc~m{|;/dtUݹ G7aǵg_rcAlU `~> Q-'y;QOz=I$;&>m29WCC',,cDBa-Fp[cѽO+6f0G l+!bwH7NB).-2Hq.]Y)"rb$Jw"F<̂#1W>JkΩ ^ 1o$vw10+zH@N; ˶Exq'"МE^`$, ŤPdP\O`T~T F0K,eeц(%B\u$2`zDƅ+Mp%8k#8KvEC$A9d\bH;$X e Z0y-Zo=6#LYx$_ʑnoJǫو}y?IRʖX|tBd1;,rSRsO>(.#Y\ٛVktX]XƣLz]O" +N#v" fY}AX0LAld}e.W+rT|ZHp,^m0$|3wswڗ;g/O:̋xA::;k\Z"N֞g` WJ%FޗFÙ(='P)BA%Y"&@ƭS4ibְbo7Gϡ/Էޒ˸jb;ٚ/ì1 +ƅRS}>鼓DXc l4,MJ٦G̦H"tn +LPTRa I0 Z{/K5iAPw}֞/ن\m`qoG,@G[.h`^ Mqa#3rBŠ,6sF^ۡ HNW|~Qz' TV$viwM:YL*t%_鑿_{xǿ|8%.)O/yz=ycy9һߦ` 8jw[拕pu%h)xw;Wf*&lonD`m'H~~{t}Gcyr}+je2Arĥ$*GtGU%M g%鲀JyN|GSX`:%R "[jS\Rjx ^PH\:!Cf)o*qO J"JuәA5hj cڼ imnk'E2s:>AtmgI9'`(nV)tڊ՘Hh+ºe@7RunX) 6~mRBuc1 G Ơlj-ִCy7;zx%'gɭt)YXZ(U$HZ09!8MS)(`/,h&G0A(|$%`J`b48b 0 fAsmUR(Nt&`Rp)~m\k0 e%c{9do@ .!Ђj6\5ۚ>*y<ﰱJzx[jhB pmW8ٕmOz2:lӀIp`F\DΆ0y.{mCpp&8xL8-9IQ,ë;d3^^|x_3nnR9UTs1=_xf b(ÅD8˅֟/\!uc0*F/>\7)wl(r4W/4 f!'tki#>و en!1H##.j`p^cMAbɢ `Tdl b׾fI{Mm='e9ȽY.Z-s;[|MY\2ݻv%FnfB.cB 7KqHdiȫm{ X14| 0O 0Xxp,)׎aXfIg a8$'JbGjXZ׈zV $`i!hJzAPcU ERNG: )-e#8D_#MC"tc<:&jɤ@ܣ+iH֍)qvO'l49` NB2a#00Dp,]KF^xV m3 pnU0 E~<*J3*{?CUxz0/~7?__OL!'F4wU'nd4,CF.E0ƒ&\0k}+!;7shAewt{reBʾ?+RRh|߲ &l Dhr\?pLvP6>HPv`@}Z t #15D{tGFcZCtN32LJͦfڌ#w^PVm\ iʕNnlHvp"O UwH}a_z VLVh Y w{+z-r>#R÷[#FƴIR#`r{yrhupkRRC$J{ȱ +.(!ltl,f׿| q=MZ0ӥP }{U3>uF`M` jLMN[/#=QqU i1@#ֳB1p̕6zIp:`Tk4`-MJ=վP*WiH^LPM1<*;*}ܝdy_F]ߤA\Z>i;Ioc #' /7h?w!w!ŸB7t$"B)I RV1e nv즵 62$L~0h%^On| vu0 6q!wPsS6rsy5JL JWY]L"T+]hhs4 mY4,~J lF]̖Vsar `MMWH%ʃh殴҇syE0#njuzSp(05 ]Ž zwGVWkw5U8H%݃ a^["Zc8ͤJN!6TycLANQ&߸~۫'ڌ-6\gv wgi5xSʱ^e=䂫 ^l&@I;t VllVhlŕ=%mOfԟ/Ra)t {=7Nf;??'>==Ms?=z>ѕ}8C xnJV;‚V I@qx21Y<لsy /fo./I2}zv]g}z1y%>^ygi1i럮cήKAs{[`FY͙&O Z ݽx6yoӅM{$M iHC/6V`Enju5S)lRhPE>&rwe+DnZrk5זj)b97f@Eۜ a7 !|}Z/V;@EtOXF'V<, iF8)R0z;>#N| 5oџYg߾YuɄ3lݷۻbib˾!7d^KߐyoHςiI[S̫y688]%q`QY{)HaY+w^XWaCn)>% 8h86co/#) }pҖ&Ȗ-%?y^%?y^O2kŅE>j 8fYY=e 8lI:3rk[w^k& v^Knu/xy`^!y9d02+&&O\ؠC`)2o1bHZ+p߁z)pV`I>E-4k(Ÿܾ,foqȕ"?ܫJj _^.y mO]^ӟ҇ȳuzAS~2bZv XBnÝ֏R3? 
͊2.$!y?W殮Nߞ |ҋ$?\oT,0k!8DHj1?EOrN_/]SM#|4/a%nuL DMAET l[1Xic'Kc>|3=1c\\+krq>Lr'.pW&x~ qoԄ}KC~LlOFg6Lv[P18"h.LD+M)y-K nAG!9&XIOcd1i5rwEJ DO߮*.]KBN+IchF3`N rr>uVlETB{E噇$)\*G%[)[e-xT<"&0“ N^@z#@6a"dJ&CU2P3 (rV<PR ֑]V E-}MZ9[;Q}l͚(5dv[iU4j#-,:Qcʆ LNLi ƀ`Vl1DC|%?Vmڱx8x6 I-g1jR1fj%p'EPyԐB8kɟ ss?$ '˖eB@$g%ʐhNKr(^{T (;!6jx&I%@U(IUR"O)81+.E,$L!rZ}-#Da<(P!crr^jBbΒy:PrP9 [䵸 rzglP:"ӺC5h$@?Tcx4,r7?[H>=?-ݠYA*\T ,&CY̐>$YM"[̘KptQҴK2mWcz@H]Muce"W⧛-͛]?e\3ywgFV]z_-nh3p a6WD\^]ߢ㧳kF ?sn VF ]]4O Q9_i d>ӧsWyu#$Pݏ=Q䒰RfMowMvmiixSCȇe}#֛̔@kgB@&vACߌY;G6My`iji,Wrnj&F8FCuzjUP]G :qZGӞhYp\wS 5IZeH; HZE~vhyӱCQm?Kܔ Y/Q+F4D HŇh4?랎^tyC'tV%Lko }d=G|'g.H=FYDC"4u RYp\sS 4#2-5ɜp k{`_%W*|muQ gs!rSv%)Ucxx ȁjE>W*U QW "#@|Q{N)zwckI`3.v #+ᜬW^*g!s0_ay1kg} 8ZO$KIp?Ofa' v|35@˛?L_a c9<2#x _V~`N1{jLjiAH3VitA |Ua LddS=+ò2>Z)xy%iKLq+@|:3i}o#)~@uYG[⒖l( ٠4~˙lQ|Mf]"We$&U&1;3LdBk B~( =LCI{l 1`B&}riK" JJLcr3Fg!g}Z%&SNTnNQGM5xwJ *SvUqz8Y~ƫ4xvA4Uv& F@)q4xu\J SņQ%#`Ț/;-;tO.E˄5絅>L4$?]Ⱥ>#E y4岤ZnGM 57Uzwdj`VVMQS"c)?j{e[fOf=-ݒp`v55/[S55/[STۭ<@./PyWz(< d (.X4'%z@|k5%<l APdki[wkmn/E3NiK2֦-*I]JZKSM<%#+OoF.㽽1yX{qyFI  ŗգ}xƄ ?;|2>h?ȏwk,^֙78G&r*Pkov*zp>y}}~6/>_&Ń:;2rݎ(lC屺udVw˨߾)ɼQra#ZXB2DؼtZy3׳_X&3$MS|E) SPqo57zǘ#0c Ljt !cB [B B'K p a0_ 'y7 jc<惃!`;G1{JjA)=Ԏ-q[̮zeuds:$A\ eB36)B0Qd3l)Rǚ(DAt%هb*OIzv3'zy 8.T~bD7!,l4;'^O| Z=z|,R0 zKdF!LYt4\h0#w_Z`\uRड़1Ԩk7#Qqa挈ΦIOuHG98 ;c^:y"мtb<@: ^QSl@:%㚧MAo˧oD=3ȕu=f=zFI~s"VS ڵNa^'Q`o/KcvH#O75${1fkZ 1$BEH޺wku7`Tvh{ь{7~1 FcyI:ݳUݍe#x䥄N2 #3*A bk!2"w0jlgÝ&7I]y\{o=̓TѺ][u7PPҡ`侮^̭WKQ|OEpSG. R>eګ+됉qa3vGӖ0S#-XgZ91i)4`!t;y&[8{4Zc="H]%E[?vЋuA5"3.#*d21BPcVj/IJ]AoɢZͣ|VIc:Z"%m5$:C+di1fh  ~v#ojH)(a+Eva$.epJsZIZĀOSnIԩd'.epdƟjI>g-YuwӇYjh !K;3` siO%PSc>O{?U )#$c~#ψUB!H.(N7!~^ k2$Ɩ\SNys\ʍ72,7CU4 Ŀ$iVy>ѥ ɣKh|xH,kN6 'v{6!H7uuvY-/gb{:^ؼF 7h h-D'\zcrvgP1s}ǟn+~y}&ʸͬv<=5k~uVz|5tqyqUu>+# DV zrl˗!Ryno&uNe)޽Qs,>7ku+2XkTڕ-Vor V+ 7<>Zp˜L e2ӕS1o8y8푤74> =8}쏨bj<zTH!bye\LORvJͪKjV'OcPTRw?xV8]fQzyu8ʃ$M?DKufcTK q뷷շV!e*jQ>%m#l!a17)PE:isyx<ŎyR4U17w*֘/D*nRZ sB{y%,8WnUz>#N,_D1yǨ!0Z\Trフ`шBt#c*:yUz>'V&}NTUE7J^UQΡ@]32+WѶw6:q_<$8*Λtm5띵@@‘t81fdm?IϰU [ y;֓ 3vَ1I&CL`o1dg#6+E6f"8Y55/xW;z>wr7yI\L6?'alO2%ӞMryu5epyɸs>ϚktgN_1r;^p?E;_ۅ[rVFOq;Ɗ4;n} ius"?^lN*4%[t@/?N[]7gL1B .@+@oUbj2 k$GLe@h 2+n& ,dS,W+M%_=/lYȖ grlV 3b x ` ! 
׷sKqL]OvEά1\nL\OT/׫$K!<I@_û7OÔ:|Z{CԔҳa5di7%V(b ՚Vgx?X=3g}PŅB!XHC-TX$襚( Aei7NᬸCbp< GӋ6b܏U3( X (:ńX{[A8{yBt pU=42#y2Rm?!2b-!mL=X7ErcTd6_f#߮vAdpKA9޾?u ۤ{1mEKÏ1b1!g-α1׼bG\  !vyJ *5R}8dz 3GI9v2rYQd?>!b(a'R1~Ĕ 8% 3=wV?w4m~wVE'NdU3x9Tㄈ"y0pb^$;ty>d~JoAdW#w!R?}or~qٻGn$Wf7 ԓmkgFLղURYReHIUґR,aV*"SG'ћf^iNi,G:.EdV@92~:M2)S+y:.h$Ft]DvJ!4u3*kP0JXC]{0O:ys!`H(DmLInkܳidxTd;~~ݤ bȇ|di͗wNhAk-!aW ֪KKM'wn#Z=X  M'4UgDC0N=pkQ}ޯwp󃹪w6aV0jmFjNn\Fa4 )D *Zn+F*U=sTVV&žg @xzdKGגs $F5MH!Mc{#*y;`l=D@s&{o7 E`[VC vA潸dCEW"yh ƈVq_rafM׸gW{3TBθi[΅TsfP5g_ VL4mU:I,DSc3B@M%pYqwU @SX/_ra&o9DvRK֖ڂ 57 3Y` %%VԔW͞;4'WeԳ4,iVDZsFDKbL<j :5Obj ֏~ v miPApm>D8SԘT$hD h Qmơšh<:cڀ :`%cMu}|kFz1MU^~vI&\rZ^QE37B౒}P(%uէ)}-Jzq Z|&EYx]j#umz+i7 'A@޾aLfSJhA*+B ˕SyDȯiD4ثW liʔeD$"u,.Um))Q`o`Yd;xWT(Ĕ}ux"I8_!XJTeߎmnd@'6!u w}:R\Ai W[GDG*М<Ωt}"[ RS;eA4ӺYիXʁ :.0< I4$)ޟw"(иM ;qp^lu>qֶp4m#xѐvq-u2u 1&Q#ћ^yþKb_sIJ NNv>Njo3оرFG7ABwrt+V Dci'W.MθںB-,xkBlڱFdoLA IU[):e~whœe_.n[ˍ#YYiYV(bܣ=%#1mO1e%ũ vtEk0$czha`3.38Jp&* -P{)8ai">Aۃ8tr}Odk D E`SgO%<D[b u @N%j4kEp~Z+JJ -e{:Q^RV(#MkQ^vQUs=b&8@]#|DxɡLɆC}/8i]  x]>p"GTSm4\1} v"/&'-X@ 8upښn^UާC1޺?C}d,-;.@{?ޅPPsq~憥J*ٽ3ZhMiea1i%9w} 78aAA4%<'B7v.Rl(U[;(Cg| |D9./_n2XJlLGixc|"OHB\C^H{i9wB3/Y~7ӄS|!cLiN$8)5"{-r480D~Ԡo fȡ8eez3yaw b?ՐX9X|#_zӅ8zw`Z;.7d\뱝!!ڻ" xzI_Rkv?%hG?̚{:|J̉maB PiSaCn,PbFLIm%$cԥRA=Jl4H&TN#JlkDS;R{N1sV0dgk|t1_ga.Hɖ)IdkZ A$<\ĥ@P3{]* yV_&+C)A:1 c:!j9qB|ڱFYeWm[pm5v' imOw!)oPݯxd/;&I)V_Y&p&ڂx./ʵ:QYt@SynQ%e]x{$+A|+}4B^@JEQ>qD$&6 hѥ"ٱV=Us*ªoM:g3~r8"եՃ.Ėt |d^Ҧ{mNmm֗bE!B!SK[#_Uow>-t_ie hrX T!Muؖ꒮ڜ*-& o7m[vJҴ8ؿ&7u ^^P,qDd=xSMw(|B6-2;t țJXfXOY]37EdLCo'%v˾C Q?߆tŮB]´|?lұWwף3[Pczw#v|O(65U?vń1 TJ'w{ӳF[{ҭs7C>D(!&NGqa6~p+퇟͚ڔO,0(eVRJJ,R`B\hTx t4;^K2JR/0{GCX۫p$NdÉl8QN4||AL}" Y)T~ !\~ގS2C !lU <W72er Q5VX\SS-& *FN- d 1 m>iGJ: LrV7ƽPCps ( I'"c$@mZHGL bpFd2s֩TDRܱ^-|RhQUm]?3EIE%@2<ҬMh:/#SRr*!M&̍`cZCukH3Q"¢ZITކ?ڞơf$ x(VrV(R8KPI N;T}1X!gvMard4U՘bz gɊTCXiKIҪХ- 4j/b=ަN[<H 0n|f}$thQSDa55K3L?6e6t8LjsA 3WunBԢ t2f,hU%"Zsyu<512Bn@(~)w9z{4*i-OU/避[_P?jIA =?wdžϙM>etp[|0<^2;j7C4 fn<M)My7/]E3]ϻF"J PۃSFի,.hX0m{)|)Q CQϛ4gтQAGzyd* t9Ƃ՝;8)o3Yz}逇Zi: /H˘"d'6ki럅y`_fA ~NŲ;)&3,h2vru9xkcnjQ3Cq|5z5ʊBpGdE5ꇃ0] gi?YBEv5}E:OnQ-0E~-|oW(Ɗ^ay4_&ؐ4:({3o}I0;f~L<wUO_at6JTh+YR++FB.\DȔa6gVhk-sRv+C k $Et,?E6Ƴ籽7hY#+7|60a2Z0P (x0nJ ˸ٱ~mYz|kEФ6r+&0xuu"[(M3KBQ/z D-gfgfwgg.Kpfaw"₈:d|}a_dJe@ԝW\:?κ;Κ2)pgلnI)uǮy<ºßVӐ ѹVX_"ȚBweane'KPZUDVI[Aі2;Xj-c-h:I㇑wH#'[ge>FBs%;6KǵhɕV6ḳ'3*qǘW*=fYI&]ԄշjZ֐ mՔ%鲹m@豗\.G\k\~vյFjo;5 LmطYvŪQ0fQ,#LO;I0\ַXn3^cCSʦ䲯hiwtC" }d@Lzgnq8 L AN0&sLݐtCh@k*Na&Z{ /!`p%9-1JF&8/ZReH4 C0"c㿌rŝ*B+pmeY&EEBj-G{, a愮 s.:..èP{|LEDfSPHST2K* L܄śdq(50y> sf ):I:?)f!L I80C &M8JK%ѓC0@o2#ZBo"$N_2C3dqCJ p }`yFLZZ ,i}Cx=_Q\> zI$L"~ƳY\w;uti\l!Xo3.OvwEUȇ/׆5&VZ%VFTW6!McR-$?nw; 9i0)a#YxlMg`Av[\|ɬV]= uf|/2-_,2X5.|cY{>VԂ<ԂIOFMpTbE\1/}i: =NWR^Q!mZXmsK.D~M+ֹVvӶw[4B-Sa鵺U./VI4R_+jk8Nh$*K(`^SNΦWJETOF['#:̽A sx(Bb˩ d @&Tw&Cn R2@*F<VXUt9u?z3\wU:Ԟ+ i*ϗh<5,O8СkSӠ7y(dŽYLl z<hs,Y/}7fXܘaqcMzX,aܯ*#i*EC'; FQ / O-wu'"w~ _ ESuc, DAb }Y4#o2sD50VY@FX󩹲F*ݗmHUpw~mj9&_>{LTړ~8Ɗ nhye |Q6_L>xP,>xinPrfFnFx}/kRrOal;&d~**ͰH9[s# d27xѣNwkkJNג@#g7MИ޸@TNӹn/퇷UV&Dō8;8\ wXO,SvR+|}Y(Gp,Y!/WLr_$uy_;u\"Rq_6vկ8yyˊF_0=dg-7+p$[8xv}pHn=Zꮜ IQ'v$FYEHnXRn'$.yEƼ,O͜%{ iJڟvk&죗FMig1ةnL={EktIR0 f'v6f5M> xW7ʤnMhp?{+k_) gg$(b$uj Y ?QDpLo868#f8zNo1-oh>87&s87atb ]8=H =8+)28T38b>Ijh4GIh2˿sn$`4އt㩡z/z&*4Awo<W'ɿ=32nw<џsK0uv7H8w;'6)/o %վR!\H~d^ь_Ѽ(!~#<0gws:Ϧ[ݚt绽v_tp|=O+ͅߎ/*~]/W^woήÃwo_& G32_˟n̝.mqo7g~=3Oӛpp02&A}| L _{q ksܛeU?l`6qcs"np̼%;VṊ'۹Z0OLj M" ՛w'nyx Mhv cuĞ$;SNʊ_jAwϳN$ 4::"G;@+hlKl}+sjo|{L^C3 ~p:[f%hh} AxK5.s3qh8 .z@~韛'Jz7?a&~߿wiC̐%Hҏ< g[.#!t<,z1w㿢49c x&x 2Q$ -{(X |zP 9LK8b:Zg$߿ej2 Y>!sc{vqGkb5{Y ?R߬gr e}湤)}ꄡGV=[ ܠWbm3YP3J^mF,&;3\;zC9E 
|[|0x3gǣ,Cy58BK3t/3 3$k)+@)&уLѓ{@1֤40.\ &/}8=)O\Y%`|+aI@^B_B8.\< ]jHKOL8i L[!LwiٜL'F)tGcpM/Dq.GH^+ڃ$t(Ӵ$7m\V?`V~B]v(lPl; PxPL=+P.s&sw4J;V¼. 6èNt:#]2bV LjuQe!SasA;,юL!ޚBtS":4\ ˙Bt[/۱"maEr̋Vo},OO*/l=oq8M|: f #c&}69C޽2**6F+8v%7D7S jR bp^QPWS6 gm-h84ިU^}$<_uY۔|]t_A775-a~d-5; ce, XyaC$l[9˩UW6dy@ǡQ?^I5cMιcI-օq'#G%u&k˽eβr#fbX=㧫d qfC(c=bM޸s=(. S2`"ց  ;w4mr*P_Zˡ_v߉蘬E\ȆB 1s]S|!մo@;\p߱n.NjBCb/iPFS/ĐݞQ91RR9[hD1iL0"jգrޜ;/܈ Κ QhުiX5~z+R;U!)hQ s$,Y|f]IִFYք5k:J?g1hK"Kkѷ؄6E:CgXӚ4ʳAikfãVrk +Իn1t~n6ǐHSD4bТߘ[Z7Ďz`yu~R9Hy۔1pa|`J;!~+.1ʽlwXC\aqP>h{B"u:ؒݽU#c4F9~^~HÆ0Di60Ε:phJs]mrWwP+eK ?Mkdb`dULr+QWʻFC̻ UGa`ƽa9gF$f g(UhCI#"e2!U*n5҆q. j:ŽvZqo`OmUmW6NC d:g.m̈́L6[+k4g+7~o $_8_8%L`'LLX82RQiRHbBtq̼^T>6TI>6~oGjn@+.M[)}x9S]Ovhr )Cb`*Gq0pc*idDcǬ(DX U㪍[9Jq8M%i|qLg%7&d?>YShN ^d_gK晿ȲӮ_+'%b$>o#r|saNI0;M$ % AJU:- $tL0"2q^v5˩]S/ },lV+dlY$Ȁk"onlePdWUN-Vo܇ x&>La Wꯂ T#IF f0c",ZFRsba0&E(0\@n]ŋݳ6XaH殴\rW\/*/+E>I~k0{G'Ԓn{=뀈X:o= ț%`}q}'$PK~M˵4<9f }@!00􆉾=a0k>]E]t0]nޝBfv [3m$NM]>y: mgQop՟'O:&l}ðt3$~Pƫ?O~?Ɲ@.:|zr1o쯴wœ./߼~˓4yE'Π?IX_y?f?~z?:rXJ:OioˋuorE/"y}_|;YD3ݹ ~N{&&,Asg2EvΠYi>G4vvHz7nQwRfG6Ӱ, y=j̈́w ]{gaO ^0Z#%3(NN-)3ucBٗ%fj%yv,GzgP^%3ap6rgy-dчojr'WYlv-9[=PEB<eȥO}=?~_iK@13⭻9ܾ(/9uxz49pʃ/OQIfqHNjˑo/+j:x:Bٗ_W,MGOE ([0ӊьIAOCw)SA9?vG'8 `݉~ϗ`g;yL鴙NS8m /2}f-{/q0<rvCy:Ƭ\ڑY%6[AY_ɣ(օ`1 qZ B::LXL0o(`Rns:@׻ sEދO9i%z|"fvk/# H2Bj0Xٺ\t`LkP;aNF86`8:u.r4M\(X@BF+3thGBXk&ЌKc8NZ~wP:e e#%06F,6E ޵p2|GZ]"b PiZ""6:k b'1ߴXb@?Z̹!OU!qXCe׈ 0wa6HJQHcfBsadaHZ%Ř2 \ j(+ *iU1~h8bDw2DZ )KV"EN5 KPb&gC-XA >uTj #(ԅ!Ш pA4wea`co0[#x[̢޽+:03GR(^bD]E|z,|d2ni\Sν~q9✥dt=~\8b F~hoe(o>‚bOɸw׵B{ x:Xf AJ(}8P]2JFUvo_ǩl>W6NRz1vp/x,p|ԬڻJXZ Yy5Xz5X rҜų{C n.a8%݃9Z%{CX`p2dk.:Hll rmoBaծ=`lo#-w&'>pTKAi[v Yk\{!,aqk5Viw>аo Jdure.{~?H(QB6 C°xa.,kdQ4r]t $ wb3ygtYɳgo;LHa:JդX%P\ČekknE:_\NvjjwkSvpi[v$I/EJHPd~hwb6q},nqÿy 5ϲ{oo.5bq`卻y9.=|tˉZ,Wp7_M BMbgllDEq)z]]~a:f tA[2@yo7NPR D 6?p ևZ>v55[]~3,%wJ:$xqbE~ouGS޼Sr#|jbp;%Ӹz2`Z~Idݽ%ᾮɯ_&? $($ݲN6K.! -!Op/\/2f@9l!,0#BLgY-V#Pe\ea8} υ`rQEqr!`k٣A+>CELEChVAy.SunpaCcܳQl{ aOyR!3dw c9]rp]dU}M̈Hf9}嫡(e?:H-mC?eG=y*!MNvAQ,pʫz=v(TC*=K j^<v4.bi`B=Ve t_O6N9eǵN!j5N->KqYmR!c?r2c6BrrFP Wwv镍BpB&y δ.(#]uW%R f?f-.-j=9epx.?fR平pVKΘ!\8a \S,p)|Ϻ>sU>v%zj7O6t1@&~<4e\9Y~vFWq%|xS/f/P9/A@Nf)!Ֆf GYE-PAݒTW`$uN҈v˫Jnǔ@Q5bЯcb " p- pSd#ԸwPj~,}͠߹|4 t2)˰a*jE qR 22kD1 ( Ę0G QM!BCf*h JYZ8ιT 0?Zr K*:_n$Вn+Q{7VT>Ah 7xۮs\5B)+nϐuI1J=?_7֍ ,.LQ~N`R^n#kK^AGRaBL@oK[hoФe2/Y13HX"h܍NW<7XCnYP2ݘ2R6:.8`p{r -%>v^gl0Hs}}{w-gv:eW>ncB7-!zJ;j\lymrzuF\/1cq(xwi  ~$%gU-$XꗏqɕGΨ$;W|:Ჹ\ \O'E:Ӫ֙bs6k2xw:܇f:;'܌t0Dj\+Hٹ"/ RvkEAD6TPuNWIx2 ֓)1t,__7o&ܚG %Nqd9Ԅ7R bs(C@)b'G&0Iˤ6&G`L9DޡaK/1N,[onxqjk u\[PKy*di|#) 5{M pUkkɀi,8óU#`fW %4.\=\X0vssU^,HJvkV/MRIE1Id&\A_WTآyo魧@EQ;Le!% {/LaB!>WOE& A0(zv&7Y?S3c# oCT9}i<Adů$;RHFX4O02 Dq CyzwcY3C^e{.WՄ@Q'!H 6J k Z0::&({h]ͦ 5l+`N̼cN(rm91(F̓„H!mPYrrZ@ͩ a9:];‘0v Ϲ90T Eˬ@A2A,'!v ؆1J5~a1ץT:\D9T@:TQwv(ɖH a1c+XR h[T,-f($kE53JquMcqjZH>y͛Yk !D?HH幟[YaJLyݪ6]=-qIqoKBr g}clQS[ghǐKY\p_O?Oą QǪhK>#CJ2˞2#Z"Ub+>qJ'Bhq"P!uNP~<{:L$h\$:Q F^ad#:R g&0 7\Y!ynU958EsksIYM A$Nkqڗl )saAK:}G!\TfW_n;oIoSafy-FS 8Fs'k# 爐%mgnw[dm*2)-A*I|mErv@7"o{zv=],CPHcNqi0(EWEħ46m[uo9مvlD*>tV4(=7F4Rz3<Q)FçdwD? 
O!hT?|8A~q!6~DŽ&Xh19F-dg F6 P8x퇛.-^H0RJؑj8e:Ժ}XQyZJ7N3nLð㢐$1.s;nŲ/hgi4;P#:q /Ne槄rhO+L02_x3j2C5,VHerK*bZ=L"s*&ݩbֳMW $!#tLO: #vz25&I&>rbpױY}çb}uG pcu022&("90fFBHpS'm$K (QJdB ]uʁ+yRN_V$@ F-:X]]ҙX f BYV(/ 1-잣DR!pn穎ЇǐRu@v-4>⩢*LЈ@$RCAhWQ/W!NhH.d%ƵH5 ¾Z=مG?UCqԓu4 (9:=Bݟ\dY_Ow\"xp\Eg`Yۃ9i;> ŖeBc ~["vj؈ K|_tL7n[4n|8-}Th^#kڻ{}=5MmLD۳-^CDr$3`$=Q@eLqƼx'u{]3"∉oĬíhy(hw"ZsP۽6?m:~͗5NưTbHH /;Rz9LO2۔}`FD _.匳|wuBckL]i+1z8Y%ڡȉN4X ֺ8 T`-yA $ӵq$_C$HyI'H8(if%Dpi4ͩ1XgiHgfn"bm)0әul}f/o`E=tf\yg$Af<ΦvoN-?!d /|zGA>cلKȱzIK I/˛a_mr(E{/_l |nc6[5wάz5?bqt>lڀ'ﯦ/+8dM? y>;`4s.6CrxS*)+sE3*2 !O}dUIfٳSYfD3PX*I,ysKlB*NĞ ޺>dng٭]-A{s*p$~T+EA4F PR80o G\MүߗgUbBGΧMp$% Lfʭ~u#ڷl|UKyl9}wLV ko3K{Dտ/e E7t ndPG Ra(ILX,#I υa.h quy1y?{NgW+KMÆ.&jy@.w؉[f/UgWh3<?HB]ީJ74B nGR6omXMqc{`:u?3p~gwvqF1@c992:Εz'߼ ařuyX90H `s*0 J+UNi%9 Zpe5u]>w%_Kjr5WM|3ӽ_u8^Z ۳«'b3?Ζû ^y0ki(t JZcΥ$00`C\1szA)1N~:~= g-gauovFO`\{y7V-O +M] vh8sIŠmEЇN9P~Һ\;1 *q. Cnh,QckB5m+n#0z{Q?I-^n$.W"WZvl %ٖ;Q"qZ[7Cp!Hsӎ a#9i5(990WEv5}v,f;*2ZpRZIdf`Fyܸ:Z8D)bqiF[JjoeTAkQ*vIE+1PN{HIu, - $v[M0cfv?dj\7R.z{aWgȷNf L2*6* ަSMZ\5/Ё>3lSQrrdHG'[~Dǿy&'"X&i`@~ "Z?a(%0J5}v/Lڧk0iwIkyy r|yB2<0rLOG._lQyyfy0=g҅.*Tbte) neb (#tMf" !TMrVFHIdFP"bWU : Y0WP´Yg;k5aG)SՊ*QW06E1>m2JMqӟ6,6[!D,)HHEZG!LhD߄M}.-8T K QJQsis?e6|xWGM|RQj=JMGYܣT4099u`롌##3RFI`ҚP31  t-mU+dU+)}H[<S@Gk&7v4t/63kH}vj1hj$ܝpwݸ{E%M~jzT',&qU=仉|7j{yaqŧ5|]Wi@+:;?&~qzI7!LL, )pkАLGQ5 Ś5̉+-h\CJ}Xc&ynTo i! (w}^+t׍Dp%'HPY`N0?fhe!gClFÃu?%rHU?:*f+tEܸ1ޗ X'ƛLv{C#SȣP,2` `֊XiSCeniŌJ%J_xxݬnNB/e<5@5f{2ʬ9Fӱ1Eag],%*0׷G!\k"ӆ/ S tD*;QA1*(X5 Ks W e|5.o>聗J sJxwgDF, BbT8BbN 1VmwA=eZFxE 9Ey铯x@72]n)}.~95-+a9%.w Z~ x4h3'Gjv}턳u ~Y5!HozvV?.[x۲4 e'Ρ DbĊ3ALcQa+e̠eZmIk#tJp)%-ۗ Veђru3o&Ef9S 1]Yo>Bz_,-B^2嚘Wv)^/.{A'+"L C bG5&XSWĦwAeʽJ$cwת^iKp/Domsq 7K~E n*A9ܳ"f؆^BSUo-%azN) X ŶP e3ۇtȶzU]9Zx`_̢4 #Њa-ʅ?" |}8jQŸL,(ZgzCل?w'_qWCW|faR;)5?6Iz#ɎU FKSqPvB ci 5Jk 1".C\)^VoǟA76z8dq:/Ś+Qǰ}c1!&r &:ccF`QT!ɵ\G]{`R Ŷm[3uiؘ- gk`%o۶g`Sf+s1n@1U TFjK6=G_9?0yP KX8te;si)ܖMϕB{o 4өqExXX ,8./ܪݦG=jjSIԩMA)'hm?#r˶qimϻM[l]mڭY:=d٤6jsw:h/!A{eR5eG"On| h{O?ӎ/ڔOnQV-j^;K88JVq!ya"BYZb#H9m"#ceÄU2gO\ϕ^Lꌉ7;v`;~wP}Qxv=ێ0ݡ@ |z^&%p [Ì'?sл?`݋¤yM.B F#Y vaOS}#N{xn/j{(`OPVs+읏?h_x9hv=h@Mn;GK Rͻ_^[FK֊LK˖E-ŭm [BVZڴDI?|U%`؍[O1>;V-ByFIEHA XUvYjx:ōW3~}77H4.N t'] T{Ia^#;_6.|O7~>i?FLq=v Fsm&7tzzדѭy _S2}sFn'=E` dw|lw6?f d7}:hIM*`!O#h/ʔ~_\aqy~[ {ٱs$lRAdJP[la7?iL埘H_A;p"9IӿIA;g?ƫKr퇐5;Pӓ\wv]':{4L荆 |,NF)Z0ȵzۀnHhগ7hwSP`-%te wAWq;vt\.' Cou{oHddG:msxSt'@3w 쇑g3E߮_[kw>> b}3ѷ}/M*#!$]ɮzi \NuRQbN֪~N맷m~je6>)>#E4އ.Gq*=LM׃`̷ɖ~~5.埀줬'%rS{/RwKKޣ:LBWreM'ՀKK(pEiq6SR?.2cd`|%Qs2gOܼh<'Qfbaf?e H  ekI#ELZr50Ԓ/~e3zr@S=5>SWL$\(3x?]u~T6 қ?zoBW׵(#e)/&TrU IF[}N,AbVta?Lg;#X<'P-%i&н ^Тn1aMLXDŽ5DŽ%`3C`RZh$JJdJC9㼿DʄYelDl3YgM{5jgǠo1L [Pi۵m+4*E1oof1 h;0aM1hqÈrRzP. $E*.mU+dU+}[d_k_k K;Ml,tE60NB(t+)P̥ E}T}T|>daO"DE3E3j ˔7҃_AȘ+?ѻSC=GnV T@ ·o>1btmp?x\F_Mk[hi;bMiɖ.fbm /\ 2X/c!p89 »\884 c"~4t35GG*ǻ?ur,cv{Rߩem4:VQ{ӫj:QYEn\Mbr{Ks{yiW4'%]' hR=geRnuk]|o fK>.}Yh/y4wZ3lرV8d.@$|zD:#Q@ph=ŕ/ne$eO_2ew~ DR/>;Y^uѸfW:&}qWЅw]mJiiR_0ñ&0<D6ӡYCD0!PBJbƘ2,&~A"-:J,3&Eم 8?w, qJ5hı`_kD:9(iaYh)pcA)4 1gK\ 2cP>ЯJ6ڝ7[_)nwmmYzRŀ[,6$C%rHY')yn<S߹ԩSXdREJօ}C؎,p=ri؀)X_&Ð o| %; *ZI! 
&P+tjS1x$2䰥NZ% CeCI9j hJ+6 kmBHlL"EQ2!`[d+%^*0(kGGABZaߴߐJXSdR6fݺPdCUҦSdUzn(Ag01b [f>^wp ##oIɴ-njjdt-_t[^lPN'e`Ir=9I EfYVlXMDpamYeNSϨuT)w2'pzQ)|j5CGg$i{?דּ+ؒԂdx?L8Rz$&MbY[y?/ʘ޽[6yI6^ l ExƚANϠh:pZe>y߀4z'߽D^#~R=x3?r~h0o|Gr,:Y07 r ^Uf3ф\Pه ;ĚU(b_y-)RFI\j7/^F 8skRxQ\K/?-īb_[aP’XҮ6}NJZ|oXuB7u[r_7};}au1( Z++i48$8$2$D<{x݊/]f vEeh(SHK;y$<6>&ymZ5G*hRX|Q#YTU䅊DoNSQ+{aM;y Cx:X<YkV*4wvxq؅@ 5v( $!Άbe@Cԡ`-4ƷN5".rgx=2@LQGm8 wNfA)+- , =%iwڬd&jC yY Qb8s 2-g8JHY 츴u )$d!@p QJduKV J)`RJmS5F&y,AR*E;F7}UP#f GIᴾdHH jLh0.K!3EN֍n:1w ω\ke;luut]2T9b/1WKAŎ}mJ ZG`Q1C*0H("ymQ{wח28KBxsQ <͎jiA&,rZM7?֡Qp4񡶿>VM󊝼FTQmiZ*f"|}K X\mYѩwOiU~12L·~ѓ-aKJaXR-9}c=`I|apIɕӯe)g;_􌴁NWshEи6&]iKU&L:S/ڑN؟QB莡Kwsől(W y!f@%zEh_u1z'pbpuR gǻ"oI9ȥ!b6o$d]W] :ΗX s5.Xa|g~17"^'_-qUcaFaw2,Cj=Xp8匁 3zQǨZW$X4T5nȘCƔ,g[ă*߻lWl-j+KUM{T|#GafmjW_zbgbjjִUTf+dж e„4_}C;bzG6wmEV&KUg__:3rN/Og;.Swh ?LP]1Z?Viʇ7ԀRčou:0⼛`ˮ~ A^w%Ep/8SkwP[l(bksD#GޢWhUV<'?iŸR%oAc36o[@3N(!)l[ˈZ~>&7ڷP1عm\JObydp7ܕ7㡐Đ tݲvIΨD ja6&C–yoTdlϸOy{twr"b!6eGm[es0r_XBgյȤH4Jjռa'C'a]fc֤dǤ'$T_p*.=^j#`6QcDA8 Ҳ, L0Z[B"%n8>UdS1=[ufWI2#0n)XM$! aRojWz 2AhH7VINِ +<u`%-8b@8řYfr܁SpZll~1k bp^t/0=xņ٘+,Μac;D`bc] ,1ƳpbհYӟWBP9"08lܴ qk[]xGdup-՜!C2:巟1 \I]~9cʏNk8Ustgyt<;Z=NG;x"4^-qoxCqۍKN=I^Fm<;6yKe^]Io~ԭ^<;+wNTu^~;^_|IB[_{YPqs g<"CD;Myf$m=zۢow$hTyro@$f":Aβnl]$aӥcтv"16jjI":YYԽw @|k%Rs̙^t8zRoѿi8JO^ZRinKٟɝKYM`HE`5 Y${Y.z}?7'Sc$TQ\&TdbB󮐟Y[ }y^_&t9ŃxWro}1w+I޻Ż4Ș^0 "#A[Xf>|f <2RH2UfY*O^V`%hj, T+jHFl0\e IS Jd b '' rQZ)I@#c@ kĸD2,r㈴GA[ҐR4PV3z7P^]pf\ 3``EyJB,a{@ &)ܖikLCRDfܕ0><5?Y⨑|a*Cֻ'B77O<8hXɊI("4Do3fRjIQs;Bɍ< D:%]}_Sv࿂?_GŜt㘟 [B]vk')#k8aj y!gȼ! ߋ ,ޒW]#d "'/yR(euMO(VE;ueO1Ta1.lͧ /" %/vFH؟_Pk- oϯJ'?ePU3 -䆁Hxੑ@Zٹ#;QBDVb77%HQps0zuĞݰJgk͚LG_,udK[t}cdFQq1(3[ X%7fܭ};2E"hSuB }5!9+{p1g5R7>LmX7 B}tpV=",? ] sb$2`Nq֣f~7@|({5>#x` Z 6L0c؊ɪKEKxGy`GUf|> =|{P'JP*L4DW_ݞvdV3=x)‰=w̘86J+n4se|̨>/Atn@>u%09`Ed(O{YL^ A~~Ih|A4Tz(=(wlx8u( pws19gfL{*9YYD 81Tagu7'i<@%==[` )$$>[{ H2tG}4Jus6ov^TgI;#u;at(! ]" tx1bS˞Iq%fwLgR9Va'ec@ QZm΁[Ae9`HVVm#-)04G/ h#c,4s .&S$k:SџQ@r&`qDF o.L%hr lEN9,˕?>7 ܁rÌ"Zu#eDEN@Ph8'9M3dМaQ܀f i4` AhKIej&߄^"=} +-v.pBXʇw9˰aJ"ASR"VR-A.4JF*j *d?Κ,/Y[{\ar-]"ր.IP2hL֬26i*1%-dFjт7mCJ0 l,(8Md漱P6&'Ryfa[Ύ6t "˲{qq>-/Ǒ]G5-n҅/Mx8wNmy2vѻgl|ahEL007V-u89p9u$!(bvv́81:r/(j7ppi0f $p8 ;..ć|gXDbTҮE$_ bnU$N\)-}c[iŃ)(jW()<@__`F[ 0@Żk+V\XfcijmTx 'к&Z֬ZVJsA9gt~20i-gv%9HFd:$ߧ!T=92J4o-0>1-unTA9yIXΉLJȜ8Ehn[av =R)# *].[f }[APIŸDЭF .áJ_Bg8?r=QTLGg,~~7Yxr7 knq{~_">]CwbG'oETF {;zxU4_..~13_B W+n\GDQQT=vy`yy]:;nT\$~4h '^s5b1rVu1#6j( 9(k1dȉL'dF+&sH[ x={ye=ZB [un]rT@+<#%qrϰvћxU/sOc܅'[G WSp`ȅP_tpx ! -{A:ăW˶'⸶exoG0 $Z0jf;^pD׋WZH[vˣ+-4RK$R AS ۟Tð͸cQnҫRmtklSᄹ/`ְ\S!1JKG 6aT "G v'J.ꮗT *T8j!NJ6mBBW?xtr*Ƈ*6zr.")/*4hTLf/.fYo?Z9X>s~7w,Eۻ[k ;굨\nERXR^Lbpw寂0_ GJ$ ၲX=w,~~̰.fEtȧs ;F鴷ҳb +I362%*VP !hĈNw4nG[O+lj>27Ru!!\D)$%MRI&e>_éV t~g/DR8:II/H:} JC'ppyZd]=Xn-kۄl[XV%~lwY?o HJNCD:ao[E01h_v Ar 528} urf *3}~/":2 P!Ы=v_$*DIuΑ<ЩD!̩쫬a!z`r]XH9`Ω{g{W5]Iѷ˧{gv2GѰfzTI 5a| c[XPpGU2_dcV:ܥ^N:Ξ]ip$E(h_ ,S<9:d% i!Q %E& /"[Oր 9NF=fJ`q/M)zXKuh ioS-g&#TuW[HtBdJ A"p 5 ߿|ϯ)?k{{ZOղ~au[I/z<)e0 R(?\Ǿt˪GnÛ|jzvп[fB h@.A-.o]pۥ#[XQ9xю >GWC[1E#ame'VL؛u9 ̾VB9—G:ᕅz:Gxmqގ╒-pRqp鴙drEKf257/")@oWT~(@D3|WoLqi=yf|r1{0ޛX_l-eIO>Ix.tCM4<_g+C!3R|͕xĘxĔxA M"%$Ч~(7"Ou֣90 `f/M5FjIM1p3H;N) jL)?+sHYc- x=f&%S3)p[@P|-!Qѱ>]y瀟 :ly6JE7s mZaD"O.Sxx1Q}ʢnw)OYO>.xbcFh?y`Kwąc*B} [- sep;5Z86\3q)s`s Y4BA43z)`FT1x ⎭&$78⃎ *pjGEb.DЎTÖL͙jc-41 J줽,ΒbUv<]CO"mH$&Lƶ-TB2#jCT S萃Q0ERfDC04Vg@Mt{d9[DzjYdu%[6>y`BO .rO?76b~/ds&K58yFqc*X׷S"E M023Dj8I2i3s L S1lu5eߊ8SE6"Ox"L@ss)Iy6a(u(YE] B:FlC]AP–Xr"c&5~]ݞ HaH{vCDs` `Q܎^T2I ΀pe;P b(RSeDJDsIA3PFaUywa1 `,kaTr{lt r_==rwbxݿ^-iR W_TC0~_!UbAk.Fx6_s7W/. 
Ĭ!]]D?ܬz{7 T*G/%Q.F_~|=wm=nX~ f<03ylL^60Xj; }vwbUKnçw5o]7w5M-@/4ڷZ@ZŕL_m_%v`٢._"1J]^o}t}u,f׽Dw8~olŵ.ڣI*܂NMvp.?r8 X޸ceύôK;AQ6=P݉Zyb=G;>Xl@$:\Y@'y̻puEHBbK&\]"|}I:RՎJZV`Ti /.m9ٚRNvi6bhן(-4nZep3aKnB`Ngw @m}w5d{\ )0jj!D<jL䒞9d9WqLG 2j NpFí7>3ޱ vJԎSc{2xJ,-ש?T"ԌYxAOX.Q{܆?yY4FofP-_䑒řάha;N9dPΐc-[ j^T >׺$g jQmSpC)Z)_0} bo.yEFY}%UTVZJ@GkC͆-iP󎗩К&j8 pN[#oKI;G,1Ь:'Cyyĭ|Ўw3h2o/NøtgҕSk1b3P_$nwuA8a,87A%m?ٟ-}&sJOqE3\眾0N$UX)F.֫]7-*]:Jx hCהlڀwl!{,hs>{Joq ȵh{1"(?R9=ICF{^IPI=,&(rPpj:"6:r”2d<&CLr/Sջ,^LU#D01bXOQHE>\jhQ9~JP -,IAIJ+!Vu!*H@2}9/ݗ}IROkAZئ6uNy'֥FYkkE3 No08j(0cbHv`ڵˊW9e˜絣h,c]; H0HFbcЃF4bYN1u JɥkgttA+O @Q,2kR,9N9c:Y1 N[G0ȵQ۸ M7&A|P^Eo{cU0ss jK ZD"Tp&e*Z{-SqK1ղL)#}Ա[1ҁ zZ\!G|=aPMB!]ϡ8n~\dj=2c. _c$x͚FerQ30AbQY 3*ԙDCF k+LoHCcg'ɄL(: Cw%!Rm((&W{wbl @;&%)C7̃&)]8pvJ-P%E"٥̶UI Tr5#dL37ܖ_ ޘ"_|JZ 4kӾN kȏְn@{ A㶂;{sf{]! 0iG3w(=?nnE"竇ӛw?'/^= (qӃ]}BSF%'bo'$HƣaynVfr!V:5s7ԅ?2H$+䨾7p&BSt|pV6 w>x"O#e\:ޤsrdpqE,9im@wdm!=8(ehDQy|=W.[5Z1zJ6˧8!S^9ftav1g4%t8F_Zft.tG`T5{0J;tWWp-ΠGLT ~2 +8Q_^s̚C7DzDu_6zm>u8*Vb ̨<(W㢔È:PP$KZ6{]a {ü5aF?r4~J +'/^:P}t 꾗R;+ar+j!r}?p9YEҶp0fbD~nZ gQ_iԈ!unWV5ū+%G\eV!h" j_E;5[Y6)gԒ '*&ٷc艘#ɢPb6W<#7 Ǭq9QJtO٣@An(8,!C:`F*- *tM.J_uaM+cc#RMf1LZ@znciJ2dPK..~ZJYʳH1rTyJTZ$7ha֒L~My`5vQWv'/z=yܝ14h$,ކJ|24^q:L}Wft%7a>9 >|tW"_ "(٤]3p~7ðn}X<|^j@9R}%5nVepoU9龳SAk|u6 wOm|=pxfKV?#QOSYwGZ$W9pxǘ!׼;f7ZC{ϭ 󖅲bC7xlwfûn>&O#d[dBvl=lIJŀmOv1:jrێQ8$; \k?e9!V7jYa`@`m^Yh׋%{ˏ0|:+"FuߺtĵWp@3H'~ז$F1uɝ~oˣ;-2 >χvsH{~y'lLT=cojf`2z\((;lK7w5 әUύZQnQ(}qMD&(YK8'QG(ѵ6 04@DSkp6rK6r:ͼfC9(fс$RU+#<kk儓5PT{;XUQ/ IVtۺp!X ?{FbV6YSg8s`.<$LDZb%nY7UDyqKꪯh5̕mΧ6.7R;7 84էGr?PO?k$D~ |TO-.6U~| xO@@ ?;Tn(Nq)4-^=tC.YjFѝ}=tw~^Sj8K LK=@jIJHxA2a]EvQɩ *=̡ЂJK76Y{.?hACA2Z'Z7oDԠYQ])gk7S*מ_)K xwmX| X9Jtt\8U+S.$ gM-ȺH޼@!ڔr((a>0cF*dS季Nh ]`#rJ=So[(٠cQ 7cP}d<2w--v۞Uғ:2 g҆-+Z{n Z: o4tUvJģ6x5ҪTQ!B:[KNԱc*pr_2cO)6[b۱Fi`f>Xr?tȒoɺ,P1Tɂ|K p}wyf}w9`WraDRI1JBkrV.R_0m`F6- 7%t5te.:Rue$uΨJRhճU@h[>ll f~Qe`SuUR(#:+FI$5yI 9,$ TT䬕z(Źb808wD[z/U. 
OAy@(D:]mERj#v 2.ؤ[8z>~\=-GIΦbУlWwwUZYEvW{^,sh4?y\+\.^k>O@-7@Լ6 ʙo,0F{VJBƱsdRIQLmEЩHX,x s2=YcMsʠN,لbxJ ɧpO]O[_'vaj}m&tw Qey4 '^0O" C\/]`U(lh@ŅC A/4=\@F gŴi̜4+8[fQ +#g hFL-U ]GWĦopg@ y~./<>rrsḪ{\U0'En-\Vk: Cٍ6ljޢpg}!ڧ8[}JԦ;^a2ݱ%?gs  嘑Щ4ImƸk*.0'k=|aKpFgda8!<5A gʊ  Z8hҚ-A9'>]x uD[nQ9N([#5!2hSuN^Va*[t{?O%ڂJD=x{7vr4 l@g_p\_<mfhj ;mbNJ~Hśя!~ӽuw!+ PR,l4_ 2c?2a\{p9_xd4jZ5O)/n3fc+yhqcЃge5=ɝ\,c9}':?Dn[EU rېYmH+1nntnF[+0k2g{U j ׇn|w6F7yzjGk=b>[4 $kbJ0NK[L&w:u;AܷSL̡gB}1Xvtm} G7v8= t@IKP(I^Sן`ka/Wdv<3>|j3TpF$@7 a HA"@0sn"Zg&|f/㦉Gͤ)yɮ4J<.hbE56.@?j]W'chh pgRy1HnݦMV%UKeB}cE*>"Ǐưƍ|&VØHcNL'F[Tٖ%M7aBMϾ @u1]nXYa8r@at@&9&9,9}|#ͱ𡤎fẑ::HL3!"Wg)d'ЬZ/q9cKeCn^byy΍h\귝xSI&5[N%'̉?N^j|0R?jAYlNY5sQV\UsևDzj5}'Ynx٢@Ba~"e.鏋Y8]v;~&Lߥ;tM]ŏ*(#so=e*h **?sMeJ )Te+#,?;ۇђ2BFՐV##e* E:Z =JUs5DGcBTu9uQ?G{/xϡTt!;㜭AڳGՔ}iOAwںWiڝhOFae UfNEy%t[NT, t\WWW/3QzF08D\)yE9vpw) Q-~vQD{[D؅6z\m/(xU.ꙝo|7^[/C?;(OU=|jHc>@ ]~->޷^- $hWS 5HeRL sFeGui.눴tK(Y,YЮiKh?-CM{:}XB)6Q%4qY?MذZ/9 6M7&Tjkf񅉮[1uBc` 5Wo%(CsnmkIĴ1~\=MfQj V1: />]n}%jVn0yJfuS;3RG#Ǝj3q+H$E'sC$c#J]/G̩R2RjTA 5R`LEt,|0pI@-Skj>e0{{tq#ʎ=@[@`X &EeP`$ &3kS6Zs-5"5B^7{)>'P/@R0En}ZFxAX,^ɔ.呈iZPkx?4@) g4rD)Jp+ɬK% Hu*1]Iši6"h\(D D0qA;4Y+T@*o; _%4L*% P'sE4?g4CDLRRj<;Z`zH+iarG7BK H HR$BHNiUHTiYJrR&X)=19O*2 TqVE)J'3&e6燉|u7jO(=WMJ,.){ӳv'CyO3@S}Wcwiވr=+Fy}q۽`&^P\z/@3*u%4LYv)s*˿x0gtX;tFa8 ޗRC"UgH?O8}R{3X@g?LI%;V03C~õ&a [@a;lkZUe+3e+Z?W7h~-DdZ^m{Զ6(<NBs sѐĢsRxv_\xI+T( mopK#zR[!RK*jNR{u6ZρX.|=eIJc&j ]QVCw1&&/ο1|e$LoY_1ex<x\d{>c!yssU*>ghdNE$yӒt,sx:r߇_/{SrryM,䍛M>fn֣wt|N bFwkB޸zލ)wt|}N9wkB޸z[MA}wsDQy7wKֆq)rpiZj.iGH@uAS<[ ڦjm܋\bF9T+9 8Ց ?8:S)N8 m S.vUdF!Ct4Fc~ N^ 6SiA-Vq^>xuxf駪Wc/.jkACIпzd:q} $]^jlj%>EF/umP;VFZsE'nRk;S-}\lwHh!)H)SNtԽ])ޛ(h~`,}gfQX}nn%O>R2KYNKjL%He { $geLr{6Q1U1v_̹DoEXIg4IL5JrF#>TOR1ڮܷ"_G }Zia3Wa \u{-FRm"$ b Zŋ>;aZHi8Cu}k|2ӑ5X$P*G #X9RT y&NG[~5] c%Q|_Z1C.vy5-m}cFW/Q"TME7z"^ ˝lvBDҦ辅5ԯܚC:mlM헺.BH7GeHt@s' ^Y[/Uf γݹŧuFFjy`UV%a*$^2L hrH5_fߖBZȠZ;gra8Za,oѿ^ܪ<7YQ>xIc5kfˆ6w_UFX]2b魔F8)Ln% AZ/5~v`}{+چ󴛵 5V[7?n:[Z;hTv==hcqsj\-պ2"1wg^ :sMK7w8}|;Xy[>噊im #r8# R዁Q_!Q?iTyWOϯ zWMrwCC֩;\u Mk NNq?4N1!%n:{wNeՔ7Ϊ";jJ/]b: \9g7@j`,(sH% BINd+Ls^ Tg*nM{ÞhpuBXLO5sp]ᴁ'ᇱeL?zƕ 踵)Q KB%%i&,KBONIR*TMx&0 ii\ ,MI\(p0U(L+^d #ruIXRvmsQ^d>O|SI^Ȓ*UzQ&`,S4,(2QDcd}\rW&y3V)TԬ8P)I3HPs%DH^TKJAh*m{H=8^z2E^`o,!Ogr/ny Xٮ w?Vb!IKrYu]O~*C?] 3 £_gOzs?/ d<O3@S$ͿYlsLJT_o%6tEU=tW=3LA ߓyvMW(&C1C&>#5o>o6L't>6Ix*tL8Y&TUg2IZiB޸ؔj8sU }wqu :oXz6,䍛hsnl5B6jDdWZiN4`3hgsV`%=9Ǥc0YQb+BG)5Zrš.&XY}=Y]/l藒Vojc\aV3^mЖBʷl.ܫ+n[Y#*&5Ngwr'<.mP?Rw}h Տ5@Wn1׏1ʅ>k;-??-x̙P`}t vf,9؀ TA7D:GK6ٺh® }'7bBXX:Z7? qDY"Ψ5 ^TPr)*XbɍVeICLuZ-Xܴ,C´h]2 l0F|)^^߄ΰ19J!68etTQ >.RJ$PH{8&@ŢY^|?=g^e#/`! ')Ɵl"edƛ-B)_ =$0F{ 9ͭɱ\!ѫ(+ޅ16pBۖ2D n{Rg.[gW,VVpIP31n5$,~eeWy~eGkȄ@NO ut#:8GK;v+CqE0]*ckqxነ_ّ$WvdABGGL}$d4ԐKkZIkY}5:A"&AΑm~P .96w %mwoymTQ^K@8d/* j9Qj!FfGrw`>"bi>ƈ-讚Ǝ1B T@ƈ4(+S!LDLj萎0iRmb'j}*8QRiQS8mO-΂5t]SWD?Nť~BJIL*l-;bFsџeTs 6i}4Dh@*a5 -F[TZeճ%и:j8=C&U3@E\D;bXfA;G8u}9 ,Ag$sZQ؋ň"jrqLy&p8偉O$UHO3k^,A{<#P&2OGP|b#xVB! 
M[-n愾=ҊL:wX(a3Wb7  AoSr߈c8n \W?Y-hlJEZJJnwԧyoț 24@X5$8Qnh ,tX.U  ǶnS" 9̹pNXI\8ER8-n  ͑F71iPk_i)<Èa![K}h(_bQY bXVc~|wd b:qiӗ 1XAk=MY(-ff8Hi8]teӕNWᴼoG%y =9RYҒ\S09*yy*|>GQ4h5}4Wz RƮizTϼz`~Sx~3ol%l-u]~2F,յ axnH_?aqO>/V?,Rg@`)$KJeJ4,$8SD[BNT3]XQ?G-sݙ^uh)$%!R'@E2S*MXR= #iC9F <@%= NjvVfML7_ J(|T|`NI*ڜeW@eW 0"r1 "ɱ 8#g ` dX!K "Md*BiA'8Sˋd3% +0kUQ5]bUmM3g7K4K%Ks`#^ >$#|~NkZG֊(BA(` zOx!1AC*:k}|򁀫c}y򿬼{fFVmчq݆!d#}_$z#&$F" aWa-u@TvNa{~@E&Z*!m@#ٗ6-Ư^>HhEm%oն.助1ъZT)Jl_GK"}ɯrjdg ^P\=.֖.]RX]=Wq47sk[Vz41F(1אP0%tӖH`Hu@(UC-UH᜷tBt kXaaWo/9I) Gt H$.ҜXJH(ʕ,Lj f޵5q#֩Ph\SlΞc>TsHBRNR2$sPehB*kr #P3H ,^Fp,nU|j0/mr5];ݻZֱ%%{qUaduTOaBx+.P*MBAL2iEG&RcPFְh%ejŢ7@hMErsi:n\<=H1ü7PKJ[ϯ7ϟ_h'&"3BO&&ct0O-.89APLF;_b&Oww _=ww4QYu1 oes,lg\^ ~|-ױ), -.}w[,5z';rIwKw.d2+ƏO7|R ΗL>d?wM?+/ɽ\bO=u>f ?}JN{:ݎ?=KQ_B-zQW@O8oƿsК1p$n?’g75aa-&3gbs 皀F pH@BatZ%:yن4=*NPrS[SOJ (QfJJqʑJr|qLJqD.d%'K l2(*»ieo֦ b8.MHZپQ [eѹǿ)[7;fE&l\"Y;u,(89cLGqr(m]C@P['rFʍ)7nG A0 Su(d̕ ZrpB2 5TY$d֟|@J[>j_?^T5Q8*$o%mRT2mD$)2.S:Ewg'qC "CRp$ZN#]i"lT eQAJ7cM/wJ] $ wJP\J}Ags\51e?ӧٻ 'ѓC'OQy&!{%c?]PxsP 2zh"+a;n5t6!?bZg;~@ьH!\ϽJOt1w5ʎ{ Utj Оj=VC9ϣĝ;Ϧ74K~%)\vSx8»bʤ)_.8{_,W6ONc2 7LݚQWatTLՄ+cɸ`mӧ3k "NqSB$LUŨ:ipJZYsg쥒wr@ )5x{/K{5޼9 Jナ+S:2udc 7 L{Ywh@[&VWVH _CRz CčݛC[ک4tIsOg7(Qmppry7.t}:n\kfq>욠,˨!)LC)0KPW]ʣ432v 11IQFFaaՕpa&2P(E!DFeY*$dJQ¸NS 9en QYIHKQmLD^e8 1JђqƢL1@ihRxXjL"$w"(Һ+kY ۃGguHiӂ/;m. b(E%vڅ^8nk>HD7`;+hM!$i<=Φ+p,|^|{|39ֶcK@Av - ݞΙg]Kxרn6k5tV2mcw;z 9\ };=Fэ'C2&+o<[ Zkn4Ev"W_#D1OZxzysmRJg,sJyjz~q#2U4UczWΟ\h_Wcn}V>6~nF>鯑4VhZ.]`IT^6nu{^:ٔlQ[EDAun ǫ˭m"&ѯh .Yi>!mmg觵!JSXR[͕7%IBCeސI.(ouFl@:DTWAZj(Ntߢ¹ܱTڤZ)$u:E]+u^&VKSRh˜qرӋvjj1@ jqm֗VDS_NRIRdeI'kHiusK -eh!7VQm3[,QGȄ C,2CyEh 4>mlf@BJwH/yGZT+K}~Kj"~[){ &CR/Tn.'Cdz}"E-q ^Ɵʂ^w$^u;2S +wIЖ-΀"-:j"ΨF Q}ˁ5\"EeM[y% h k )ڥu=hD](E<=MҚL~.}Z(ay#H,X^ sy!e@۟.o̦FŜlA4ˑ^A>sh(N t:K32Y'y8ݸtӍN7h VaFe)0aAD"m$Pq(X2;9W쯫;{%MpPEW%ɵ+ݨ2٧tQRkf Cðv)(}-kb#x (JSHC-Q4δbbL 1Ym ,n WB˂'V]@{ŇP*uRhYcE:ѼQB!M2HA5id"Ly ̑&&5Ҍ&tYE3bl&Z)c) L0a1Tr-剖ŰЊ2Q二=E!Il0b'ǥDs&vg׵`D[Ȼ 2ZL$:N}+5rVbPeK}) rc6cPMxV1YBٞ2$Vv=WTuj_\aZ h>\kOT8#(+Q. wk` ֍nR @K6# ͑p.ӛIrxcQEHSJmMZt*I(2[_I`p9['Gw < `RZ1qPFXͲ2h @󢽷4xwO+-I/=ܞhk}V/Eb:ZքNlWK]-FqqkHŸm)0iҤݬQK4N=(շ7:ˎҢ1W%%.X^9*U4p+Kv۝Uޑj&J= n!`GM ~c.hxc5Jl9zs{~-nw 6%5:QrqpRf?zUv\ɣr\}>oPb0@*H c혮^f>mC]#REJq*ꐽ^|h)Wʴ]>Hϛ5 w{=-E_ -c)y=&/w#F֧ ;FOktT Aa=\\+$ֶi_dldVpͅҜ#.@u.+Ȳ+>!779}M FmkDVopuHxܼꥠ܊^+[ Lߌ.򄐠Eao )8z+FH_j0-n>AfƨWuTGDPȧmgg5$7Ezo _+}ɺ{ҘVj9kј^vɜ夯C3m*(mǸs%@YL $PR i&͂w=ߣr}4r&fi]=VH?%Te ȂCG2mY!;ºbǛꦕV*d ؽJyb&i8>fp'lއ~cNzpXE69>yUѡ*>O9$G kdWEҰB=mE)$M{V= tI Tt&6==ͥi_qJ/vb5ZHU w*,*$e*f3$u{n\.ud3,U AFDh0L tgE҃l+A<]VЂ%ОO]+ՉeDn:[s]Ѓ7~ёiIֲ6Gy#z;q 8=:.cr7U"&TtNZc<nm;g7FF!qhC`nCh;`鲒7Br~dR-9>ލ>ϛ[PC+Ԡi; 7f`ZRE.yKۭm:.Ӆ՟ `DRF]6hW6Q,m9z eg_mVmpvة\#iB&OE&yY _J`A9Ϟș;?/낐.~(1$7~P'߸߫ kFc d TH2ӑ*D `i "/bsByl+Pٻ6eW{z8 8K8KP%);N~ېz83\bɆ-ꯪkNZ0sP]^׉XYuaX_{_ps`Q>Ϥ?nQȯP/H`2|ٷa2 dZI^/ BPm,p[,0o}Q3CP"3?j8q'GҁID >Tbs+Uf<Ɇ~-fniGW=I7пM>{ӻPpzƝ-0$nKNj񏾷 MFJU%ɎȬ_ք^[u0zwA3ØVAXNNp?JlwTm/WuOfԫWVV~Rx)i]WcgC8A+˫s $GK I/4)LLYeϕ;cyTJ0`8 %OiiCxu*Y:z˞ (N<0F!W}y׹Ѐ!~xljWv(D6T &c..ˮSi12-=78SmL` 0#y3UYm|u+\1$jBh)5M52[l)XɌ  Xn$H IbsZCh`r9C@"Xlo`s3<3pd PCQ23LQFMoIAG;i]P%MH/P):#:J2.=v"JpqX0Cfی*n״PqÖۘD|poc@e}1q1[=(<0ɛx3G? t{ "he~z?梗 ǓG# >oo;#3~=|Nҿ8x>XF[䄄$˽c=lwm.`,i1ѪXD/Ih@As~$p bxKp77Zn&lNoתd0Y x p (ӌpѭSRPAh@u6y5@cvFcg@|jI_0%TKXz+I2Ю̔+.x$Z,Ҙd!罶BUURnZ`)y/N2*29^(%ƃA9`~ca!) v 8(EUд Y-i VSK3+ʢ @E x @6|uh&"M] 2!Y!_@d*C`Hf eH`` S :h3y҄9l׈itw΢2Dؗjt(#=-Z 8a"i-|% 5협]&  }#5T$f+R_(Xi)?$Q~E]yBa:"Ve^^["B1zjtN" BI 0a@MeS~x  a@`BvNs]0nhQkOS%[A7:`s?WƄuկW]>kGQF ( 8fΓ 6c;OuF1`!0Txl0PDJ],xb^x<%+eL+$asN2AϐW5A1rFfu"(p TႨb ޷nՉ-] &Pd}Ib}]lb&X8 JCB};Nb=Rio] F qI߹j3vL].)PLUh\wHkHv)`E,o2Au>Aה\̤1@j4VOLeceɔ&&yq퉩; Č1's? 
10941ms (13:04:15.317)
Jan 30 13:04:15 crc kubenswrapper[5039]: Trace[273093802]: [10.94172971s] [10.94172971s] END
Jan 30 13:04:15 crc kubenswrapper[5039]: I0130 13:04:15.318260 5039 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 30 13:04:15 crc kubenswrapper[5039]: E0130 13:04:15.319417 5039 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Jan 30 13:04:15 crc kubenswrapper[5039]: I0130 13:04:15.320553 5039 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 30 13:04:15 crc kubenswrapper[5039]: I0130 13:04:15.995537 5039 apiserver.go:52] "Watching apiserver"
Jan 30 13:04:15 crc kubenswrapper[5039]: I0130 13:04:15.998359 5039 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 30 13:04:15 crc kubenswrapper[5039]: I0130 13:04:15.998609 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"]
Jan 30 13:04:15 crc kubenswrapper[5039]: I0130 13:04:15.998958 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 13:04:15 crc kubenswrapper[5039]: E0130 13:04:15.999025 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 13:04:15 crc kubenswrapper[5039]: I0130 13:04:15.999224 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 30 13:04:15 crc kubenswrapper[5039]: I0130 13:04:15.999232 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 13:04:15 crc kubenswrapper[5039]: E0130 13:04:15.999655 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 13:04:15 crc kubenswrapper[5039]: I0130 13:04:15.999399 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 30 13:04:15 crc kubenswrapper[5039]: I0130 13:04:15.999362 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:15.999421 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 13:04:16 crc kubenswrapper[5039]: E0130 13:04:15.999959 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.004264 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.004442 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.004551 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.004472 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.005680 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.005684 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.005812 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.007230 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.010669 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.012635 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.018744 5039 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.021117 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.021677 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.022188 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.022959 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.023244 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.023374 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.023493 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.023631 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.023751 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.023842 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.023933 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.024075 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName:
\"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.024215 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.024453 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.024584 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.021634 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.022128 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.023614 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.024771 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.024979 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.025069 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.025295 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.025412 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.025624 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.025692 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.025782 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.026031 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.026170 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.026294 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.026419 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.026535 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.026653 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.026105 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.026111 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.026285 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.026718 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.026986 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.027136 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.027260 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.027380 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.027495 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.027648 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.027753 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:04:16 crc 
kubenswrapper[5039]: I0130 13:04:16.027851 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.027953 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.028105 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.028548 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.028674 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.028793 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.028899 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.028994 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.029311 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.029482 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: 
\"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.029605 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.029732 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.029856 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.029968 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.030124 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.030266 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.030958 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031042 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031079 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031106 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031129 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031154 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031181 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031203 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031228 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031255 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031295 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031320 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031341 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031362 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: 
\"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031383 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031410 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031442 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031472 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031501 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031528 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031550 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031571 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031592 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031613 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031636 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031657 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031686 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031744 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031767 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031792 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031814 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031836 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031856 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031879 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031901 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031921 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031944 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031969 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031990 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032041 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032063 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032086 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032109 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032130 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032151 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032173 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032195 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032216 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032238 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032259 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032280 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032301 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032321 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032342 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032362 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032383 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032405 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032430 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032455 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032477 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032499 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032521 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032543 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032565 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032587 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032610 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032632 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032656 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032678 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032699 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032721 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032744 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032765 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032883 5039 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032911 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032939 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032962 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032985 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033025 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033050 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033075 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033097 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033121 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: 
\"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033142 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033165 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033188 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033210 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033232 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033254 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033276 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033328 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033361 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033393 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033431 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033472 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033504 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033538 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033569 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033603 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033635 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033666 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033698 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033728 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033750 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033774 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033800 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033823 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033847 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033871 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033905 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033928 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033949 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033983 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034041 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034077 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034061 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 18:21:50.770002434 +0000 UTC
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034105 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034130 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034153 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034175 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034199 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034224 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034248 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034274 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034298 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034322 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034345 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034370 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034394 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034420 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034444 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034468 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034492 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034515 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034538 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034569 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034603 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034630 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034653 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034678 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034703 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034727 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034751 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034774 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034797 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034822 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034847 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034875 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034899 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034924 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034947 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.035000 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.035079 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.035107 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.035138 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.035188 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.035213 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.035250 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.035284 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.035318 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.035345 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.035381 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.035414 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.035441 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.035466 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.035527 5039 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.035543 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.035558 5039 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.035573 5039 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.035589 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.035602 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.035616 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.035631 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.035644 5039 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.035657 5039 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.036885 5039 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.036927 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.036950 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.036965 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.037078 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.037165 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.037299 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.037507 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.037873 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.038182 5039 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.038840 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.039104 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.039112 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.039322 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.041393 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.041437 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.041760 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.041897 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.042022 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.042091 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.043604 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.043722 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.043982 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.044189 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.044314 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: E0130 13:04:16.044405 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:04:16.544383722 +0000 UTC m=+21.205064949 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.044738 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.044999 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.045223 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.046113 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.046244 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.046412 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.046773 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.046954 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.046968 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.049126 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 30 13:04:16 crc kubenswrapper[5039]: E0130 13:04:16.049369 5039 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 30 13:04:16 crc kubenswrapper[5039]: E0130 13:04:16.049462 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 13:04:16.549438466 +0000 UTC m=+21.210119773 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.049715 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.050230 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.050534 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.050554 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.050871 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.051336 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.053093 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.055297 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.055563 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.058141 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.058427 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.059409 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.047677 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.059649 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.059736 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.059903 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.059921 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.059861 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.060128 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.060368 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.060575 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.061734 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.062075 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.062122 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.062163 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.062320 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.062572 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.062829 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.063055 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.063298 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.063460 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: E0130 13:04:16.063484 5039 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 30 13:04:16 crc kubenswrapper[5039]: E0130 13:04:16.063527 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 13:04:16.563515281 +0000 UTC m=+21.224196508 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.063664 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: E0130 13:04:16.064103 5039 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 30 13:04:16 crc kubenswrapper[5039]: E0130 13:04:16.064132 5039 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 30 13:04:16 crc kubenswrapper[5039]: E0130 13:04:16.064151 5039 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 13:04:16 crc kubenswrapper[5039]: E0130 13:04:16.064232 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 13:04:16.56419937 +0000 UTC m=+21.224880637 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.064576 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.064911 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.069601 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.069758 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.070149 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.070291 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.070328 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.070373 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.070562 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.070666 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.070684 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.070809 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.070984 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.071089 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.071239 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.071934 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.073789 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.073842 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.074109 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.074135 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.074119 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.074173 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.074524 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.074593 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.074932 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.074995 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.075130 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.075264 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.075462 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.075530 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"]
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.075651 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.075781 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.076027 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.076235 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.076400 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.076565 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.076584 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.076641 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.076746 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.076837 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.077025 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.077102 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.077369 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.077401 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.077414 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.077755 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.078162 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.078182 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.078353 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.078548 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.078536 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.078811 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.078825 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.079152 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.079209 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.079269 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.079365 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.079396 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.079629 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.080096 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.080287 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.080881 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.080911 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.080964 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.081142 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.081214 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.081374 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.082446 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.082447 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.082468 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.082694 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.082854 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.082882 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.083186 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.083208 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.083376 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.083383 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.084170 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.085111 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.085205 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.085482 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.085719 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: E0130 13:04:16.085737 5039 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 13:04:16 crc kubenswrapper[5039]: E0130 13:04:16.085761 5039 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 13:04:16 crc kubenswrapper[5039]: E0130 13:04:16.085773 5039 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:04:16 crc kubenswrapper[5039]: E0130 13:04:16.085836 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 13:04:16.585819196 +0000 UTC m=+21.246500423 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.085837 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.087409 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.087599 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.087675 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.088387 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.088438 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.088549 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.088977 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.089034 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.088989 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.089167 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.089196 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.089320 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.089404 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.089493 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.089698 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.088994 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.089879 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.090132 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.093502 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.093539 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.093633 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.093731 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.094210 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.094248 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.094351 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.096607 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.097408 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.097454 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.097508 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.098096 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.098327 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.099435 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.099871 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.100113 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
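The status_manager.go:875 failure above is not a volume problem: the kubelet's status patch is rejected because the API server must first consult the pod.network-node-identity.openshift.io admission webhook, and that webhook's endpoint at https://127.0.0.1:9743 is refusing TCP connections (its backing pod is itself still in ContainerCreating in this same log). A minimal connectivity probe, run on the node, reproduces the same failure mode; the address is taken from the record above:

// webhook_probe.go - minimal sketch of a TCP reachability check.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Address copied from the failed webhook call in the record above.
	conn, err := net.DialTimeout("tcp", "127.0.0.1:9743", 2*time.Second)
	if err != nil {
		fmt.Println("webhook endpoint down:", err) // matches: connect: connection refused
		return
	}
	conn.Close()
	fmt.Println("webhook endpoint is accepting TCP connections")
}

Once the webhook pod is recreated and listening, the status manager's next sync retries the patch, so these records are self-healing noise during node startup.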
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.100393 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.100472 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.100485 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.100957 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.101159 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.101683 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.106891 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.116041 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.117554 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.117970 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.118204 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.118602 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.120847 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
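Each "Failed to update status" record so far embeds the attempted JSON status patch as a doubly escaped string (quoted once when the patch was formatted into the error message, and once more by the logger). A rough sketch to recover readable JSON from one such record; the two unescaping rules and the record.txt input file are assumptions about this particular capture, so verify them against your own log before trusting the output:

// patch_pretty.go - rough helper for the records above, not a general parser.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

func main() {
	record, err := os.ReadFile("record.txt") // one full log record (hypothetical file name)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	s := string(record)
	// The patch sits between `failed to patch status \"` and `\" for pod`.
	start := strings.Index(s, `failed to patch status \"`)
	end := strings.Index(s, `\" for pod`)
	if start < 0 || end < 0 {
		fmt.Fprintln(os.Stderr, "no embedded patch found")
		return
	}
	patch := s[start+len(`failed to patch status \"`) : end]
	// Undo the two quoting layers (assumed): \\\" -> " and \\\\ -> \.
	patch = strings.NewReplacer(`\\\"`, `"`, `\\\\`, `\`).Replace(patch)
	var out bytes.Buffer
	if err := json.Indent(&out, []byte(patch), "", "  "); err != nil {
		fmt.Fprintln(os.Stderr, "unescape failed:", err)
		return
	}
	fmt.Println(out.String())
}

Pretty-printed, each patch turns out to be the same small strategic-merge update: flipping the Ready/ContainersReady conditions to False and recording ContainerStatusUnknown for containers that vanished when their pods were deleted.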
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.120862 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.121623 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.125307 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.126080 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.126883 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.127601 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.128262 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.129256 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.129914 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.130776 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.131345 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.133561 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.134763 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.135767 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.136403 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.136811 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.137769 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.137903 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138051 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138169 5039 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138188 5039 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138200 5039 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138211 5039 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138222 5039 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138234 5039 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138247 5039 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138259 5039 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138271 5039 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138282 5039 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138293 5039 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138303 5039 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138314 5039 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138376 5039 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138388 5039 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138398 5039 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138442 5039 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138454 5039 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138467 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138479 5039 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138492 5039 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138503 5039 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138516 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138527 5039 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138539 5039 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138549 5039 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138560 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138571 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138582 5039 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138593 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138605 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138617 5039 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138627 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138639 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138651 5039 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138664 5039 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138675 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138686 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138696 5039 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138707 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138717 5039 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138728 5039 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138740 5039 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138788 5039 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138800 5039 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138811 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138836 5039 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138849 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138991 5039 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139051 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139063 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139075 5039 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139086 5039 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139097 5039 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139108 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139119 5039 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139129 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139139 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139153 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139136 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139283 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139163 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139436 5039 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139450 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139460 5039 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139469 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139478 5039 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139488 5039 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139498 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139506 5039 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139515 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139523 5039 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139531 5039 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139539 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139550 5039 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139559 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139567 5039 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139575 5039 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139583 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139591 5039 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139599 5039 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139607 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139620 5039 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139646 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139668 5039 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139699 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139714 5039 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139726 5039 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139738 5039 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139750 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139786 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139798 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139809 5039 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139820 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139831 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139843 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139872 5039 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139883 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139894 5039 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139905 5039 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139917 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139946 5039 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139957 5039 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139967 5039 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139978 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139990 5039 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.140001 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.140038 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.140050 5039 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.140060 5039 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.140070 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.140080 5039 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.140108 5039 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.140120 5039 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.140131 5039 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.140144 5039 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.140157 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.140186 5039 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.140195 5039 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.140206 5039 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.140217 5039 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.140242 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.140271 5039 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.140281 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.140385 5039 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.140399 5039 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.140785 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes"
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141401 5039 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141416 5039 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141428 5039 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141437 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141447 5039 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141475 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141485 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141494 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141504 5039 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141512 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141521 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141530 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141555 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141564 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141572 5039 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141580 5039 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141589 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141596 5039 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141604 5039 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141612 5039 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141638 5039 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141652 5039 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141663 5039 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141672 5039 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141581 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\"
:[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141718 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141732 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141740 5039 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141749 5039 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141757 5039 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141765 5039 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141773 5039 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141781 5039 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141790 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141798 5039 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141808 5039 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141817 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141825 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141833 5039 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141840 5039 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141849 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141858 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141867 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141875 5039 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141883 5039 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141891 5039 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141898 5039 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141906 5039 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141914 5039 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141921 5039 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141929 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141937 5039 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141945 5039 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.142114 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes"
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.142970 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes"
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.143618 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes"
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.144072 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes"
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.145112 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes"
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.145810 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes"
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.146505 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes"
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.146767 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.148492 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes"
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.148919 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.148963 5039 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.149146 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.150825 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.151475 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.151970 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.152773 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.154048 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.154772 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.155391 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.156144 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.156888 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.157114 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.157434 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.158244 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.158814 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.158954 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.159638 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.160197 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.160810 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.161415 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.162317 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.165879 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.174703 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.184257 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.184732 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.196232 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.197291 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.198086 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.199594 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.200612 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.201567 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.201865 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.212499 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.220319 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.237763 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.246386 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.246423 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.246435 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.246447 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.257629 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:04:16 crc kubenswrapper[5039]: E0130 13:04:16.258432 5039 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-crc\" already exists" pod="openshift-etcd/etcd-crc" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.276301 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.293569 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae5
0603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.303124 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.320216 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 13:04:16 crc kubenswrapper[5039]: W0130 13:04:16.330493 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-51ea8f4147704fbb2302e667a6256f821341775525d58df1d7c223711e5f9961 WatchSource:0}: Error finding container 51ea8f4147704fbb2302e667a6256f821341775525d58df1d7c223711e5f9961: Status 404 returned error can't find the container with id 51ea8f4147704fbb2302e667a6256f821341775525d58df1d7c223711e5f9961 Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.334797 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.343869 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 30 13:04:16 crc kubenswrapper[5039]: W0130 13:04:16.360971 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-fa1c2e7f64441835f4eadff8d04dac9efdd28cc2da6c0c91ce730587e3dca516 WatchSource:0}: Error finding container fa1c2e7f64441835f4eadff8d04dac9efdd28cc2da6c0c91ce730587e3dca516: Status 404 returned error can't find the container with id fa1c2e7f64441835f4eadff8d04dac9efdd28cc2da6c0c91ce730587e3dca516
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.549128 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:04:16 crc kubenswrapper[5039]: E0130 13:04:16.549371 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:04:17.549337043 +0000 UTC m=+22.210018320 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.633029 5039 csr.go:261] certificate signing request csr-jwsz7 is approved, waiting to be issued
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.640791 5039 csr.go:257] certificate signing request csr-jwsz7 is issued
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.650116 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.650166 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.650203 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.650230 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 13:04:16 crc kubenswrapper[5039]: E0130 13:04:16.650332 5039 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 30 13:04:16 crc kubenswrapper[5039]: E0130 13:04:16.650366 5039 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 30 13:04:16 crc kubenswrapper[5039]: E0130 13:04:16.650387 5039 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 30 13:04:16 crc kubenswrapper[5039]: E0130 13:04:16.650400 5039 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 13:04:16 crc kubenswrapper[5039]: E0130 13:04:16.650417 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 13:04:17.650395796 +0000 UTC m=+22.311077053 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 30 13:04:16 crc kubenswrapper[5039]: E0130 13:04:16.650450 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 13:04:17.650434467 +0000 UTC m=+22.311115734 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 13:04:16 crc kubenswrapper[5039]: E0130 13:04:16.650335 5039 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 30 13:04:16 crc kubenswrapper[5039]: E0130 13:04:16.650465 5039 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 30 13:04:16 crc kubenswrapper[5039]: E0130 13:04:16.650511 5039 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 30 13:04:16 crc kubenswrapper[5039]: E0130 13:04:16.650527 5039 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 13:04:16 crc kubenswrapper[5039]: E0130 13:04:16.650491 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 13:04:17.650482048 +0000 UTC m=+22.311163345 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 30 13:04:16 crc kubenswrapper[5039]: E0130 13:04:16.650603 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 13:04:17.650591401 +0000 UTC m=+22.311272828 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.004595 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-m8wkh"]
Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.005047 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-m8wkh"
Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.006843 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.007005 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.007849 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.019874 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.031770 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.034932 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 16:52:35.226738989 +0000 UTC Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.043713 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.053974 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.054413 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gqwb\" (UniqueName: \"kubernetes.io/projected/2d1070da-c6b8-4c78-a94e-27930ad6701c-kube-api-access-7gqwb\") pod \"node-resolver-m8wkh\" (UID: \"2d1070da-c6b8-4c78-a94e-27930ad6701c\") " pod="openshift-dns/node-resolver-m8wkh"
Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.054481 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/2d1070da-c6b8-4c78-a94e-27930ad6701c-hosts-file\") pod \"node-resolver-m8wkh\" (UID: \"2d1070da-c6b8-4c78-a94e-27930ad6701c\") " pod="openshift-dns/node-resolver-m8wkh"
Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.066779 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.077044 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.087437 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.093036 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 13:04:17 crc kubenswrapper[5039]: E0130 13:04:17.093233 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.107864 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f
95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.155498 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gqwb\" (UniqueName: \"kubernetes.io/projected/2d1070da-c6b8-4c78-a94e-27930ad6701c-kube-api-access-7gqwb\") pod \"node-resolver-m8wkh\" (UID: \"2d1070da-c6b8-4c78-a94e-27930ad6701c\") " pod="openshift-dns/node-resolver-m8wkh"
Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.155574 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/2d1070da-c6b8-4c78-a94e-27930ad6701c-hosts-file\") pod \"node-resolver-m8wkh\" (UID: \"2d1070da-c6b8-4c78-a94e-27930ad6701c\") " pod="openshift-dns/node-resolver-m8wkh"
Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.155687 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/2d1070da-c6b8-4c78-a94e-27930ad6701c-hosts-file\") pod \"node-resolver-m8wkh\" (UID: \"2d1070da-c6b8-4c78-a94e-27930ad6701c\") " pod="openshift-dns/node-resolver-m8wkh"
Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.176485 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gqwb\" (UniqueName: \"kubernetes.io/projected/2d1070da-c6b8-4c78-a94e-27930ad6701c-kube-api-access-7gqwb\") pod \"node-resolver-m8wkh\" (UID: \"2d1070da-c6b8-4c78-a94e-27930ad6701c\") " pod="openshift-dns/node-resolver-m8wkh"
Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.229140 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.229622 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.230908 5039 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527" exitCode=255
Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.230977 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527"}
Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.231060 5039 scope.go:117] "RemoveContainer" containerID="0be3fe8bec722d693168dcf88050783c7a212c4ee00f1beb1db66e40aaaa6b3f"
Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.231984 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6"}
Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.232082 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"fa1c2e7f64441835f4eadff8d04dac9efdd28cc2da6c0c91ce730587e3dca516"}
Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.235379 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef"}
Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.235434 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36"}
Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.235450 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"d247b8a3d3ddca289413ef2b736c27ab4d4fc9f90fc50c736cf5435b29c785d5"}
Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.236337 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"51ea8f4147704fbb2302e667a6256f821341775525d58df1d7c223711e5f9961"}
Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.250750 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status:
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:17Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.264962 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:17Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.279223 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:17Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.293052 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:17Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.305287 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:17Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.314230 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:17Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.317466 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-m8wkh" Jan 30 13:04:17 crc kubenswrapper[5039]: W0130 13:04:17.328165 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d1070da_c6b8_4c78_a94e_27930ad6701c.slice/crio-8308cc49b36487a96401c57dae8c316a0d05c6d94e690d16dcca9951b8eca06a WatchSource:0}: Error finding container 8308cc49b36487a96401c57dae8c316a0d05c6d94e690d16dcca9951b8eca06a: Status 404 returned error can't find the container with id 8308cc49b36487a96401c57dae8c316a0d05c6d94e690d16dcca9951b8eca06a Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.346234 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.346438 5039 scope.go:117] "RemoveContainer" containerID="6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527" Jan 30 13:04:17 crc kubenswrapper[5039]: E0130 13:04:17.346885 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.351371 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f
95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:17Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.375854 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:17Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.400374 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:1
6Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:17Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.411487 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-rmqgh"] Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.411849 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.412214 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-t2btn"] Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.412706 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.414557 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-87gqd"] Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.414788 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.414946 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.415376 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.415424 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.415444 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.415563 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.415854 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.418974 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.419066 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.419144 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.419264 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-rp9bm"] Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.419384 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.419401 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.419638 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.419758 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.419836 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.419853 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.419947 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.419983 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.420082 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.422036 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:17Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.439790 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.439977 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.457854 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:17Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.459549 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/43aaddc4-968e-4db3-9f57-308a87d0dbb5-mcd-auth-proxy-config\") pod \"machine-config-daemon-t2btn\" (UID: \"43aaddc4-968e-4db3-9f57-308a87d0dbb5\") " pod="openshift-machine-config-operator/machine-config-daemon-t2btn" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.459653 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5kcb\" (UniqueName: \"kubernetes.io/projected/43aaddc4-968e-4db3-9f57-308a87d0dbb5-kube-api-access-s5kcb\") pod \"machine-config-daemon-t2btn\" (UID: \"43aaddc4-968e-4db3-9f57-308a87d0dbb5\") " pod="openshift-machine-config-operator/machine-config-daemon-t2btn" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.459708 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58cch\" (UniqueName: 
\"kubernetes.io/projected/6e82b591-e814-4c37-9cc0-79f59b317be2-kube-api-access-58cch\") pod \"multus-additional-cni-plugins-rp9bm\" (UID: \"6e82b591-e814-4c37-9cc0-79f59b317be2\") " pod="openshift-multus/multus-additional-cni-plugins-rp9bm" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.459741 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-run-ovn\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.459776 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-cni-bin\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.459806 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-etc-kubernetes\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.459839 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-host-var-lib-cni-multus\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.459861 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-multus-conf-dir\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.459904 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6e82b591-e814-4c37-9cc0-79f59b317be2-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rp9bm\" (UID: \"6e82b591-e814-4c37-9cc0-79f59b317be2\") " pod="openshift-multus/multus-additional-cni-plugins-rp9bm" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.459934 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-slash\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.459960 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-run-ovn-kubernetes\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.460043 5039 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-host-run-k8s-cni-cncf-io\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.460119 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-host-run-multus-certs\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.460152 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-etc-openvswitch\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.460174 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-ovn-node-metrics-cert\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.460203 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-ovnkube-script-lib\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.460234 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e001d6-9163-47f7-b2b0-b21b2979b869-cni-binary-copy\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.460272 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6e82b591-e814-4c37-9cc0-79f59b317be2-system-cni-dir\") pod \"multus-additional-cni-plugins-rp9bm\" (UID: \"6e82b591-e814-4c37-9cc0-79f59b317be2\") " pod="openshift-multus/multus-additional-cni-plugins-rp9bm" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.460297 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-log-socket\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.460328 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-env-overrides\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 
13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.460366 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-system-cni-dir\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.460395 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-cnibin\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.460429 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-multus-cni-dir\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.460456 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-os-release\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.460488 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-kubelet\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.460516 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-run-netns\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.460544 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-hostroot\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.460569 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-node-log\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.460596 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/43aaddc4-968e-4db3-9f57-308a87d0dbb5-rootfs\") pod \"machine-config-daemon-t2btn\" (UID: \"43aaddc4-968e-4db3-9f57-308a87d0dbb5\") " pod="openshift-machine-config-operator/machine-config-daemon-t2btn" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.460626 5039 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e001d6-9163-47f7-b2b0-b21b2979b869-multus-daemon-config\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.460656 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-cni-netd\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.460682 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-ovnkube-config\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.460704 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8ztz\" (UniqueName: \"kubernetes.io/projected/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-kube-api-access-x8ztz\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.460732 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-run-systemd\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.460758 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-run-openvswitch\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.460786 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.460811 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6e82b591-e814-4c37-9cc0-79f59b317be2-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rp9bm\" (UID: \"6e82b591-e814-4c37-9cc0-79f59b317be2\") " pod="openshift-multus/multus-additional-cni-plugins-rp9bm" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.460836 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-host-var-lib-cni-bin\") pod \"multus-rmqgh\" 
(UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.460875 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mck4w\" (UniqueName: \"kubernetes.io/projected/81e001d6-9163-47f7-b2b0-b21b2979b869-kube-api-access-mck4w\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.460918 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6e82b591-e814-4c37-9cc0-79f59b317be2-cni-binary-copy\") pod \"multus-additional-cni-plugins-rp9bm\" (UID: \"6e82b591-e814-4c37-9cc0-79f59b317be2\") " pod="openshift-multus/multus-additional-cni-plugins-rp9bm" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.460947 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-systemd-units\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.460972 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/43aaddc4-968e-4db3-9f57-308a87d0dbb5-proxy-tls\") pod \"machine-config-daemon-t2btn\" (UID: \"43aaddc4-968e-4db3-9f57-308a87d0dbb5\") " pod="openshift-machine-config-operator/machine-config-daemon-t2btn" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.461051 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-host-var-lib-kubelet\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.461088 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6e82b591-e814-4c37-9cc0-79f59b317be2-cnibin\") pod \"multus-additional-cni-plugins-rp9bm\" (UID: \"6e82b591-e814-4c37-9cc0-79f59b317be2\") " pod="openshift-multus/multus-additional-cni-plugins-rp9bm" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.461115 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6e82b591-e814-4c37-9cc0-79f59b317be2-os-release\") pod \"multus-additional-cni-plugins-rp9bm\" (UID: \"6e82b591-e814-4c37-9cc0-79f59b317be2\") " pod="openshift-multus/multus-additional-cni-plugins-rp9bm" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.461141 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-multus-socket-dir-parent\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.461169 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" 
(UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-var-lib-openvswitch\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.461195 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-host-run-netns\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.480564 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:17Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.494476 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:17Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.507455 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:17Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.523530 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:17Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.542055 5039 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"cont
ainerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1b
faa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:17Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.558803 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be3fe8bec722d693168dcf88050783c7a212c4ee00f1beb1db66e40aaaa6b3f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:09Z\\\",\\\"message\\\":\\\"W0130 13:03:59.146596 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 
13:03:59.146826 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769778239 cert, and key in /tmp/serving-cert-69934527/serving-signer.crt, /tmp/serving-cert-69934527/serving-signer.key\\\\nI0130 13:03:59.450479 1 observer_polling.go:159] Starting file observer\\\\nW0130 13:03:59.452908 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 13:03:59.453085 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:03:59.455361 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-69934527/tls.crt::/tmp/serving-cert-69934527/tls.key\\\\\\\"\\\\nF0130 13:04:09.832177 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 
13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:17Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.561497 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.561633 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58cch\" (UniqueName: \"kubernetes.io/projected/6e82b591-e814-4c37-9cc0-79f59b317be2-kube-api-access-58cch\") pod \"multus-additional-cni-plugins-rp9bm\" (UID: \"6e82b591-e814-4c37-9cc0-79f59b317be2\") " pod="openshift-multus/multus-additional-cni-plugins-rp9bm" Jan 30 
13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.561669 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-run-ovn\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: E0130 13:04:17.561699 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:04:19.561666013 +0000 UTC m=+24.222347250 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.561726 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-run-ovn\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.561747 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-cni-bin\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.561806 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-etc-kubernetes\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.561821 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-cni-bin\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.561841 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/43aaddc4-968e-4db3-9f57-308a87d0dbb5-mcd-auth-proxy-config\") pod \"machine-config-daemon-t2btn\" (UID: \"43aaddc4-968e-4db3-9f57-308a87d0dbb5\") " pod="openshift-machine-config-operator/machine-config-daemon-t2btn" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.561871 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5kcb\" (UniqueName: \"kubernetes.io/projected/43aaddc4-968e-4db3-9f57-308a87d0dbb5-kube-api-access-s5kcb\") pod \"machine-config-daemon-t2btn\" (UID: \"43aaddc4-968e-4db3-9f57-308a87d0dbb5\") " pod="openshift-machine-config-operator/machine-config-daemon-t2btn" Jan 30 13:04:17 crc 
kubenswrapper[5039]: I0130 13:04:17.561888 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-etc-kubernetes\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.561906 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6e82b591-e814-4c37-9cc0-79f59b317be2-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rp9bm\" (UID: \"6e82b591-e814-4c37-9cc0-79f59b317be2\") " pod="openshift-multus/multus-additional-cni-plugins-rp9bm" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.561965 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-slash\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.561987 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-run-ovn-kubernetes\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562005 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-host-run-k8s-cni-cncf-io\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562037 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-host-var-lib-cni-multus\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562052 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-multus-conf-dir\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562084 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-host-run-multus-certs\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562076 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-run-ovn-kubernetes\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562100 5039 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-ovn-node-metrics-cert\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562107 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-slash\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562136 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-host-run-multus-certs\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562119 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-ovnkube-script-lib\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562160 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-multus-conf-dir\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562110 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-host-run-k8s-cni-cncf-io\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562149 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-host-var-lib-cni-multus\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562218 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e001d6-9163-47f7-b2b0-b21b2979b869-cni-binary-copy\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562251 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-etc-openvswitch\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562284 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-log-socket\") pod \"ovnkube-node-87gqd\" (UID: 
\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562309 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-env-overrides\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562336 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-system-cni-dir\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562365 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-cnibin\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562387 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-etc-openvswitch\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562393 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6e82b591-e814-4c37-9cc0-79f59b317be2-system-cni-dir\") pod \"multus-additional-cni-plugins-rp9bm\" (UID: \"6e82b591-e814-4c37-9cc0-79f59b317be2\") " pod="openshift-multus/multus-additional-cni-plugins-rp9bm" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562423 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-multus-cni-dir\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562443 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-system-cni-dir\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562454 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-os-release\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562486 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-kubelet\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562512 5039 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-run-netns\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562536 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-hostroot\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562557 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6e82b591-e814-4c37-9cc0-79f59b317be2-system-cni-dir\") pod \"multus-additional-cni-plugins-rp9bm\" (UID: \"6e82b591-e814-4c37-9cc0-79f59b317be2\") " pod="openshift-multus/multus-additional-cni-plugins-rp9bm" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562573 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-node-log\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562578 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-cnibin\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562537 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6e82b591-e814-4c37-9cc0-79f59b317be2-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rp9bm\" (UID: \"6e82b591-e814-4c37-9cc0-79f59b317be2\") " pod="openshift-multus/multus-additional-cni-plugins-rp9bm" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562605 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/43aaddc4-968e-4db3-9f57-308a87d0dbb5-rootfs\") pod \"machine-config-daemon-t2btn\" (UID: \"43aaddc4-968e-4db3-9f57-308a87d0dbb5\") " pod="openshift-machine-config-operator/machine-config-daemon-t2btn" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562637 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-hostroot\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562643 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e001d6-9163-47f7-b2b0-b21b2979b869-multus-daemon-config\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562604 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/43aaddc4-968e-4db3-9f57-308a87d0dbb5-mcd-auth-proxy-config\") pod 
\"machine-config-daemon-t2btn\" (UID: \"43aaddc4-968e-4db3-9f57-308a87d0dbb5\") " pod="openshift-machine-config-operator/machine-config-daemon-t2btn" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562678 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8ztz\" (UniqueName: \"kubernetes.io/projected/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-kube-api-access-x8ztz\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562701 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-kubelet\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562607 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-run-netns\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562723 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-cni-netd\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562750 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/43aaddc4-968e-4db3-9f57-308a87d0dbb5-rootfs\") pod \"machine-config-daemon-t2btn\" (UID: \"43aaddc4-968e-4db3-9f57-308a87d0dbb5\") " pod="openshift-machine-config-operator/machine-config-daemon-t2btn" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562755 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-ovnkube-config\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562781 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-cni-netd\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562786 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-run-openvswitch\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562811 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-ovnkube-script-lib\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562822 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562649 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-node-log\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562727 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-multus-cni-dir\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562364 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-log-socket\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562859 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-run-systemd\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562846 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-os-release\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562893 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6e82b591-e814-4c37-9cc0-79f59b317be2-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rp9bm\" (UID: \"6e82b591-e814-4c37-9cc0-79f59b317be2\") " pod="openshift-multus/multus-additional-cni-plugins-rp9bm" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562910 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-env-overrides\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562925 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-host-var-lib-cni-bin\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562944 5039 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562961 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mck4w\" (UniqueName: \"kubernetes.io/projected/81e001d6-9163-47f7-b2b0-b21b2979b869-kube-api-access-mck4w\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562979 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e001d6-9163-47f7-b2b0-b21b2979b869-cni-binary-copy\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.563000 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6e82b591-e814-4c37-9cc0-79f59b317be2-cni-binary-copy\") pod \"multus-additional-cni-plugins-rp9bm\" (UID: \"6e82b591-e814-4c37-9cc0-79f59b317be2\") " pod="openshift-multus/multus-additional-cni-plugins-rp9bm" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.563005 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-run-openvswitch\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.563023 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-host-var-lib-cni-bin\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.563053 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-systemd-units\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562993 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-run-systemd\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.563094 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-systemd-units\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.563229 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" 
(UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-host-var-lib-kubelet\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.563269 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/43aaddc4-968e-4db3-9f57-308a87d0dbb5-proxy-tls\") pod \"machine-config-daemon-t2btn\" (UID: \"43aaddc4-968e-4db3-9f57-308a87d0dbb5\") " pod="openshift-machine-config-operator/machine-config-daemon-t2btn" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.563288 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6e82b591-e814-4c37-9cc0-79f59b317be2-cnibin\") pod \"multus-additional-cni-plugins-rp9bm\" (UID: \"6e82b591-e814-4c37-9cc0-79f59b317be2\") " pod="openshift-multus/multus-additional-cni-plugins-rp9bm" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.563304 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6e82b591-e814-4c37-9cc0-79f59b317be2-os-release\") pod \"multus-additional-cni-plugins-rp9bm\" (UID: \"6e82b591-e814-4c37-9cc0-79f59b317be2\") " pod="openshift-multus/multus-additional-cni-plugins-rp9bm" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.563320 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-multus-socket-dir-parent\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.563319 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-host-var-lib-kubelet\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.563336 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-var-lib-openvswitch\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.563351 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-host-run-netns\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.563368 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6e82b591-e814-4c37-9cc0-79f59b317be2-cnibin\") pod \"multus-additional-cni-plugins-rp9bm\" (UID: \"6e82b591-e814-4c37-9cc0-79f59b317be2\") " pod="openshift-multus/multus-additional-cni-plugins-rp9bm" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.563390 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/81e001d6-9163-47f7-b2b0-b21b2979b869-multus-daemon-config\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.563409 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6e82b591-e814-4c37-9cc0-79f59b317be2-os-release\") pod \"multus-additional-cni-plugins-rp9bm\" (UID: \"6e82b591-e814-4c37-9cc0-79f59b317be2\") " pod="openshift-multus/multus-additional-cni-plugins-rp9bm" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.563394 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-host-run-netns\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.563433 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-ovnkube-config\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.563444 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-multus-socket-dir-parent\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.563477 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-var-lib-openvswitch\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.563623 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6e82b591-e814-4c37-9cc0-79f59b317be2-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rp9bm\" (UID: \"6e82b591-e814-4c37-9cc0-79f59b317be2\") " pod="openshift-multus/multus-additional-cni-plugins-rp9bm" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.565967 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/43aaddc4-968e-4db3-9f57-308a87d0dbb5-proxy-tls\") pod \"machine-config-daemon-t2btn\" (UID: \"43aaddc4-968e-4db3-9f57-308a87d0dbb5\") " pod="openshift-machine-config-operator/machine-config-daemon-t2btn" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.566072 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6e82b591-e814-4c37-9cc0-79f59b317be2-cni-binary-copy\") pod \"multus-additional-cni-plugins-rp9bm\" (UID: \"6e82b591-e814-4c37-9cc0-79f59b317be2\") " pod="openshift-multus/multus-additional-cni-plugins-rp9bm" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.566940 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-ovn-node-metrics-cert\") pod 
\"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.575094 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:17Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.580050 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5kcb\" (UniqueName: \"kubernetes.io/projected/43aaddc4-968e-4db3-9f57-308a87d0dbb5-kube-api-access-s5kcb\") pod \"machine-config-daemon-t2btn\" (UID: \"43aaddc4-968e-4db3-9f57-308a87d0dbb5\") " pod="openshift-machine-config-operator/machine-config-daemon-t2btn" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.581838 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mck4w\" (UniqueName: \"kubernetes.io/projected/81e001d6-9163-47f7-b2b0-b21b2979b869-kube-api-access-mck4w\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.583982 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8ztz\" (UniqueName: \"kubernetes.io/projected/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-kube-api-access-x8ztz\") pod \"ovnkube-node-87gqd\" (UID: 
\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.583982 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58cch\" (UniqueName: \"kubernetes.io/projected/6e82b591-e814-4c37-9cc0-79f59b317be2-kube-api-access-58cch\") pod \"multus-additional-cni-plugins-rp9bm\" (UID: \"6e82b591-e814-4c37-9cc0-79f59b317be2\") " pod="openshift-multus/multus-additional-cni-plugins-rp9bm" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.600420 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:17Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.619944 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f
95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:17Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.632602 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be3fe8bec722d693168dcf88050783c7a212c4ee00f1beb1db66e40aaaa6b3f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:09Z\\\",\\\"message\\\":\\\"W0130 13:03:59.146596 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 
13:03:59.146826 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769778239 cert, and key in /tmp/serving-cert-69934527/serving-signer.crt, /tmp/serving-cert-69934527/serving-signer.key\\\\nI0130 13:03:59.450479 1 observer_polling.go:159] Starting file observer\\\\nW0130 13:03:59.452908 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 13:03:59.453085 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:03:59.455361 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-69934527/tls.crt::/tmp/serving-cert-69934527/tls.key\\\\\\\"\\\\nF0130 13:04:09.832177 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 
13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:17Z is after 2025-08-24T17:21:41Z"
Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.642766 5039 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-30 12:59:16 +0000 UTC, rotation deadline is 2026-11-23 14:20:17.808408647 +0000 UTC
Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.642990 5039 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7129h16m0.165422037s for next certificate rotation
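The two certificate_manager.go lines above are internally consistent: the kubelet's client certificate expires 2027-01-30 12:59:16 UTC, the chosen rotation deadline of 2026-11-23 14:20:17 UTC sits at roughly 81% of a one-year validity window, and the quoted wait of 7129h16m is simply deadline minus the log timestamp (297 days plus 1h16m). A minimal Go sketch of that arithmetic, assuming the 70-90% jitter window that client-go's certificate manager is understood to use upstream (the fraction and the NotBefore date are assumptions, not values recorded in this log):

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a deadline at a random point between 70% and 90%
// of the certificate's validity window (assumed client-go behaviour).
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	// Expiration and log timestamp taken from the lines above; NotBefore is
	// assumed to be one year before the reported expiration.
	notBefore := time.Date(2026, 1, 30, 12, 59, 16, 0, time.UTC)
	notAfter := time.Date(2027, 1, 30, 12, 59, 16, 0, time.UTC)
	now := time.Date(2026, 1, 30, 13, 4, 17, 0, time.UTC)

	deadline := time.Date(2026, 11, 23, 14, 20, 17, 808408647, time.UTC) // from the log
	frac := float64(deadline.Sub(notBefore)) / float64(notAfter.Sub(notBefore))
	fmt.Printf("logged deadline sits at %.1f%% of the validity window\n", 100*frac) // ~81.4%
	fmt.Printf("wait until rotation: %s\n", deadline.Sub(now).Round(time.Minute))   // 7129h16m0s

	fmt.Printf("a fresh jittered deadline would be: %s\n", rotationDeadline(notBefore, notAfter))
}
```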
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:17Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.660088 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:17Z is after 2025-08-24T17:21:41Z"
Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.663699 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.663737 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.663760 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.663784 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 13:04:17 crc kubenswrapper[5039]: E0130 13:04:17.663846 5039 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 30 13:04:17 crc kubenswrapper[5039]: E0130 13:04:17.663882 5039 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 30 13:04:17 crc kubenswrapper[5039]: E0130 13:04:17.663882 5039 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 30 13:04:17 crc kubenswrapper[5039]: E0130 13:04:17.663908 5039 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
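Every "Failed to update status for pod" entry in this log ends the same way: the network-node-identity webhook's serving certificate expired on 2025-08-24, while the node clock reads 2026-01-30, so every TLS handshake to 127.0.0.1:9743 fails validation. A minimal sketch of the validity-window check that Go's crypto/x509 applies during verification; the file path is hypothetical and the error text is modelled on the log, not quoted from library source:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Hypothetical path to a PEM-encoded serving certificate.
	pemBytes, err := os.ReadFile("serving-cert.pem")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// The same NotBefore/NotAfter bounds check that makes the handshake
	// fail with "certificate has expired or is not yet valid".
	now := time.Now()
	switch {
	case now.Before(cert.NotBefore):
		fmt.Printf("certificate not yet valid: current time %s is before %s\n",
			now.UTC().Format(time.RFC3339), cert.NotBefore.UTC().Format(time.RFC3339))
	case now.After(cert.NotAfter):
		fmt.Printf("certificate has expired: current time %s is after %s\n",
			now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	default:
		fmt.Printf("certificate valid until %s\n", cert.NotAfter.UTC().Format(time.RFC3339))
	}
}
```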
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 13:04:17 crc kubenswrapper[5039]: E0130 13:04:17.663921 5039 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:04:17 crc kubenswrapper[5039]: E0130 13:04:17.663933 5039 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 13:04:17 crc kubenswrapper[5039]: E0130 13:04:17.663894 5039 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 13:04:17 crc kubenswrapper[5039]: E0130 13:04:17.663951 5039 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:04:17 crc kubenswrapper[5039]: E0130 13:04:17.663908 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 13:04:19.663888806 +0000 UTC m=+24.324570033 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 13:04:17 crc kubenswrapper[5039]: E0130 13:04:17.663970 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 13:04:19.663961978 +0000 UTC m=+24.324643205 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:04:17 crc kubenswrapper[5039]: E0130 13:04:17.663983 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 13:04:19.663976459 +0000 UTC m=+24.324657686 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 13:04:17 crc kubenswrapper[5039]: E0130 13:04:17.663993 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 13:04:19.663987899 +0000 UTC m=+24.324669126 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.690444 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:17Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.706546 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:17Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:17 crc 
kubenswrapper[5039]: I0130 13:04:17.724047 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.725133 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-01-30T13:04:17Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.728209 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.737469 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: W0130 13:04:17.744785 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod43aaddc4_968e_4db3_9f57_308a87d0dbb5.slice/crio-283ee8e450ad7a0275db8fe94ec5b438127c52d53003881d28f85ca6490a1817 WatchSource:0}: Error finding container 283ee8e450ad7a0275db8fe94ec5b438127c52d53003881d28f85ca6490a1817: Status 404 returned error can't find the container with id 283ee8e450ad7a0275db8fe94ec5b438127c52d53003881d28f85ca6490a1817 Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.752421 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" Jan 30 13:04:17 crc kubenswrapper[5039]: W0130 13:04:17.755450 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4eda5a3d_fbea_4f7d_98fb_ea8d0f5d7c1f.slice/crio-f53a831ea6aba64393f200f4f37b459c3392f070edda416f102077934db13cfd WatchSource:0}: Error finding container f53a831ea6aba64393f200f4f37b459c3392f070edda416f102077934db13cfd: Status 404 returned error can't find the container with id f53a831ea6aba64393f200f4f37b459c3392f070edda416f102077934db13cfd Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.770412 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:17Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.800382 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:17Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.817248 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:17Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.841846 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:17Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.035364 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 17:45:37.1095083 +0000 UTC Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.092792 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.092833 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:04:18 crc kubenswrapper[5039]: E0130 13:04:18.092923 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:04:18 crc kubenswrapper[5039]: E0130 13:04:18.093099 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.096484 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.097298 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.097924 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.098657 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.099304 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.239699 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-m8wkh" event={"ID":"2d1070da-c6b8-4c78-a94e-27930ad6701c","Type":"ContainerStarted","Data":"30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a"} Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.239770 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-m8wkh" event={"ID":"2d1070da-c6b8-4c78-a94e-27930ad6701c","Type":"ContainerStarted","Data":"8308cc49b36487a96401c57dae8c316a0d05c6d94e690d16dcca9951b8eca06a"} Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.241544 5039 generic.go:334] "Generic (PLEG): container finished" podID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerID="6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705" exitCode=0 Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.241589 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" event={"ID":"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f","Type":"ContainerDied","Data":"6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705"} Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.241605 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" event={"ID":"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f","Type":"ContainerStarted","Data":"f53a831ea6aba64393f200f4f37b459c3392f070edda416f102077934db13cfd"} Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.243099 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerStarted","Data":"7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c"} Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.243122 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerStarted","Data":"008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90"} Jan 30 
13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.243136 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerStarted","Data":"283ee8e450ad7a0275db8fe94ec5b438127c52d53003881d28f85ca6490a1817"} Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.244372 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.246242 5039 scope.go:117] "RemoveContainer" containerID="6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527" Jan 30 13:04:18 crc kubenswrapper[5039]: E0130 13:04:18.246360 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.251485 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" event={"ID":"6e82b591-e814-4c37-9cc0-79f59b317be2","Type":"ContainerStarted","Data":"49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d"} Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.251553 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" event={"ID":"6e82b591-e814-4c37-9cc0-79f59b317be2","Type":"ContainerStarted","Data":"d73be27e53722862f6021319963bf5f9fc1da5a784e3a3f08c290cd84e4e9e5d"} Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.253719 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rmqgh" event={"ID":"81e001d6-9163-47f7-b2b0-b21b2979b869","Type":"ContainerStarted","Data":"aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22"} Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.253745 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rmqgh" event={"ID":"81e001d6-9163-47f7-b2b0-b21b2979b869","Type":"ContainerStarted","Data":"9e89f85ea8e64495e0734c44ad31f15c79648aa70b6d3baa5da7b74029a95e49"} Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.262486 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.286191 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.298468 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.316366 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.332493 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.359526 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.397325 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.411297 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.422508 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.441822 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\
\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.458904 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://0be3fe8bec722d693168dcf88050783c7a212c4ee00f1beb1db66e40aaaa6b3f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:09Z\\\",\\\"message\\\":\\\"W0130 13:03:59.146596 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 13:03:59.146826 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769778239 cert, and key in /tmp/serving-cert-69934527/serving-signer.crt, /tmp/serving-cert-69934527/serving-signer.key\\\\nI0130 13:03:59.450479 1 observer_polling.go:159] Starting file observer\\\\nW0130 13:03:59.452908 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 13:03:59.453085 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:03:59.455361 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-69934527/tls.crt::/tmp/serving-cert-69934527/tls.key\\\\\\\"\\\\nF0130 13:04:09.832177 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.473095 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.491411 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: 
I0130 13:04:18.506819 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.528096 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.555040 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.574382 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.592208 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.603446 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://
008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.620522 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.634181 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.641790 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.645077 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.647119 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.650788 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.662821 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.675644 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.692169 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z 
is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.711543 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\"
:\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.726831 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.748532 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z 
is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.770276 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\"
:\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.782316 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.793830 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.811980 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"}
,{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.829696 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.846719 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.856493 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.867819 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.884594 5039 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.895999 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.905724 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.926211 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z"
Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.036265 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 11:53:09.538715642 +0000 UTC
Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.093498 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 13:04:19 crc kubenswrapper[5039]: E0130 13:04:19.093641 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.258281 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19"}
Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.261455 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" event={"ID":"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f","Type":"ContainerStarted","Data":"5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2"}
Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.261602 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" event={"ID":"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f","Type":"ContainerStarted","Data":"28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99"}
Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.261705 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" event={"ID":"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f","Type":"ContainerStarted","Data":"afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e"}
Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.261785 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" event={"ID":"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f","Type":"ContainerStarted","Data":"7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e"}
Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.261862 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" event={"ID":"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f","Type":"ContainerStarted","Data":"82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f"}
Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.263049 5039 generic.go:334] "Generic (PLEG): container finished" podID="6e82b591-e814-4c37-9cc0-79f59b317be2" containerID="49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d" exitCode=0
Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.263134 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" event={"ID":"6e82b591-e814-4c37-9cc0-79f59b317be2","Type":"ContainerDied","Data":"49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d"}
Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.272266 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.284673 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.299510 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.340671 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\
\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.369609 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.385689 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.439164 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.459211 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContain
erStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.471135 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.481599 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.490716 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.504652 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":
\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.517644 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib
-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.526926 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-
access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.537176 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.564770 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.584356 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:04:19 crc kubenswrapper[5039]: E0130 13:04:19.584504 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:04:23.584480904 +0000 UTC m=+28.245162131 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.607178 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.648077 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.685030 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.685082 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.685119 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.685143 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.685167 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:19 crc kubenswrapper[5039]: E0130 13:04:19.685238 5039 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 13:04:19 crc kubenswrapper[5039]: E0130 13:04:19.685252 5039 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 13:04:19 crc kubenswrapper[5039]: E0130 13:04:19.685285 5039 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 13:04:19 crc kubenswrapper[5039]: E0130 13:04:19.685300 5039 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:04:19 crc kubenswrapper[5039]: E0130 13:04:19.685313 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 13:04:23.68529343 +0000 UTC m=+28.345974697 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 13:04:19 crc kubenswrapper[5039]: E0130 13:04:19.685334 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 13:04:23.68532323 +0000 UTC m=+28.346004457 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:04:19 crc kubenswrapper[5039]: E0130 13:04:19.685263 5039 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 13:04:19 crc kubenswrapper[5039]: E0130 13:04:19.685356 5039 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 13:04:19 crc kubenswrapper[5039]: E0130 13:04:19.685366 5039 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:04:19 crc kubenswrapper[5039]: E0130 13:04:19.685394 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 13:04:23.685385592 +0000 UTC m=+28.346066899 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:04:19 crc kubenswrapper[5039]: E0130 13:04:19.685263 5039 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 13:04:19 crc kubenswrapper[5039]: E0130 13:04:19.685429 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 13:04:23.685420033 +0000 UTC m=+28.346101270 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.727917 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-g4tnt"] Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.728291 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-g4tnt" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.733894 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.738291 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.757871 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.777461 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.785940 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/773bceff-9225-40fa-9d23-50db3f74fb37-host\") pod \"node-ca-g4tnt\" (UID: \"773bceff-9225-40fa-9d23-50db3f74fb37\") " pod="openshift-image-registry/node-ca-g4tnt" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.786021 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddsqs\" (UniqueName: \"kubernetes.io/projected/773bceff-9225-40fa-9d23-50db3f74fb37-kube-api-access-ddsqs\") pod \"node-ca-g4tnt\" (UID: \"773bceff-9225-40fa-9d23-50db3f74fb37\") " pod="openshift-image-registry/node-ca-g4tnt" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.786053 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/773bceff-9225-40fa-9d23-50db3f74fb37-serviceca\") pod \"node-ca-g4tnt\" (UID: \"773bceff-9225-40fa-9d23-50db3f74fb37\") " pod="openshift-image-registry/node-ca-g4tnt" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.796922 5039 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.847147 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.887343 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddsqs\" (UniqueName: \"kubernetes.io/projected/773bceff-9225-40fa-9d23-50db3f74fb37-kube-api-access-ddsqs\") pod \"node-ca-g4tnt\" (UID: \"773bceff-9225-40fa-9d23-50db3f74fb37\") " pod="openshift-image-registry/node-ca-g4tnt" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.887350 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.887398 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/773bceff-9225-40fa-9d23-50db3f74fb37-serviceca\") pod \"node-ca-g4tnt\" (UID: \"773bceff-9225-40fa-9d23-50db3f74fb37\") " pod="openshift-image-registry/node-ca-g4tnt" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.887558 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/773bceff-9225-40fa-9d23-50db3f74fb37-host\") pod \"node-ca-g4tnt\" (UID: \"773bceff-9225-40fa-9d23-50db3f74fb37\") " pod="openshift-image-registry/node-ca-g4tnt" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.887704 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/773bceff-9225-40fa-9d23-50db3f74fb37-host\") pod \"node-ca-g4tnt\" (UID: \"773bceff-9225-40fa-9d23-50db3f74fb37\") " pod="openshift-image-registry/node-ca-g4tnt" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.888357 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/773bceff-9225-40fa-9d23-50db3f74fb37-serviceca\") pod \"node-ca-g4tnt\" (UID: \"773bceff-9225-40fa-9d23-50db3f74fb37\") " pod="openshift-image-registry/node-ca-g4tnt" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.932666 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddsqs\" (UniqueName: \"kubernetes.io/projected/773bceff-9225-40fa-9d23-50db3f74fb37-kube-api-access-ddsqs\") pod \"node-ca-g4tnt\" (UID: \"773bceff-9225-40fa-9d23-50db3f74fb37\") " pod="openshift-image-registry/node-ca-g4tnt" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.947824 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.992957 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:19Z 
is after 2025-08-24T17:21:41Z" Jan 30 13:04:20 crc kubenswrapper[5039]: I0130 13:04:20.027958 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:20 crc kubenswrapper[5039]: I0130 13:04:20.036381 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 18:47:49.783655706 +0000 UTC Jan 30 13:04:20 crc kubenswrapper[5039]: I0130 13:04:20.039680 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-g4tnt" Jan 30 13:04:20 crc kubenswrapper[5039]: W0130 13:04:20.056622 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod773bceff_9225_40fa_9d23_50db3f74fb37.slice/crio-5391f49e9f477728e3938f18acdb77646d7c07b2571febe099f0eeb57ea67b2c WatchSource:0}: Error finding container 5391f49e9f477728e3938f18acdb77646d7c07b2571febe099f0eeb57ea67b2c: Status 404 returned error can't find the container with id 5391f49e9f477728e3938f18acdb77646d7c07b2571febe099f0eeb57ea67b2c Jan 30 13:04:20 crc kubenswrapper[5039]: I0130 13:04:20.069653 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:20 crc kubenswrapper[5039]: I0130 13:04:20.092713 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:04:20 crc kubenswrapper[5039]: I0130 13:04:20.092733 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:04:20 crc kubenswrapper[5039]: E0130 13:04:20.092858 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:04:20 crc kubenswrapper[5039]: E0130 13:04:20.092950 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:04:20 crc kubenswrapper[5039]: I0130 13:04:20.104305 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:20 crc kubenswrapper[5039]: I0130 13:04:20.149465 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\
\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"po
dIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:20 crc kubenswrapper[5039]: I0130 13:04:20.187649 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:20 crc kubenswrapper[5039]: I0130 13:04:20.228120 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:20 crc kubenswrapper[5039]: I0130 13:04:20.269842 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:20 crc kubenswrapper[5039]: I0130 13:04:20.272634 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" event={"ID":"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f","Type":"ContainerStarted","Data":"abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7"} Jan 30 13:04:20 crc kubenswrapper[5039]: I0130 13:04:20.273920 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-g4tnt" event={"ID":"773bceff-9225-40fa-9d23-50db3f74fb37","Type":"ContainerStarted","Data":"5391f49e9f477728e3938f18acdb77646d7c07b2571febe099f0eeb57ea67b2c"} Jan 30 13:04:20 crc kubenswrapper[5039]: I0130 13:04:20.284985 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" event={"ID":"6e82b591-e814-4c37-9cc0-79f59b317be2","Type":"ContainerStarted","Data":"25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc"} Jan 30 13:04:20 crc kubenswrapper[5039]: I0130 13:04:20.309172 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:20 crc kubenswrapper[5039]: I0130 13:04:20.347233 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:20 crc kubenswrapper[5039]: I0130 13:04:20.387114 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:20 crc kubenswrapper[5039]: I0130 13:04:20.431085 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:20 crc kubenswrapper[5039]: I0130 13:04:20.473288 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContain
erStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:20 crc kubenswrapper[5039]: I0130 13:04:20.511591 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f
95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:20 crc kubenswrapper[5039]: I0130 13:04:20.547808 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:20 crc kubenswrapper[5039]: I0130 13:04:20.583976 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:20 crc kubenswrapper[5039]: I0130 13:04:20.628275 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:20 crc 
kubenswrapper[5039]: I0130 13:04:20.667557 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:20 crc kubenswrapper[5039]: I0130 13:04:20.713093 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:20 crc kubenswrapper[5039]: I0130 13:04:20.745698 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:20 crc kubenswrapper[5039]: I0130 13:04:20.787451 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:20 crc kubenswrapper[5039]: I0130 13:04:20.825391 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:20 crc kubenswrapper[5039]: I0130 13:04:20.868167 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:20 crc kubenswrapper[5039]: I0130 13:04:20.910441 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContain
erStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:20 crc kubenswrapper[5039]: I0130 13:04:20.950591 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f
95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:20 crc kubenswrapper[5039]: I0130 13:04:20.985606 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.023050 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.036475 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 23:52:09.320734806 +0000 UTC Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.070827 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.093071 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:04:21 crc kubenswrapper[5039]: E0130 13:04:21.093175 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.104700 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.147591 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.186491 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://
008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:21Z is after 2025-08-24T17:21:41Z"
Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.225520 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:21Z is after 2025-08-24T17:21:41Z"
Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.270494 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:21Z is after 2025-08-24T17:21:41Z"
Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.288925 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-g4tnt" event={"ID":"773bceff-9225-40fa-9d23-50db3f74fb37","Type":"ContainerStarted","Data":"7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e"}
Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.290878 5039 generic.go:334] "Generic (PLEG): container finished" podID="6e82b591-e814-4c37-9cc0-79f59b317be2" containerID="25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc" exitCode=0
Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.290921 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" event={"ID":"6e82b591-e814-4c37-9cc0-79f59b317be2","Type":"ContainerDied","Data":"25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc"}
Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.304388 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:21Z is after 2025-08-24T17:21:41Z"
Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.347124 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:21Z is after 2025-08-24T17:21:41Z"
Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.388163 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:21Z is after 2025-08-24T17:21:41Z"
Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.425925 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:21Z is after 2025-08-24T17:21:41Z"
Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.468248 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:21Z is after 2025-08-24T17:21:41Z"
Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.504675 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:21Z is after 2025-08-24T17:21:41Z"
Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.551479 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:21Z is after 2025-08-24T17:21:41Z"
Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.585977 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:21Z is after 2025-08-24T17:21:41Z"
Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.626913 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:21Z is after 2025-08-24T17:21:41Z"
Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.667982 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:21Z is after 2025-08-24T17:21:41Z"
Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.712329 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:21Z is after 2025-08-24T17:21:41Z"
Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.719731 5039 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.722052 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.722146 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.722161 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.722314 5039 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.746397 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:21Z is after 2025-08-24T17:21:41Z"
Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.798287 5039 kubelet_node_status.go:115] "Node was previously registered" node="crc"
Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.798583 5039 kubelet_node_status.go:79] "Successfully registered node" node="crc"
Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.799565 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.799592 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.799603 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.799618 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.799629 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:21Z","lastTransitionTime":"2026-01-30T13:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:04:21 crc kubenswrapper[5039]: E0130 13:04:21.816415 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74b4d08-4bc5-44af-a5a8-4734678f5be0\\\",\\\"systemUUID\\\":\\\"fb9e5778-7292-4e17-81ad-f7094f787b74\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.821313 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.821353 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.821364 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.821381 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.821393 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:21Z","lastTransitionTime":"2026-01-30T13:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.830328 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"o
vnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:21 crc kubenswrapper[5039]: E0130 13:04:21.838701 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74b4d08-4bc5-44af-a5a8-4734678f5be0\\\",\\\"systemUUID\\\":\\\"fb9e5778-7292-4e17-81ad-f7094f787b74\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.841975 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.842032 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.842044 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.842060 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.842072 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:21Z","lastTransitionTime":"2026-01-30T13:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.863683 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:21 crc 
kubenswrapper[5039]: E0130 13:04:21.865318 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider 
started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d
34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74b4d08-4bc5-44af-a5a8-4734678f5be0\\\",\\\"systemUUID\\\":\\\"fb9e5778-7292-4e17-81ad-f7094f787b74\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.868889 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.868924 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:21 
crc kubenswrapper[5039]: I0130 13:04:21.868936 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.868956 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.868966 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:21Z","lastTransitionTime":"2026-01-30T13:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:21 crc kubenswrapper[5039]: E0130 13:04:21.883186 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74b4d08-4bc5-44af-a5a8-4734678f5be0\\\",\\\"systemUUID\\\":\\\"fb9e5778-7292-4e17-81ad-f7094f787b74\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.886279 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.886303 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.886311 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.886324 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.886333 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:21Z","lastTransitionTime":"2026-01-30T13:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:21 crc kubenswrapper[5039]: E0130 13:04:21.897897 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74b4d08-4bc5-44af-a5a8-4734678f5be0\\\",\\\"systemUUID\\\":\\\"fb9e5778-7292-4e17-81ad-f7094f787b74\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:21 crc kubenswrapper[5039]: E0130 13:04:21.898083 5039 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.899545 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.899571 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.899581 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.899615 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.899627 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:21Z","lastTransitionTime":"2026-01-30T13:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.909321 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.947211 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\
"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.985794 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.001734 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.001787 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.001799 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.001819 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.001834 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:22Z","lastTransitionTime":"2026-01-30T13:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.037068 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 14:08:28.700593011 +0000 UTC Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.093117 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.093185 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:04:22 crc kubenswrapper[5039]: E0130 13:04:22.093264 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:04:22 crc kubenswrapper[5039]: E0130 13:04:22.093505 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.104074 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.104132 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.104143 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.104163 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.104173 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:22Z","lastTransitionTime":"2026-01-30T13:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.207309 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.207349 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.207357 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.207373 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.207384 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:22Z","lastTransitionTime":"2026-01-30T13:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.295592 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" event={"ID":"6e82b591-e814-4c37-9cc0-79f59b317be2","Type":"ContainerStarted","Data":"015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9"} Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.303170 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" event={"ID":"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f","Type":"ContainerStarted","Data":"d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430"} Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.310547 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.310598 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.310615 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.310643 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.310662 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:22Z","lastTransitionTime":"2026-01-30T13:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.330103 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:22Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.343465 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:22Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.372283 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:22Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.385734 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:22Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.409593 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContain
erStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:22Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.413211 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.413305 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.413322 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.413346 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.413361 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:22Z","lastTransitionTime":"2026-01-30T13:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.424439 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:22Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.437713 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:22Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.447279 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:22Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.463557 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100
674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:22Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.478969 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/r
un/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:22Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.489645 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:22Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.501374 5039 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:22Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.516357 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:22Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.516825 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.516841 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.516849 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.516863 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.516872 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:22Z","lastTransitionTime":"2026-01-30T13:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.551402 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:22Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.588462 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:22Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.619312 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.619340 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.619349 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.619364 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.619375 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:22Z","lastTransitionTime":"2026-01-30T13:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.721544 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.721574 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.721582 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.721596 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.721605 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:22Z","lastTransitionTime":"2026-01-30T13:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.823175 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.823218 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.823231 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.823252 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.823265 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:22Z","lastTransitionTime":"2026-01-30T13:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.925859 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.925912 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.925934 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.925963 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.925994 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:22Z","lastTransitionTime":"2026-01-30T13:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.028530 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.028615 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.028641 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.028688 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.028708 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:23Z","lastTransitionTime":"2026-01-30T13:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.037761 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 07:53:49.528418015 +0000 UTC Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.092721 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:04:23 crc kubenswrapper[5039]: E0130 13:04:23.092893 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.130596 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.130632 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.130647 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.130669 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.130700 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:23Z","lastTransitionTime":"2026-01-30T13:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.232933 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.232990 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.233006 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.233064 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.233084 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:23Z","lastTransitionTime":"2026-01-30T13:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.309195 5039 generic.go:334] "Generic (PLEG): container finished" podID="6e82b591-e814-4c37-9cc0-79f59b317be2" containerID="015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9" exitCode=0 Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.309307 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" event={"ID":"6e82b591-e814-4c37-9cc0-79f59b317be2","Type":"ContainerDied","Data":"015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9"} Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.316787 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" event={"ID":"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f","Type":"ContainerStarted","Data":"e788e0aa057cab93d3b354ebb449af72859e2dcfe5b0e57777c66dde77eb689b"} Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.317054 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.317090 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.335502 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.335527 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.335540 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.335554 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.335564 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:23Z","lastTransitionTime":"2026-01-30T13:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.343491 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.358185 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.363707 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.363761 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.372349 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.387691 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.411492 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContain
erStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.428190 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.438364 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.438413 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.438429 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.438452 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.438467 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:23Z","lastTransitionTime":"2026-01-30T13:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.444244 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.452656 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.468880 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.481283 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mo
untPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.492693 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.503713 5039 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.518228 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.532364 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.547039 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.548384 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.548425 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.548436 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.548486 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.548498 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:23Z","lastTransitionTime":"2026-01-30T13:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.558425 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.570358 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.580241 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.591046 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.601176 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.612411 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.622806 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:04:23 crc kubenswrapper[5039]: E0130 13:04:23.623119 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:04:31.623094206 +0000 UTC m=+36.283775453 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.624197 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.641736 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e788e0aa057cab93d3b354ebb449af72859e2dcfe5b0e57777c66dde77eb689b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.650312 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.650341 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.650350 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.650364 5039 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.650374 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:23Z","lastTransitionTime":"2026-01-30T13:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.659443 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68
77441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.671819 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.685170 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.700588 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.717578 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.723826 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.723860 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.723882 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.723903 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:04:23 crc kubenswrapper[5039]: E0130 13:04:23.723982 5039 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 13:04:23 crc kubenswrapper[5039]: E0130 13:04:23.723994 5039 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 13:04:23 crc kubenswrapper[5039]: E0130 13:04:23.724027 5039 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 13:04:23 crc kubenswrapper[5039]: E0130 13:04:23.724037 5039 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for 
pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:04:23 crc kubenswrapper[5039]: E0130 13:04:23.724077 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 13:04:31.724055466 +0000 UTC m=+36.384736713 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 13:04:23 crc kubenswrapper[5039]: E0130 13:04:23.724088 5039 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 13:04:23 crc kubenswrapper[5039]: E0130 13:04:23.724102 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 13:04:31.724089637 +0000 UTC m=+36.384770884 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:04:23 crc kubenswrapper[5039]: E0130 13:04:23.724118 5039 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 13:04:23 crc kubenswrapper[5039]: E0130 13:04:23.724132 5039 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:04:23 crc kubenswrapper[5039]: E0130 13:04:23.724088 5039 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 13:04:23 crc kubenswrapper[5039]: E0130 13:04:23.724183 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 13:04:31.724163289 +0000 UTC m=+36.384844516 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:04:23 crc kubenswrapper[5039]: E0130 13:04:23.724281 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 13:04:31.724261761 +0000 UTC m=+36.384943038 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.745250 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{
\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.752980 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.753028 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.753039 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.753054 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.753065 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:23Z","lastTransitionTime":"2026-01-30T13:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.785032 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.855429 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.855464 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.855473 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.855487 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.855498 5039 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:23Z","lastTransitionTime":"2026-01-30T13:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.958808 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.958865 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.958876 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.959058 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.959071 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:23Z","lastTransitionTime":"2026-01-30T13:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.037959 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 07:33:14.678970421 +0000 UTC Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.061908 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.061954 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.061962 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.061977 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.061987 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:24Z","lastTransitionTime":"2026-01-30T13:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.092560 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.092616 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:04:24 crc kubenswrapper[5039]: E0130 13:04:24.092807 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:04:24 crc kubenswrapper[5039]: E0130 13:04:24.092899 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.165732 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.165800 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.165819 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.165851 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.165872 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:24Z","lastTransitionTime":"2026-01-30T13:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.268520 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.268556 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.268565 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.268579 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.268588 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:24Z","lastTransitionTime":"2026-01-30T13:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.322981 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" event={"ID":"6e82b591-e814-4c37-9cc0-79f59b317be2","Type":"ContainerStarted","Data":"9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9"} Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.323062 5039 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.337928 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:24Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.348615 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:24Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.359220 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:24Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.368106 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:24Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.371060 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.371099 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.371110 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.371129 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.371142 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:24Z","lastTransitionTime":"2026-01-30T13:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.395324 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:24Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.410512 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:24Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.437823 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:24Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.452855 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:24Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.473633 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e788e0aa057cab93d3b354ebb449af72859e2dcfe5b0e57777c66dde77eb689b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-
openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:24Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.475076 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.475105 5039 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.475115 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.475128 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.475137 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:24Z","lastTransitionTime":"2026-01-30T13:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.485545 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:24Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.499712 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:24Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.509773 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:24Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.526282 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"
/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:24Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.540092 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:24Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.562423 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:24Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.577629 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.577683 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.577697 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.577720 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.577731 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:24Z","lastTransitionTime":"2026-01-30T13:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.679893 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.679928 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.679940 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.679954 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.679963 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:24Z","lastTransitionTime":"2026-01-30T13:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.783374 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.784114 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.784321 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.784532 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.784730 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:24Z","lastTransitionTime":"2026-01-30T13:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.887249 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.887640 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.887798 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.887946 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.888129 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:24Z","lastTransitionTime":"2026-01-30T13:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.990262 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.990494 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.990575 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.990666 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.990740 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:24Z","lastTransitionTime":"2026-01-30T13:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.038855 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 04:55:58.257597154 +0000 UTC Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.092599 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:04:25 crc kubenswrapper[5039]: E0130 13:04:25.092692 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.092909 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.092938 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.092946 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.092962 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.092979 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:25Z","lastTransitionTime":"2026-01-30T13:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.195332 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.195375 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.195422 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.195441 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.195453 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:25Z","lastTransitionTime":"2026-01-30T13:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.298294 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.298707 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.298908 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.299143 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.299304 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:25Z","lastTransitionTime":"2026-01-30T13:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.331757 5039 generic.go:334] "Generic (PLEG): container finished" podID="6e82b591-e814-4c37-9cc0-79f59b317be2" containerID="9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9" exitCode=0 Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.331995 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" event={"ID":"6e82b591-e814-4c37-9cc0-79f59b317be2","Type":"ContainerDied","Data":"9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9"} Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.332268 5039 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.347064 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:25Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.365084 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:25Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.377516 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:25Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.395916 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\"
:\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:25Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.402043 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.402099 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.402114 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.402138 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.402153 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:25Z","lastTransitionTime":"2026-01-30T13:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.412827 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.
126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:25Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.430680 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:25Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.449000 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:25Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.472725 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:25Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.491906 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:25Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.502910 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:25Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.505514 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.505563 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.505582 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.505598 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.505609 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:25Z","lastTransitionTime":"2026-01-30T13:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.525829 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:25Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.542184 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:25Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.559380 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:25Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.579586 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:25Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.604630 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e788e0aa057cab93d3b354ebb449af72859e2dcfe5b0e57777c66dde77eb689b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-
openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:25Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.608566 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.608617 5039 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.608627 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.608645 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.608656 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:25Z","lastTransitionTime":"2026-01-30T13:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.608805 5039 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.609363 5039 scope.go:117] "RemoveContainer" containerID="6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527" Jan 30 13:04:25 crc kubenswrapper[5039]: E0130 13:04:25.609504 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.712571 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.712976 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.712987 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.713023 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.713044 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:25Z","lastTransitionTime":"2026-01-30T13:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.816415 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.816470 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.816482 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.816502 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.816518 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:25Z","lastTransitionTime":"2026-01-30T13:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.826603 5039 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.924551 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.924586 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.924595 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.924609 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.924619 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:25Z","lastTransitionTime":"2026-01-30T13:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.026854 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.026900 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.026912 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.026936 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.026949 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:26Z","lastTransitionTime":"2026-01-30T13:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.039096 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 08:28:53.522596284 +0000 UTC Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.093400 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:04:26 crc kubenswrapper[5039]: E0130 13:04:26.093528 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.093844 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:04:26 crc kubenswrapper[5039]: E0130 13:04:26.093921 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.112924 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.
126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.129357 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.130298 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.130339 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.130356 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.130376 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.130391 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:26Z","lastTransitionTime":"2026-01-30T13:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.144884 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.159134 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.170375 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.183107 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.199186 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.212574 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.227754 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.232893 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.232924 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.232933 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.232947 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.232957 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:26Z","lastTransitionTime":"2026-01-30T13:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.252724 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e788e0aa057cab93d3b354ebb449af72859e2dcf
e5b0e57777c66dde77eb689b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.273029 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f
95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.285236 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.294461 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.307819 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.322054 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.334890 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.334936 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.334947 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.334963 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.334974 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:26Z","lastTransitionTime":"2026-01-30T13:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.338065 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" event={"ID":"6e82b591-e814-4c37-9cc0-79f59b317be2","Type":"ContainerStarted","Data":"b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc"} Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.359262 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b
90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.371056 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.382567 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.393469 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.411482 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e788e0aa057cab93d3b354ebb449af72859e2dcfe5b0e57777c66dde77eb689b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-
openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.423771 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.437211 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.437247 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.437256 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.437271 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.437280 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:26Z","lastTransitionTime":"2026-01-30T13:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.438215 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.449255 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.470258 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9
8100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.487490 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.501528 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.513119 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.526222 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.540188 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.540261 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.540272 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.540312 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.540327 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:26Z","lastTransitionTime":"2026-01-30T13:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.547273 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.560843 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.643650 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.643706 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.643718 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.643734 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.643744 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:26Z","lastTransitionTime":"2026-01-30T13:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.747098 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.747178 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.747196 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.747227 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.747250 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:26Z","lastTransitionTime":"2026-01-30T13:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.850526 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.850592 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.850603 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.850622 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.850639 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:26Z","lastTransitionTime":"2026-01-30T13:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.953660 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.953719 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.953729 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.953749 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.953760 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:26Z","lastTransitionTime":"2026-01-30T13:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.039625 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 19:34:56.722625376 +0000 UTC Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.057304 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.057363 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.057377 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.057404 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.057419 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:27Z","lastTransitionTime":"2026-01-30T13:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.093392 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:04:27 crc kubenswrapper[5039]: E0130 13:04:27.093722 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.161270 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.161319 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.161332 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.161354 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.161369 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:27Z","lastTransitionTime":"2026-01-30T13:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.264089 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.264127 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.264136 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.264155 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.264174 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:27Z","lastTransitionTime":"2026-01-30T13:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.368466 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.368523 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.368539 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.368565 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.368582 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:27Z","lastTransitionTime":"2026-01-30T13:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.471330 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.471387 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.471396 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.471419 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.471442 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:27Z","lastTransitionTime":"2026-01-30T13:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.574457 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.574504 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.574513 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.574536 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.574553 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:27Z","lastTransitionTime":"2026-01-30T13:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.677120 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.677189 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.677199 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.677224 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.677237 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:27Z","lastTransitionTime":"2026-01-30T13:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.781338 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.781397 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.781411 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.781429 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.781446 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:27Z","lastTransitionTime":"2026-01-30T13:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.885262 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.885316 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.885331 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.885351 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.885367 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:27Z","lastTransitionTime":"2026-01-30T13:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.987891 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.987933 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.987958 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.987977 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.987991 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:27Z","lastTransitionTime":"2026-01-30T13:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.040565 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 05:13:56.408000439 +0000 UTC
Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.090725 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.090770 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.090794 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.090814 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.090829 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:28Z","lastTransitionTime":"2026-01-30T13:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.093132 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.093315 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 13:04:28 crc kubenswrapper[5039]: E0130 13:04:28.093488 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 13:04:28 crc kubenswrapper[5039]: E0130 13:04:28.093652 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.193337 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.193396 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.193413 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.193436 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.193452 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:28Z","lastTransitionTime":"2026-01-30T13:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.295303 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.295333 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.295341 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.295354 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.295363 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:28Z","lastTransitionTime":"2026-01-30T13:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.346661 5039 generic.go:334] "Generic (PLEG): container finished" podID="6e82b591-e814-4c37-9cc0-79f59b317be2" containerID="b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc" exitCode=0 Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.346710 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" event={"ID":"6e82b591-e814-4c37-9cc0-79f59b317be2","Type":"ContainerDied","Data":"b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc"} Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.362130 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:28Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.375226 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:28Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.387815 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:28Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.397637 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.397671 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.397681 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.397694 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.397703 5039 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:28Z","lastTransitionTime":"2026-01-30T13:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.401888 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:28Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.414609 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:28Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.427530 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:28Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.444895 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e788e0aa057cab93d3b354ebb449af72859e2dcfe5b0e57777c66dde77eb689b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-
openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:28Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.463495 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f
95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:28Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.475985 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:28Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.485202 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:28Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.497867 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:28Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.502720 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.502752 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.502761 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.502776 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.502787 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:28Z","lastTransitionTime":"2026-01-30T13:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.509882 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:28Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.521511 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:28Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.534320 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:28Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.544595 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:28Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.605300 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.605336 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.605350 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.605365 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.605375 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:28Z","lastTransitionTime":"2026-01-30T13:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.708571 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.708611 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.708622 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.708640 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.708651 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:28Z","lastTransitionTime":"2026-01-30T13:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.810302 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.810339 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.810350 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.810365 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.810376 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:28Z","lastTransitionTime":"2026-01-30T13:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.912925 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.912971 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.912983 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.913003 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.913038 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:28Z","lastTransitionTime":"2026-01-30T13:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.015660 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.015914 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.015980 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.016072 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.016145 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:29Z","lastTransitionTime":"2026-01-30T13:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.040755 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 11:19:01.523755546 +0000 UTC Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.092938 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:04:29 crc kubenswrapper[5039]: E0130 13:04:29.093145 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.119095 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.119134 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.119147 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.119165 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.119178 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:29Z","lastTransitionTime":"2026-01-30T13:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.221127 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.221389 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.221457 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.221517 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.221569 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:29Z","lastTransitionTime":"2026-01-30T13:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.324160 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.324219 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.324235 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.324260 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.324277 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:29Z","lastTransitionTime":"2026-01-30T13:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.353524 5039 generic.go:334] "Generic (PLEG): container finished" podID="6e82b591-e814-4c37-9cc0-79f59b317be2" containerID="be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a" exitCode=0 Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.353642 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" event={"ID":"6e82b591-e814-4c37-9cc0-79f59b317be2","Type":"ContainerDied","Data":"be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a"} Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.359875 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-87gqd_4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f/ovnkube-controller/0.log" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.366106 5039 generic.go:334] "Generic (PLEG): container finished" podID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerID="e788e0aa057cab93d3b354ebb449af72859e2dcfe5b0e57777c66dde77eb689b" exitCode=1 Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.366188 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" event={"ID":"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f","Type":"ContainerDied","Data":"e788e0aa057cab93d3b354ebb449af72859e2dcfe5b0e57777c66dde77eb689b"} Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.367608 5039 scope.go:117] "RemoveContainer" containerID="e788e0aa057cab93d3b354ebb449af72859e2dcfe5b0e57777c66dde77eb689b" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.375040 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:29Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.392873 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:29Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.407368 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:29Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.424251 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:29Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.429259 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.429313 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.429323 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.429344 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.429391 5039 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:29Z","lastTransitionTime":"2026-01-30T13:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.442230 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apis
erver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:29Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.457255 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:29Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.472855 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:29Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.493765 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e788e0aa057cab93d3b354ebb449af72859e2dcfe5b0e57777c66dde77eb689b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-
openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:29Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.519354 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f
95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:29Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.533661 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.533709 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.533718 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.533733 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.533745 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:29Z","lastTransitionTime":"2026-01-30T13:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.534325 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:29Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.547690 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:29Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.563850 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b70
c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:29Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.577786 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:29Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.591718 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cn
i/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:29Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.609442 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:29Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.625815 5039 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:29Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.635976 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.636045 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.636057 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.636072 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.636081 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:29Z","lastTransitionTime":"2026-01-30T13:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.641458 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.i
o/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:29Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.655989 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:29Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.673800 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:29Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.691915 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:29Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.703436 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:29Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.733700 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f
95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:29Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.738374 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.738450 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.738464 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.738508 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.738521 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:29Z","lastTransitionTime":"2026-01-30T13:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.754531 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:29Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.769152 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:29Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.786419 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:29Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.809362 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e788e0aa057cab93d3b354ebb449af72859e2dcfe5b0e57777c66dde77eb689b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e788e0aa057cab93d3b354ebb449af72859e2dcfe5b0e57777c66dde77eb689b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:04:28Z\\\",\\\"message\\\":\\\"/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 13:04:28.383563 6240 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 13:04:28.385785 6240 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 13:04:28.385837 6240 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 13:04:28.385864 6240 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 13:04:28.385872 6240 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0130 13:04:28.385885 6240 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 13:04:28.385887 6240 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 13:04:28.385891 6240 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 13:04:28.385907 6240 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 13:04:28.385912 6240 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 13:04:28.385920 6240 factory.go:656] Stopping watch factory\\\\nI0130 13:04:28.385923 6240 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 13:04:28.385926 6240 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:29Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.835396 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:29Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.840254 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.840524 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.840591 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.840656 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.840721 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:29Z","lastTransitionTime":"2026-01-30T13:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.856839 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:29Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.869708 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:29Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.887446 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b70
c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:29Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.943581 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.943647 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.943686 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.943726 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.943739 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:29Z","lastTransitionTime":"2026-01-30T13:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.041297 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 13:23:53.034189676 +0000 UTC Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.046500 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.046538 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.046550 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.046568 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.046581 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:30Z","lastTransitionTime":"2026-01-30T13:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.093699 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.093737 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:04:30 crc kubenswrapper[5039]: E0130 13:04:30.093918 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:04:30 crc kubenswrapper[5039]: E0130 13:04:30.094068 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.149091 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.149386 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.149397 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.149413 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.149423 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:30Z","lastTransitionTime":"2026-01-30T13:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.252668 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.252725 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.252746 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.252771 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.252785 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:30Z","lastTransitionTime":"2026-01-30T13:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.355768 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.356054 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.356145 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.356231 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.356333 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:30Z","lastTransitionTime":"2026-01-30T13:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.459446 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.459511 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.459523 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.459538 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.459547 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:30Z","lastTransitionTime":"2026-01-30T13:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.540213 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb"] Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.540634 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.543124 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.543124 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.561975 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.562043 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.562059 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.562078 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.562092 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:30Z","lastTransitionTime":"2026-01-30T13:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.567605 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e788e0aa057cab93d3b354ebb449af72859e2dcf
e5b0e57777c66dde77eb689b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e788e0aa057cab93d3b354ebb449af72859e2dcfe5b0e57777c66dde77eb689b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:04:28Z\\\",\\\"message\\\":\\\"/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 13:04:28.383563 6240 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 13:04:28.385785 6240 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 13:04:28.385837 6240 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 13:04:28.385864 6240 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 13:04:28.385872 6240 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0130 13:04:28.385885 6240 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 13:04:28.385887 6240 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 13:04:28.385891 6240 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 13:04:28.385907 6240 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 13:04:28.385912 6240 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 13:04:28.385920 6240 factory.go:656] Stopping watch factory\\\\nI0130 13:04:28.385923 6240 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 13:04:28.385926 6240 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.605404 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.628429 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.649090 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.664862 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.664931 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.664954 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.664981 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.665004 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:30Z","lastTransitionTime":"2026-01-30T13:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.670872 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.690486 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.709917 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.720980 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.722996 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/555be99e-85b7-4cd5-b799-af8a497e3d3f-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-dgrjb\" (UID: \"555be99e-85b7-4cd5-b799-af8a497e3d3f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.723045 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/555be99e-85b7-4cd5-b799-af8a497e3d3f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-dgrjb\" (UID: \"555be99e-85b7-4cd5-b799-af8a497e3d3f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.723067 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/555be99e-85b7-4cd5-b799-af8a497e3d3f-env-overrides\") pod \"ovnkube-control-plane-749d76644c-dgrjb\" (UID: \"555be99e-85b7-4cd5-b799-af8a497e3d3f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.723099 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8f5j\" (UniqueName: \"kubernetes.io/projected/555be99e-85b7-4cd5-b799-af8a497e3d3f-kube-api-access-j8f5j\") pod \"ovnkube-control-plane-749d76644c-dgrjb\" (UID: \"555be99e-85b7-4cd5-b799-af8a497e3d3f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.739864 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.751576 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"555be99e-85b7-4cd5-b799-af8a497e3d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dgrjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.767487 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.767529 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.767544 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.767563 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.767579 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:30Z","lastTransitionTime":"2026-01-30T13:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.768536 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.778325 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.789201 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.800931 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.811877 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.822763 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.824034 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/555be99e-85b7-4cd5-b799-af8a497e3d3f-env-overrides\") pod \"ovnkube-control-plane-749d76644c-dgrjb\" (UID: \"555be99e-85b7-4cd5-b799-af8a497e3d3f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.824088 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8f5j\" (UniqueName: \"kubernetes.io/projected/555be99e-85b7-4cd5-b799-af8a497e3d3f-kube-api-access-j8f5j\") pod \"ovnkube-control-plane-749d76644c-dgrjb\" (UID: \"555be99e-85b7-4cd5-b799-af8a497e3d3f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.824143 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/555be99e-85b7-4cd5-b799-af8a497e3d3f-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-dgrjb\" (UID: \"555be99e-85b7-4cd5-b799-af8a497e3d3f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.824175 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/555be99e-85b7-4cd5-b799-af8a497e3d3f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-dgrjb\" (UID: \"555be99e-85b7-4cd5-b799-af8a497e3d3f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.824643 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/555be99e-85b7-4cd5-b799-af8a497e3d3f-env-overrides\") pod \"ovnkube-control-plane-749d76644c-dgrjb\" (UID: \"555be99e-85b7-4cd5-b799-af8a497e3d3f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.824782 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/555be99e-85b7-4cd5-b799-af8a497e3d3f-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-dgrjb\" (UID: \"555be99e-85b7-4cd5-b799-af8a497e3d3f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.829555 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/555be99e-85b7-4cd5-b799-af8a497e3d3f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-dgrjb\" (UID: \"555be99e-85b7-4cd5-b799-af8a497e3d3f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.841783 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8f5j\" (UniqueName: \"kubernetes.io/projected/555be99e-85b7-4cd5-b799-af8a497e3d3f-kube-api-access-j8f5j\") pod \"ovnkube-control-plane-749d76644c-dgrjb\" (UID: \"555be99e-85b7-4cd5-b799-af8a497e3d3f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.861688 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.869994 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.870048 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.870058 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.870073 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.870083 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:30Z","lastTransitionTime":"2026-01-30T13:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:30 crc kubenswrapper[5039]: W0130 13:04:30.873223 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod555be99e_85b7_4cd5_b799_af8a497e3d3f.slice/crio-353bbb8d96c01fe8fb04cdaa372dd6a273ad3b8c299bfbc49c077e6bcdf7008b WatchSource:0}: Error finding container 353bbb8d96c01fe8fb04cdaa372dd6a273ad3b8c299bfbc49c077e6bcdf7008b: Status 404 returned error can't find the container with id 353bbb8d96c01fe8fb04cdaa372dd6a273ad3b8c299bfbc49c077e6bcdf7008b Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.972329 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.972490 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.972591 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.972735 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.972854 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:30Z","lastTransitionTime":"2026-01-30T13:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.041935 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 03:36:53.677433437 +0000 UTC Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.075713 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.075760 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.075770 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.075784 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.075793 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:31Z","lastTransitionTime":"2026-01-30T13:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.092965 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:04:31 crc kubenswrapper[5039]: E0130 13:04:31.093088 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.177836 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.177904 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.177921 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.177944 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.177962 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:31Z","lastTransitionTime":"2026-01-30T13:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.280457 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.280516 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.280532 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.280553 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.280564 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:31Z","lastTransitionTime":"2026-01-30T13:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.307583 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-5qzx7"] Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.308494 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:04:31 crc kubenswrapper[5039]: E0130 13:04:31.308621 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.346481 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034
c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:31Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.361757 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:31Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.377425 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" event={"ID":"6e82b591-e814-4c37-9cc0-79f59b317be2","Type":"ContainerStarted","Data":"3331439a416db5e62e9690b27e35551b83d77ddc684d831438944c6cfa029946"} Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.379103 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" event={"ID":"555be99e-85b7-4cd5-b799-af8a497e3d3f","Type":"ContainerStarted","Data":"353bbb8d96c01fe8fb04cdaa372dd6a273ad3b8c299bfbc49c077e6bcdf7008b"} Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.379419 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:31Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.383954 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.384006 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.384045 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.384068 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.384082 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:31Z","lastTransitionTime":"2026-01-30T13:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.399409 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:31Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.424230 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e788e0aa057cab93d3b354ebb449af72859e2dcfe5b0e57777c66dde77eb689b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e788e0aa057cab93d3b354ebb449af72859e2dcfe5b0e57777c66dde77eb689b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:04:28Z\\\",\\\"message\\\":\\\"/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 13:04:28.383563 6240 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 13:04:28.385785 6240 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 13:04:28.385837 6240 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 13:04:28.385864 6240 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 13:04:28.385872 6240 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0130 13:04:28.385885 6240 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 13:04:28.385887 6240 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 13:04:28.385891 6240 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 13:04:28.385907 6240 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 13:04:28.385912 6240 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 13:04:28.385920 6240 factory.go:656] Stopping watch factory\\\\nI0130 13:04:28.385923 6240 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 13:04:28.385926 6240 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:31Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.431400 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bc3a6c18-bb1a-48e2-bc11-51e442967f6e-metrics-certs\") pod \"network-metrics-daemon-5qzx7\" (UID: \"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\") " pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.431457 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq2fs\" (UniqueName: \"kubernetes.io/projected/bc3a6c18-bb1a-48e2-bc11-51e442967f6e-kube-api-access-dq2fs\") pod \"network-metrics-daemon-5qzx7\" (UID: \"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\") " pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.440720 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:31Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.456232 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:31Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.467055 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:31Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.480665 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\
"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7
983f1a784eb5513d65c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:31Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.490074 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.490117 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.490129 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.490189 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.490259 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:31Z","lastTransitionTime":"2026-01-30T13:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.496804 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\
"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:31Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.507307 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:31Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.520370 5039 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"555be99e-85b7-4cd5-b799-af8a497e3d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dgrjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:31Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.530485 5039 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qzx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qzx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:31Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.533255 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bc3a6c18-bb1a-48e2-bc11-51e442967f6e-metrics-certs\") pod \"network-metrics-daemon-5qzx7\" (UID: \"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\") " 
pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.533320 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dq2fs\" (UniqueName: \"kubernetes.io/projected/bc3a6c18-bb1a-48e2-bc11-51e442967f6e-kube-api-access-dq2fs\") pod \"network-metrics-daemon-5qzx7\" (UID: \"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\") " pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:04:31 crc kubenswrapper[5039]: E0130 13:04:31.533383 5039 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 13:04:31 crc kubenswrapper[5039]: E0130 13:04:31.533461 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bc3a6c18-bb1a-48e2-bc11-51e442967f6e-metrics-certs podName:bc3a6c18-bb1a-48e2-bc11-51e442967f6e nodeName:}" failed. No retries permitted until 2026-01-30 13:04:32.033440069 +0000 UTC m=+36.694121366 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bc3a6c18-bb1a-48e2-bc11-51e442967f6e-metrics-certs") pod "network-metrics-daemon-5qzx7" (UID: "bc3a6c18-bb1a-48e2-bc11-51e442967f6e") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.545408 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:31Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.547715 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dq2fs\" (UniqueName: \"kubernetes.io/projected/bc3a6c18-bb1a-48e2-bc11-51e442967f6e-kube-api-access-dq2fs\") pod \"network-metrics-daemon-5qzx7\" (UID: \"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\") " pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.555139 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2026-01-30T13:04:31Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.566159 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:31Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.574337 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:31Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.593053 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.593085 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.593096 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.593112 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.593122 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:31Z","lastTransitionTime":"2026-01-30T13:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.633884 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:04:31 crc kubenswrapper[5039]: E0130 13:04:31.634271 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:04:47.634222904 +0000 UTC m=+52.294904191 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.696373 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.696442 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.696464 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.696493 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.696510 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:31Z","lastTransitionTime":"2026-01-30T13:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
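Each "Node became not ready" entry from setters.go:603 prints a single node condition object. The struct below mirrors the JSON keys shown in these entries and decodes one of the logged payloads; a self-contained sketch, not kubelet's own types:

    package main

    import (
        "encoding/json"
        "fmt"
        "time"
    )

    // Field names and tags mirror the condition JSON logged above.
    type NodeCondition struct {
        Type               string    `json:"type"`
        Status             string    `json:"status"`
        LastHeartbeatTime  time.Time `json:"lastHeartbeatTime"`
        LastTransitionTime time.Time `json:"lastTransitionTime"`
        Reason             string    `json:"reason"`
        Message            string    `json:"message"`
    }

    func main() {
        // Payload abbreviated from a "Node became not ready" entry;
        // the RFC3339 timestamps unmarshal directly into time.Time.
        raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:31Z","lastTransitionTime":"2026-01-30T13:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready"}`
        var c NodeCondition
        if err := json.Unmarshal([]byte(raw), &c); err != nil {
            panic(err)
        }
        fmt.Printf("%s=%s since %s (%s)\n", c.Type, c.Status, c.LastTransitionTime, c.Reason)
    }
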
Has your network provider started?"} Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.734677 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.735308 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:04:31 crc kubenswrapper[5039]: E0130 13:04:31.734940 5039 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 13:04:31 crc kubenswrapper[5039]: E0130 13:04:31.735778 5039 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 13:04:31 crc kubenswrapper[5039]: E0130 13:04:31.735826 5039 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 13:04:31 crc kubenswrapper[5039]: E0130 13:04:31.735847 5039 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:04:31 crc kubenswrapper[5039]: E0130 13:04:31.735380 5039 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.735690 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:04:31 crc kubenswrapper[5039]: E0130 13:04:31.735922 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 13:04:47.735898113 +0000 UTC m=+52.396579370 (durationBeforeRetry 16s). 
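The durationBeforeRetry values in these mount/unmount failures grow from 500ms earlier in the log to 1s and then 16s, consistent with per-operation exponential backoff in kubelet's nestedpendingoperations. A minimal sketch of that doubling pattern; the 500ms start matches the logged values, but the cap is an assumption for the illustration, not a value read from kubelet source:

    package main

    import (
        "fmt"
        "time"
    )

    // Doubling backoff starting at 500ms. maxBackoff is an assumed cap
    // for this sketch, not taken from kubelet.
    func nextBackoff(prev time.Duration) time.Duration {
        const initial = 500 * time.Millisecond
        const maxBackoff = 2 * time.Minute // assumption
        if prev == 0 {
            return initial
        }
        next := prev * 2
        if next > maxBackoff {
            next = maxBackoff
        }
        return next
    }

    func main() {
        d := time.Duration(0)
        for i := 0; i < 7; i++ {
            d = nextBackoff(d)
            fmt.Println(d) // 500ms, 1s, 2s, 4s, 8s, 16s, 32s
        }
    }
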
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:04:31 crc kubenswrapper[5039]: E0130 13:04:31.736135 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 13:04:47.736097078 +0000 UTC m=+52.396778335 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.736172 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:04:31 crc kubenswrapper[5039]: E0130 13:04:31.736323 5039 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 13:04:31 crc kubenswrapper[5039]: E0130 13:04:31.735797 5039 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 13:04:31 crc kubenswrapper[5039]: E0130 13:04:31.737121 5039 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:04:31 crc kubenswrapper[5039]: E0130 13:04:31.736982 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 13:04:47.736960891 +0000 UTC m=+52.397642148 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 13:04:31 crc kubenswrapper[5039]: E0130 13:04:31.737541 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-01-30 13:04:47.737508505 +0000 UTC m=+52.398189812 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.799044 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.799389 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.799481 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.799563 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.799637 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:31Z","lastTransitionTime":"2026-01-30T13:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.902896 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.902966 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.902992 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.903103 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.903128 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:31Z","lastTransitionTime":"2026-01-30T13:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
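Every rejected status patch in this section fails for the same underlying reason: the webhook's serving certificate expired on 2025-08-24T17:21:41Z, long before the logged time of 2026-01-30T13:04:31Z. The NotBefore/NotAfter check that Go's TLS stack applies during the handshake can be reproduced against a PEM file with crypto/x509; a self-contained sketch (the certificate path is supplied by the caller and is hypothetical):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // Checks a PEM certificate's validity window the way the TLS
    // handshake does. Usage: go run main.go /path/to/cert.pem
    func main() {
        data, err := os.ReadFile(os.Args[1])
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        now := time.Now()
        switch {
        case now.Before(cert.NotBefore):
            fmt.Println("certificate is not yet valid")
        case now.After(cert.NotAfter):
            // The condition behind the "certificate has expired or is
            // not yet valid" errors above: current time > NotAfter.
            fmt.Printf("certificate expired at %s\n", cert.NotAfter)
        default:
            fmt.Printf("certificate valid until %s\n", cert.NotAfter)
        }
    }
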
Has your network provider started?"} Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.006582 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.006642 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.006665 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.006695 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.006716 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:32Z","lastTransitionTime":"2026-01-30T13:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.041074 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bc3a6c18-bb1a-48e2-bc11-51e442967f6e-metrics-certs\") pod \"network-metrics-daemon-5qzx7\" (UID: \"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\") " pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:04:32 crc kubenswrapper[5039]: E0130 13:04:32.041373 5039 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 13:04:32 crc kubenswrapper[5039]: E0130 13:04:32.041493 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bc3a6c18-bb1a-48e2-bc11-51e442967f6e-metrics-certs podName:bc3a6c18-bb1a-48e2-bc11-51e442967f6e nodeName:}" failed. No retries permitted until 2026-01-30 13:04:33.041460143 +0000 UTC m=+37.702141410 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bc3a6c18-bb1a-48e2-bc11-51e442967f6e-metrics-certs") pod "network-metrics-daemon-5qzx7" (UID: "bc3a6c18-bb1a-48e2-bc11-51e442967f6e") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.042425 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 13:24:35.938476158 +0000 UTC Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.093075 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:04:32 crc kubenswrapper[5039]: E0130 13:04:32.093268 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.093576 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:04:32 crc kubenswrapper[5039]: E0130 13:04:32.093872 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.109144 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.109194 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.109210 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.109229 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.109242 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:32Z","lastTransitionTime":"2026-01-30T13:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.211807 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.211858 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.211871 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.211889 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.211903 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:32Z","lastTransitionTime":"2026-01-30T13:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.238080 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.238173 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.238206 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.238234 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.238255 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:32Z","lastTransitionTime":"2026-01-30T13:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:32 crc kubenswrapper[5039]: E0130 13:04:32.254203 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74b4d08-4bc5-44af-a5a8-4734678f5be0\\\",\\\"systemUUID\\\":\\\"fb9e5778-7292-4e17-81ad-f7094f787b74\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:32Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.258511 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.258561 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.303706 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.303722 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.303734 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:32Z","lastTransitionTime":"2026-01-30T13:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:32 crc kubenswrapper[5039]: E0130 13:04:32.315363 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74b4d08-4bc5-44af-a5a8-4734678f5be0\\\",\\\"systemUUID\\\":\\\"fb9e5778-7292-4e17-81ad-f7094f787b74\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:32Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:32 crc kubenswrapper[5039]: E0130 13:04:32.315680 5039 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.317497 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.317531 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.317545 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.317568 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.317584 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:32Z","lastTransitionTime":"2026-01-30T13:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.388060 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-87gqd_4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f/ovnkube-controller/0.log" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.393063 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" event={"ID":"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f","Type":"ContainerStarted","Data":"106ce5ffbc8fa8996f3ea155970d221eee459cdc83b87d99c0c0800be831ebf6"} Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.395230 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" event={"ID":"555be99e-85b7-4cd5-b799-af8a497e3d3f","Type":"ContainerStarted","Data":"baf6527ce76b91a1da5463642354979b412ea735d27646ad10a89b582137849a"} Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.410387 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:32Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.420559 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.420602 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.420611 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.420626 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.420635 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:32Z","lastTransitionTime":"2026-01-30T13:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.425140 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:32Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.442257 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:32Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.458095 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:32Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.483681 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\
"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:32Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.500074 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:32Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.514873 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:32Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.523385 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.523449 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.523458 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.523559 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.523582 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:32Z","lastTransitionTime":"2026-01-30T13:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.531639 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:32Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.550596 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e788e0aa057cab93d3b354ebb449af72859e2dcfe5b0e57777c66dde77eb689b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e788e0aa057cab93d3b354ebb449af72859e2dcfe5b0e57777c66dde77eb689b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:04:28Z\\\",\\\"message\\\":\\\"/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 13:04:28.383563 6240 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 13:04:28.385785 6240 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 13:04:28.385837 6240 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 13:04:28.385864 6240 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 13:04:28.385872 6240 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0130 13:04:28.385885 6240 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 13:04:28.385887 6240 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 13:04:28.385891 6240 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 13:04:28.385907 6240 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 13:04:28.385912 6240 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 13:04:28.385920 6240 factory.go:656] Stopping watch factory\\\\nI0130 13:04:28.385923 6240 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 13:04:28.385926 6240 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:32Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.566116 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:32Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.579529 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:32Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.593398 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:32Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.615073 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3331439a416db5e62e9690b27e35551b83d77ddc684d831438944c6cfa029946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:32Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.625576 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.625607 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:32 crc 
kubenswrapper[5039]: I0130 13:04:32.625615 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.625630 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.625641 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:32Z","lastTransitionTime":"2026-01-30T13:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.628077 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"n
ame\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:32Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.642855 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:32Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.654041 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"555be99e-85b7-4cd5-b799-af8a497e3d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dgrjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:32Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.666108 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qzx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qzx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:32Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.727971 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.728026 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.728038 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.728053 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.728064 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:32Z","lastTransitionTime":"2026-01-30T13:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.831366 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.831396 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.831404 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.831421 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.831430 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:32Z","lastTransitionTime":"2026-01-30T13:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.933861 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.933899 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.933908 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.933940 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.933948 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:32Z","lastTransitionTime":"2026-01-30T13:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.036291 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.036378 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.036404 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.036436 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.036454 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:33Z","lastTransitionTime":"2026-01-30T13:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.042703 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 12:35:23.870010927 +0000 UTC Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.052836 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bc3a6c18-bb1a-48e2-bc11-51e442967f6e-metrics-certs\") pod \"network-metrics-daemon-5qzx7\" (UID: \"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\") " pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:04:33 crc kubenswrapper[5039]: E0130 13:04:33.052946 5039 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 13:04:33 crc kubenswrapper[5039]: E0130 13:04:33.053000 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bc3a6c18-bb1a-48e2-bc11-51e442967f6e-metrics-certs podName:bc3a6c18-bb1a-48e2-bc11-51e442967f6e nodeName:}" failed. No retries permitted until 2026-01-30 13:04:35.052981712 +0000 UTC m=+39.713662939 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bc3a6c18-bb1a-48e2-bc11-51e442967f6e-metrics-certs") pod "network-metrics-daemon-5qzx7" (UID: "bc3a6c18-bb1a-48e2-bc11-51e442967f6e") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.092530 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.092540 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:04:33 crc kubenswrapper[5039]: E0130 13:04:33.092654 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:04:33 crc kubenswrapper[5039]: E0130 13:04:33.092783 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.138425 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.138481 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.138491 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.138509 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.138519 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:33Z","lastTransitionTime":"2026-01-30T13:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.240697 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.240756 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.240771 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.240795 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.240811 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:33Z","lastTransitionTime":"2026-01-30T13:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.343802 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.343856 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.343868 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.343889 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.343902 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:33Z","lastTransitionTime":"2026-01-30T13:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.399886 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-87gqd_4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f/ovnkube-controller/1.log" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.400468 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-87gqd_4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f/ovnkube-controller/0.log" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.404165 5039 generic.go:334] "Generic (PLEG): container finished" podID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerID="106ce5ffbc8fa8996f3ea155970d221eee459cdc83b87d99c0c0800be831ebf6" exitCode=1 Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.404244 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" event={"ID":"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f","Type":"ContainerDied","Data":"106ce5ffbc8fa8996f3ea155970d221eee459cdc83b87d99c0c0800be831ebf6"} Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.404480 5039 scope.go:117] "RemoveContainer" containerID="e788e0aa057cab93d3b354ebb449af72859e2dcfe5b0e57777c66dde77eb689b" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.406191 5039 scope.go:117] "RemoveContainer" containerID="106ce5ffbc8fa8996f3ea155970d221eee459cdc83b87d99c0c0800be831ebf6" Jan 30 13:04:33 crc kubenswrapper[5039]: E0130 13:04:33.406692 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-87gqd_openshift-ovn-kubernetes(4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.408760 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" event={"ID":"555be99e-85b7-4cd5-b799-af8a497e3d3f","Type":"ContainerStarted","Data":"79790f23c209de69264dc434520854911adb68f6b6759d28718ed9b7c5a200c0"} Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.421263 5039 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:33Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.434578 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:33Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.446950 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.446991 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.447000 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.447029 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.447042 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:33Z","lastTransitionTime":"2026-01-30T13:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.452146 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:33Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.466496 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:33Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.491885 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f
95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:33Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.503821 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:33Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.513681 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:33Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.525303 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:33Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.541312 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106ce5ffbc8fa8996f3ea155970d221eee459cdc83b87d99c0c0800be831ebf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e788e0aa057cab93d3b354ebb449af72859e2dcfe5b0e57777c66dde77eb689b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:04:28Z\\\",\\\"message\\\":\\\"/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 13:04:28.383563 6240 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 13:04:28.385785 6240 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 13:04:28.385837 6240 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 13:04:28.385864 6240 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 13:04:28.385872 6240 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0130 13:04:28.385885 6240 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 13:04:28.385887 6240 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 13:04:28.385891 6240 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 13:04:28.385907 6240 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 13:04:28.385912 6240 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 13:04:28.385920 6240 factory.go:656] Stopping watch factory\\\\nI0130 13:04:28.385923 6240 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 13:04:28.385926 6240 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://106ce5ffbc8fa8996f3ea155970d221eee459cdc83b87d99c0c0800be831ebf6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:04:33Z\\\",\\\"message\\\":\\\"33.159241 6486 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-rp9bm\\\\nI0130 13:04:33.159088 6486 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-g4tnt after 0 failed attempt(s)\\\\nI0130 13:04:33.159262 6486 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-g4tnt\\\\nI0130 13:04:33.159173 6486 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-t2btn after 0 failed attempt(s)\\\\nI0130 13:04:33.159291 6486 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-t2btn\\\\nI0130 13:04:33.159190 6486 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-5qzx7\\\\nI0130 13:04:33.159307 6486 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-5qzx7 in node crc\\\\nI0130 13:04:33.159361 6486 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-5qzx7] creating logical port openshift-multus_network-metrics-daemon-5qzx7 for pod on switch crc\\\\nF0130 13:04:33.159143 6486 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\"
:\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:33Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.549283 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.549321 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.549329 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.549348 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.549360 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:33Z","lastTransitionTime":"2026-01-30T13:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.554346 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:33Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.564233 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:33Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.572488 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:33Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.588230 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3331439a416db5e62e9690b27e35551b83d77ddc684d831438944c6cfa029946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:33Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.598468 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:33Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.609587 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:33Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.621269 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"555be99e-85b7-4cd5-b799-af8a497e3d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:3
0Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dgrjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:33Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.631493 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qzx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qzx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:33Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.651031 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.651059 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.651067 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.651084 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.651096 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:33Z","lastTransitionTime":"2026-01-30T13:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.753780 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.753846 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.753866 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.753891 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.753910 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:33Z","lastTransitionTime":"2026-01-30T13:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.856956 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.857001 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.857027 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.857043 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.857054 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:33Z","lastTransitionTime":"2026-01-30T13:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.960628 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.960691 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.960711 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.960736 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.960754 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:33Z","lastTransitionTime":"2026-01-30T13:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.043496 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 09:48:04.218903925 +0000 UTC Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.063224 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.063274 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.063288 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.063308 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.063324 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:34Z","lastTransitionTime":"2026-01-30T13:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.092561 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.092649 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:04:34 crc kubenswrapper[5039]: E0130 13:04:34.092749 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:04:34 crc kubenswrapper[5039]: E0130 13:04:34.093048 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.166229 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.166309 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.166332 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.166361 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.166389 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:34Z","lastTransitionTime":"2026-01-30T13:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.269431 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.269820 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.269958 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.270139 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.270270 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:34Z","lastTransitionTime":"2026-01-30T13:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.373432 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.373791 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.374099 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.374326 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.374501 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:34Z","lastTransitionTime":"2026-01-30T13:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.414980 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-87gqd_4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f/ovnkube-controller/1.log" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.420295 5039 scope.go:117] "RemoveContainer" containerID="106ce5ffbc8fa8996f3ea155970d221eee459cdc83b87d99c0c0800be831ebf6" Jan 30 13:04:34 crc kubenswrapper[5039]: E0130 13:04:34.420710 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-87gqd_openshift-ovn-kubernetes(4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.439046 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:34Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.455258 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:34Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.467271 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:34Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.477282 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 
13:04:34.477323 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.477333 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.477351 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.477411 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:34Z","lastTransitionTime":"2026-01-30T13:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.488582 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3331439a416db5e62e9690b27e35551b83d77ddc684d831438944c6cfa029946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/
net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:34Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.498605 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qzx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qzx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:34Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.511150 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:34Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.521337 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:34Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.533278 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"555be99e-85b7-4cd5-b799-af8a497e3d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf6527ce76b91a1da5463642354979b412ea735d27646ad10a89b582137849a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79790f23c209de69264dc434520854911adb68f6b6759d28718ed9b7c5a200c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dgrjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:34Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.552108 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:34Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.569057 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:34Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.578930 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.578975 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.578986 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.579024 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.579038 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:34Z","lastTransitionTime":"2026-01-30T13:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.581418 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:34Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.593417 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:34Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.611539 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:34Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.624281 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:34Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.639412 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:34Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.654955 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:34Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.674485 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106ce5ffbc8fa8996f3ea155970d221eee459cdc83b87d99c0c0800be831ebf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://106ce5ffbc8fa8996f3ea155970d221eee459cdc83b87d99c0c0800be831ebf6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:04:33Z\\\",\\\"message\\\":\\\"33.159241 6486 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-rp9bm\\\\nI0130 13:04:33.159088 6486 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-g4tnt after 0 failed attempt(s)\\\\nI0130 13:04:33.159262 6486 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-g4tnt\\\\nI0130 13:04:33.159173 6486 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-t2btn after 0 failed attempt(s)\\\\nI0130 13:04:33.159291 6486 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-t2btn\\\\nI0130 13:04:33.159190 6486 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-5qzx7\\\\nI0130 13:04:33.159307 6486 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-5qzx7 in node crc\\\\nI0130 13:04:33.159361 6486 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-5qzx7] creating logical port openshift-multus_network-metrics-daemon-5qzx7 for pod on switch crc\\\\nF0130 13:04:33.159143 6486 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s 
restarting failed container=ovnkube-controller pod=ovnkube-node-87gqd_openshift-ovn-kubernetes(4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:34Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.681751 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.681801 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.681812 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.681829 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.681841 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:34Z","lastTransitionTime":"2026-01-30T13:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.688658 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:34Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.699620 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:34Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.709900 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:34Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.721986 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:34Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.737349 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:34Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.756381 5039 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106ce5ffbc8fa8996f3ea155970d221eee459cdc83b87d99c0c0800be831ebf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://106ce5ffbc8fa8996f3ea155970d221eee459cdc83b87d99c0c0800be831ebf6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:04:33Z\\\",\\\"message\\\":\\\"33.159241 6486 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-rp9bm\\\\nI0130 13:04:33.159088 6486 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-g4tnt after 0 failed attempt(s)\\\\nI0130 13:04:33.159262 6486 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-g4tnt\\\\nI0130 13:04:33.159173 6486 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-t2btn after 0 failed attempt(s)\\\\nI0130 13:04:33.159291 6486 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-t2btn\\\\nI0130 13:04:33.159190 6486 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-5qzx7\\\\nI0130 13:04:33.159307 6486 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-5qzx7 in node crc\\\\nI0130 13:04:33.159361 6486 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-5qzx7] creating logical port openshift-multus_network-metrics-daemon-5qzx7 for pod on switch crc\\\\nF0130 13:04:33.159143 6486 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-87gqd_openshift-ovn-kubernetes(4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:34Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.775921 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb34304077
9ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:34Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.784733 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.784776 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:34 crc 
kubenswrapper[5039]: I0130 13:04:34.784788 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.784806 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.784818 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:34Z","lastTransitionTime":"2026-01-30T13:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.788657 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:34Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.800121 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:34Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.817997 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3331439a416db5e62e9690b27e35551b83d77ddc684d831438944c6cfa029946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d9
07007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:34Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.830159 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:34Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.844099 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:34Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.858441 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:34Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.874220 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:34Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.887797 5039 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.887842 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.887858 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.887884 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.887900 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:34Z","lastTransitionTime":"2026-01-30T13:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.890044 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"555be99e-85b7-4cd5-b799-af8a497e3d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf6527ce76b91a1da5463642354979b412ea735d27646ad10a89b582137849a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79790f23c209de69264dc434520854911adb68f6b6759d28718ed9b7c5a200c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dgrjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:34Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.904075 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qzx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qzx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:34Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.916930 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:34Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.990330 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.990391 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.990408 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.990427 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.990438 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:34Z","lastTransitionTime":"2026-01-30T13:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.044563 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 03:11:00.227875319 +0000 UTC Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.073683 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bc3a6c18-bb1a-48e2-bc11-51e442967f6e-metrics-certs\") pod \"network-metrics-daemon-5qzx7\" (UID: \"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\") " pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:04:35 crc kubenswrapper[5039]: E0130 13:04:35.073987 5039 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 13:04:35 crc kubenswrapper[5039]: E0130 13:04:35.074201 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bc3a6c18-bb1a-48e2-bc11-51e442967f6e-metrics-certs podName:bc3a6c18-bb1a-48e2-bc11-51e442967f6e nodeName:}" failed. No retries permitted until 2026-01-30 13:04:39.074171257 +0000 UTC m=+43.734852534 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bc3a6c18-bb1a-48e2-bc11-51e442967f6e-metrics-certs") pod "network-metrics-daemon-5qzx7" (UID: "bc3a6c18-bb1a-48e2-bc11-51e442967f6e") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.092715 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.092748 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:04:35 crc kubenswrapper[5039]: E0130 13:04:35.092810 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:04:35 crc kubenswrapper[5039]: E0130 13:04:35.092931 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.093291 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.093319 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.093353 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.093368 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.093379 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:35Z","lastTransitionTime":"2026-01-30T13:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.196372 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.196420 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.196435 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.196455 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.196471 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:35Z","lastTransitionTime":"2026-01-30T13:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.303149 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.303250 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.303271 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.303297 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.303315 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:35Z","lastTransitionTime":"2026-01-30T13:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.405498 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.405749 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.405825 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.405938 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.406036 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:35Z","lastTransitionTime":"2026-01-30T13:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.508978 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.509369 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.509518 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.509683 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.509836 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:35Z","lastTransitionTime":"2026-01-30T13:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.613072 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.613137 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.613157 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.613184 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.613205 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:35Z","lastTransitionTime":"2026-01-30T13:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.717122 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.717165 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.717174 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.717219 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.717230 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:35Z","lastTransitionTime":"2026-01-30T13:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.819519 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.819823 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.819915 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.820100 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.820191 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:35Z","lastTransitionTime":"2026-01-30T13:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.924342 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.924766 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.924938 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.925146 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.925296 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:35Z","lastTransitionTime":"2026-01-30T13:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.028290 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.028348 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.028364 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.028387 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.028403 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:36Z","lastTransitionTime":"2026-01-30T13:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.045549 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 05:43:32.281999502 +0000 UTC Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.093076 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.093123 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:04:36 crc kubenswrapper[5039]: E0130 13:04:36.093766 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:04:36 crc kubenswrapper[5039]: E0130 13:04:36.093797 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.107481 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.126498 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106ce5ffbc8fa8996f3ea155970d221eee459cdc83b87d99c0c0800be831ebf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://106ce5ffbc8fa8996f3ea155970d221eee459cdc83b87d99c0c0800be831ebf6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:04:33Z\\\",\\\"message\\\":\\\"33.159241 6486 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-rp9bm\\\\nI0130 13:04:33.159088 6486 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-g4tnt after 0 failed attempt(s)\\\\nI0130 13:04:33.159262 6486 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-g4tnt\\\\nI0130 13:04:33.159173 6486 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-t2btn after 0 failed attempt(s)\\\\nI0130 13:04:33.159291 6486 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-t2btn\\\\nI0130 13:04:33.159190 6486 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-5qzx7\\\\nI0130 13:04:33.159307 6486 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-5qzx7 in node crc\\\\nI0130 13:04:33.159361 6486 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-5qzx7] creating logical port openshift-multus_network-metrics-daemon-5qzx7 for pod on switch crc\\\\nF0130 13:04:33.159143 6486 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-87gqd_openshift-ovn-kubernetes(4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.131358 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.131404 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.131416 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.131435 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.131448 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:36Z","lastTransitionTime":"2026-01-30T13:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.148807 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.167177 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.188177 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.204250 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3331439a416db5e62e9690b27e35551b83d77ddc684d831438944c6cfa029946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d9
07007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.216990 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.230787 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.233232 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.233259 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.233268 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.233285 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.233295 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:36Z","lastTransitionTime":"2026-01-30T13:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.241051 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.253141 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.265886 5039 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"555be99e-85b7-4cd5-b799-af8a497e3d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf6527ce76b91a1da5463642354979b412ea735d27646ad10a89b582137849a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79790f23c209de69264dc434520854911adb68f6b6759d28718ed9b7c5a200c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dgrjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.280374 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qzx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qzx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.295267 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.304966 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.316374 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.327478 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.335140 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.335202 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.335214 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.335228 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.335238 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:36Z","lastTransitionTime":"2026-01-30T13:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.339259 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.437242 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.437308 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.437325 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.437351 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.437370 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:36Z","lastTransitionTime":"2026-01-30T13:04:36Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.540252 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.540498 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.540569 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.540678 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.540776 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:36Z","lastTransitionTime":"2026-01-30T13:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.643164 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.643530 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.643667 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.643790 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.644133 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:36Z","lastTransitionTime":"2026-01-30T13:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.747498 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.747568 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.747594 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.747622 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.747645 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:36Z","lastTransitionTime":"2026-01-30T13:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.850411 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.850471 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.850488 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.850511 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.850526 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:36Z","lastTransitionTime":"2026-01-30T13:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.953843 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.954284 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.954469 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.954652 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.954822 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:36Z","lastTransitionTime":"2026-01-30T13:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.046971 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 07:05:45.773271114 +0000 UTC Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.057312 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.057364 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.057378 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.057403 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.057417 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:37Z","lastTransitionTime":"2026-01-30T13:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.092802 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.092883 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:04:37 crc kubenswrapper[5039]: E0130 13:04:37.092940 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:04:37 crc kubenswrapper[5039]: E0130 13:04:37.093074 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.160491 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.160571 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.160607 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.160636 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.160660 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:37Z","lastTransitionTime":"2026-01-30T13:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.263762 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.263819 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.263834 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.263859 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.263877 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:37Z","lastTransitionTime":"2026-01-30T13:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.366662 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.366709 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.366719 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.366740 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.366752 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:37Z","lastTransitionTime":"2026-01-30T13:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.469492 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.469560 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.469574 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.469600 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.469612 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:37Z","lastTransitionTime":"2026-01-30T13:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.573872 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.573934 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.573944 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.573974 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.573987 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:37Z","lastTransitionTime":"2026-01-30T13:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.677432 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.677488 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.677500 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.677524 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.677537 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:37Z","lastTransitionTime":"2026-01-30T13:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.780248 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.780305 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.780323 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.780347 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.780365 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:37Z","lastTransitionTime":"2026-01-30T13:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.883497 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.883560 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.883572 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.883591 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.883604 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:37Z","lastTransitionTime":"2026-01-30T13:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.986275 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.986317 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.986328 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.986346 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.986359 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:37Z","lastTransitionTime":"2026-01-30T13:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.047199 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 08:12:19.242259969 +0000 UTC Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.089430 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.089485 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.089495 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.089512 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.089524 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:38Z","lastTransitionTime":"2026-01-30T13:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.092811 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.092891 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:04:38 crc kubenswrapper[5039]: E0130 13:04:38.093005 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:04:38 crc kubenswrapper[5039]: E0130 13:04:38.093190 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.093797 5039 scope.go:117] "RemoveContainer" containerID="6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.191538 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.191922 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.192308 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.192505 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.192698 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:38Z","lastTransitionTime":"2026-01-30T13:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.295492 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.295547 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.295564 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.295607 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.295623 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:38Z","lastTransitionTime":"2026-01-30T13:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.398364 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.398438 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.398457 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.398483 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.398501 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:38Z","lastTransitionTime":"2026-01-30T13:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.435063 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.437268 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4c085b7dbceda7ee340ac27580ace8fe47ea9455d4a64de6260121be5e836693"} Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.437629 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.459906 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.476181 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.501542 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.501592 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.501606 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.501622 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.501637 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:38Z","lastTransitionTime":"2026-01-30T13:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.503601 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"555be99e-85b7-4cd5-b799-af8a497e3d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf6527ce76b91a1da5463642354979b412ea735d27646ad10a89b582137849a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79790f23c209de69264dc434520854911adb68f6b6759d28718ed9b7c5a200c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dgrjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:38Z is after 2025-08-24T17:21:41Z" Jan 30 
13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.517750 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qzx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qzx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.533764 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.548640 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.568937 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.584765 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.604553 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.604626 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.604648 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.604673 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.604690 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:38Z","lastTransitionTime":"2026-01-30T13:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.618106 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318f
e6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.636330 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c085b7dbceda7ee340ac27580ace8fe47ea9455d4a64de6260121be5e836693\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.652510 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.671454 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.694087 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106ce5ffbc8fa8996f3ea155970d221eee459cdc83b87d99c0c0800be831ebf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://106ce5ffbc8fa8996f3ea155970d221eee459cdc83b87d99c0c0800be831ebf6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:04:33Z\\\",\\\"message\\\":\\\"33.159241 6486 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-rp9bm\\\\nI0130 13:04:33.159088 6486 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-g4tnt after 0 failed attempt(s)\\\\nI0130 13:04:33.159262 6486 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-g4tnt\\\\nI0130 13:04:33.159173 6486 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-t2btn after 0 failed attempt(s)\\\\nI0130 13:04:33.159291 6486 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-t2btn\\\\nI0130 13:04:33.159190 6486 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-5qzx7\\\\nI0130 13:04:33.159307 6486 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-5qzx7 in node crc\\\\nI0130 13:04:33.159361 6486 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-5qzx7] creating logical port openshift-multus_network-metrics-daemon-5qzx7 for pod on switch crc\\\\nF0130 13:04:33.159143 6486 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s 
restarting failed container=ovnkube-controller pod=ovnkube-node-87gqd_openshift-ovn-kubernetes(4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.707326 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.707397 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.707414 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.707438 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.707490 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:38Z","lastTransitionTime":"2026-01-30T13:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.709287 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.728101 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.742518 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.757047 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3331439a416db5e62e9690b27e35551b83d77ddc684d831438944c6cfa029946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.810389 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.810447 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:38 crc 
kubenswrapper[5039]: I0130 13:04:38.810465 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.810488 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.810507 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:38Z","lastTransitionTime":"2026-01-30T13:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.913541 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.913576 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.913586 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.913600 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.913610 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:38Z","lastTransitionTime":"2026-01-30T13:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.016066 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.016304 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.016419 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.016544 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.016632 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:39Z","lastTransitionTime":"2026-01-30T13:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.048350 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 14:12:46.923545001 +0000 UTC Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.092635 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:04:39 crc kubenswrapper[5039]: E0130 13:04:39.092762 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.092631 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:04:39 crc kubenswrapper[5039]: E0130 13:04:39.093407 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.115567 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bc3a6c18-bb1a-48e2-bc11-51e442967f6e-metrics-certs\") pod \"network-metrics-daemon-5qzx7\" (UID: \"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\") " pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:04:39 crc kubenswrapper[5039]: E0130 13:04:39.115739 5039 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 13:04:39 crc kubenswrapper[5039]: E0130 13:04:39.115790 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bc3a6c18-bb1a-48e2-bc11-51e442967f6e-metrics-certs podName:bc3a6c18-bb1a-48e2-bc11-51e442967f6e nodeName:}" failed. No retries permitted until 2026-01-30 13:04:47.115776841 +0000 UTC m=+51.776458068 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bc3a6c18-bb1a-48e2-bc11-51e442967f6e-metrics-certs") pod "network-metrics-daemon-5qzx7" (UID: "bc3a6c18-bb1a-48e2-bc11-51e442967f6e") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.118880 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.119102 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.119180 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.119243 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.119331 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:39Z","lastTransitionTime":"2026-01-30T13:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.222577 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.222656 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.222676 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.222705 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.222723 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:39Z","lastTransitionTime":"2026-01-30T13:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.325588 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.325830 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.325889 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.325951 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.326027 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:39Z","lastTransitionTime":"2026-01-30T13:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.429262 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.429724 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.429910 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.430030 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.430117 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:39Z","lastTransitionTime":"2026-01-30T13:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.533447 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.533478 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.533487 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.533501 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.533510 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:39Z","lastTransitionTime":"2026-01-30T13:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.636008 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.636095 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.636116 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.636142 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.636160 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:39Z","lastTransitionTime":"2026-01-30T13:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.739573 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.739638 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.739661 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.739691 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.739712 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:39Z","lastTransitionTime":"2026-01-30T13:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.842841 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.842897 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.842912 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.842958 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.842971 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:39Z","lastTransitionTime":"2026-01-30T13:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.946331 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.946417 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.946437 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.946468 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.946487 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:39Z","lastTransitionTime":"2026-01-30T13:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.049134 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 00:48:44.27524056 +0000 UTC Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.049828 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.049894 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.049909 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.049932 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.049949 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:40Z","lastTransitionTime":"2026-01-30T13:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.093456 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.093558 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:04:40 crc kubenswrapper[5039]: E0130 13:04:40.093705 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:04:40 crc kubenswrapper[5039]: E0130 13:04:40.093829 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.152826 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.152890 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.152913 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.152944 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.152965 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:40Z","lastTransitionTime":"2026-01-30T13:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.256239 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.256321 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.256339 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.256364 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.256386 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:40Z","lastTransitionTime":"2026-01-30T13:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.359236 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.359305 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.359322 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.359341 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.359357 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:40Z","lastTransitionTime":"2026-01-30T13:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.461684 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.461730 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.461742 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.461766 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.461786 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:40Z","lastTransitionTime":"2026-01-30T13:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.564805 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.564879 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.564896 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.564919 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.564938 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:40Z","lastTransitionTime":"2026-01-30T13:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.668219 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.668283 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.668303 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.668327 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.668349 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:40Z","lastTransitionTime":"2026-01-30T13:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.771130 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.771210 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.771236 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.771272 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.771296 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:40Z","lastTransitionTime":"2026-01-30T13:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.874193 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.874265 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.874289 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.874317 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.874338 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:40Z","lastTransitionTime":"2026-01-30T13:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.977705 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.977755 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.977769 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.977791 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.977805 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:40Z","lastTransitionTime":"2026-01-30T13:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.049564 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 22:32:42.699716801 +0000 UTC Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.081239 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.081302 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.081319 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.081345 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.081363 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:41Z","lastTransitionTime":"2026-01-30T13:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.093598 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:04:41 crc kubenswrapper[5039]: E0130 13:04:41.093764 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.094279 5039 util.go:30] "No sandbox for pod can be found. 
Jan 30 13:04:41 crc kubenswrapper[5039]: E0130 13:04:41.094498 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.184158 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.184253 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.184294 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.184329 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.184352 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:41Z","lastTransitionTime":"2026-01-30T13:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.288075 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.288154 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.288170 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.288195 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.288217 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:41Z","lastTransitionTime":"2026-01-30T13:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.391431 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.391509 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.391527 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.391554 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.391573 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:41Z","lastTransitionTime":"2026-01-30T13:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.495083 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.495156 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.495175 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.495200 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.495218 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:41Z","lastTransitionTime":"2026-01-30T13:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.598127 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.598226 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.598265 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.598294 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.598316 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:41Z","lastTransitionTime":"2026-01-30T13:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.702209 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.702252 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.702265 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.702280 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.702290 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:41Z","lastTransitionTime":"2026-01-30T13:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.805117 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.805197 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.805226 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.805257 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.805277 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:41Z","lastTransitionTime":"2026-01-30T13:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.908440 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.908520 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.908544 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.908579 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.908597 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:41Z","lastTransitionTime":"2026-01-30T13:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.011813 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.011889 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.011909 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.011938 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.011956 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:42Z","lastTransitionTime":"2026-01-30T13:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.050451 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 12:06:39.688980455 +0000 UTC
Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.092546 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 13:04:42 crc kubenswrapper[5039]: E0130 13:04:42.092741 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.093385 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 13:04:42 crc kubenswrapper[5039]: E0130 13:04:42.093508 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
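Every NotReady condition and every skipped pod sync in this excerpt traces back to one gate: no CNI configuration file exists in /etc/kubernetes/cni/net.d/. A hedged Go sketch of that readiness check; the accepted extensions (.conf, .conflist, .json) follow common libcni convention and are assumptions here, not read from this log.

```go
// Hedged sketch: the readiness gate implied by the repeated
// "no CNI configuration file in /etc/kubernetes/cni/net.d/" message.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether confDir contains at least one file that
// looks like a CNI network configuration.
func hasCNIConfig(confDir string) (bool, error) {
	entries, err := os.ReadDir(confDir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // assumed libcni-style extensions
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
	if err != nil || !ok {
		// Mirrors the condition the kubelet keeps reporting above: the
		// runtime network stays NotReady until a config file appears.
		fmt.Println("container runtime network not ready: no CNI configuration file")
		return
	}
	fmt.Println("CNI configuration present; network can become Ready")
}
```

Until the network operator writes a config into that directory, the kubelet re-evaluates this gate on every sync, which is why the same condition repeats roughly every 100ms here.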
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.115973 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.116162 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.116257 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.116340 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.116368 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:42Z","lastTransitionTime":"2026-01-30T13:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.220446 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.220504 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.220520 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.220545 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.220562 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:42Z","lastTransitionTime":"2026-01-30T13:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.323102 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.323138 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.323146 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.323159 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.323170 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:42Z","lastTransitionTime":"2026-01-30T13:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.391573 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.391628 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.391648 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.391675 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.391692 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:42Z","lastTransitionTime":"2026-01-30T13:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:42 crc kubenswrapper[5039]: E0130 13:04:42.411121 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74b4d08-4bc5-44af-a5a8-4734678f5be0\\\",\\\"systemUUID\\\":\\\"fb9e5778-7292-4e17-81ad-f7094f787b74\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:42Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.417685 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.417769 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.417788 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.417815 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.417834 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:42Z","lastTransitionTime":"2026-01-30T13:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:42 crc kubenswrapper[5039]: E0130 13:04:42.440659 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74b4d08-4bc5-44af-a5a8-4734678f5be0\\\",\\\"systemUUID\\\":\\\"fb9e5778-7292-4e17-81ad-f7094f787b74\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:42Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.446634 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.446683 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
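The patch failure above is not network-plugin noise: the request dies in TLS verification because the webhook's serving certificate expired on 2025-08-24 while the node clock reads 2026-01-30. A small Go sketch of the x509 time check that produces exactly this class of "certificate has expired or is not yet valid" error; the certificate path in main is a hypothetical stand-in, not taken from the log.

```go
// Hedged sketch: the NotBefore/NotAfter time check behind the
// "x509: certificate has expired or is not yet valid" failure above.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkValidity parses a PEM-encoded certificate and rejects it if the
// given time falls outside its validity window.
func checkValidity(pemPath string, now time.Time) error {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return err
	}
	if now.Before(cert.NotBefore) {
		return fmt.Errorf("certificate not yet valid: current time %s is before %s", now, cert.NotBefore)
	}
	if now.After(cert.NotAfter) {
		return fmt.Errorf("certificate has expired: current time %s is after %s", now, cert.NotAfter)
	}
	return nil
}

func main() {
	// Hypothetical serving cert for the webhook at https://127.0.0.1:9743.
	if err := checkValidity("/tmp/webhook-serving.crt", time.Now()); err != nil {
		fmt.Println("tls verification would fail:", err)
	}
}
```

Because the check runs in the TLS handshake, the node-status patch fails before the node.network-node-identity.openshift.io webhook logic is ever reached; rotating or re-issuing that serving certificate is the precondition for any of these patches to succeed.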
event="NodeHasNoDiskPressure" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.446706 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.446732 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.446751 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:42Z","lastTransitionTime":"2026-01-30T13:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:42 crc kubenswrapper[5039]: E0130 13:04:42.462173 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74b4d08-4bc5-44af-a5a8-4734678f5be0\\\",\\\"systemUUID\\\":\\\"fb9e5778-7292-4e17-81ad-f7094f787b74\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:42Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.465928 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.465969 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
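The back-to-back "Error updating node status, will retry" entries at 13:04:42.411, .440, .462 and .483 reflect a bounded retry loop around the status patch. A sketch under the assumption of a fixed per-sync budget, modeled on kubelet's nodeStatusUpdateRetry of 5; the constant and the helper below are illustrative, not the kubelet's verbatim code.

```go
// Hedged sketch: a bounded retry around a node-status patch, matching
// the repeated "will retry" entries above. The budget of 5 is an
// assumption modeled on kubelet's nodeStatusUpdateRetry constant.
package main

import (
	"errors"
	"fmt"
)

const nodeStatusUpdateRetry = 5 // assumed retry budget per sync period

// updateNodeStatus attempts the patch a fixed number of times and gives
// up until the next sync period if every attempt fails.
func updateNodeStatus(patch func() error) error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := patch(); err != nil {
			fmt.Printf("Error updating node status, will retry: %v\n", err)
			continue
		}
		return nil
	}
	return errors.New("unable to update node status after retries")
}

func main() {
	// While the webhook cert is expired, every attempt fails identically,
	// producing the burst of near-duplicate errors seen in this log.
	err := updateNodeStatus(func() error {
		return errors.New("failed calling webhook: certificate has expired")
	})
	fmt.Println(err)
}
```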
event="NodeHasNoDiskPressure" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.465980 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.465995 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.466025 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:42Z","lastTransitionTime":"2026-01-30T13:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:42 crc kubenswrapper[5039]: E0130 13:04:42.483992 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74b4d08-4bc5-44af-a5a8-4734678f5be0\\\",\\\"systemUUID\\\":\\\"fb9e5778-7292-4e17-81ad-f7094f787b74\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:42Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.488373 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.488411 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.488424 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.488440 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.488452 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:42Z","lastTransitionTime":"2026-01-30T13:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:42 crc kubenswrapper[5039]: E0130 13:04:42.508467 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74b4d08-4bc5-44af-a5a8-4734678f5be0\\\",\\\"systemUUID\\\":\\\"fb9e5778-7292-4e17-81ad-f7094f787b74\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:42Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:42 crc kubenswrapper[5039]: E0130 13:04:42.508629 5039 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.510361 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.510414 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.510434 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.510461 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.510481 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:42Z","lastTransitionTime":"2026-01-30T13:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.613130 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.613175 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.613187 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.613205 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.613217 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:42Z","lastTransitionTime":"2026-01-30T13:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.716302 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.716365 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.716404 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.716437 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.716460 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:42Z","lastTransitionTime":"2026-01-30T13:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.819528 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.819652 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.819676 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.819699 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.819715 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:42Z","lastTransitionTime":"2026-01-30T13:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.922699 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.922747 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.922769 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.922790 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.922804 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:42Z","lastTransitionTime":"2026-01-30T13:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.026153 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.026233 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.026263 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.026295 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.026319 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:43Z","lastTransitionTime":"2026-01-30T13:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
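
Note: every node-status patch failure recorded above has the same root cause. The "node.network-node-identity.openshift.io" webhook at https://127.0.0.1:9743 presents a serving certificate whose NotAfter (2025-08-24T17:21:41Z) is long past the node's clock (2026-01-30T13:04:42Z), so the TLS handshake is rejected before the patch is ever evaluated. A minimal Go sketch of the validity-window check that produces this error shape; checkValidityWindow is a hypothetical helper, not kubelet or webhook code, and the NotBefore value below is illustrative:

package main

import (
	"crypto/x509"
	"fmt"
	"time"
)

// checkValidityWindow mirrors the time-window test that x509 chain
// verification applies: any time outside [NotBefore, NotAfter] fails.
func checkValidityWindow(cert *x509.Certificate, now time.Time) error {
	if now.Before(cert.NotBefore) {
		return fmt.Errorf("x509: certificate has expired or is not yet valid: current time %s is before %s",
			now.UTC().Format(time.RFC3339), cert.NotBefore.UTC().Format(time.RFC3339))
	}
	if now.After(cert.NotAfter) {
		return fmt.Errorf("x509: certificate has expired or is not yet valid: current time %s is after %s",
			now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	}
	return nil
}

func main() {
	// NotAfter matches the webhook error above; NotBefore is illustrative.
	cert := &x509.Certificate{
		NotBefore: time.Date(2025, 2, 24, 17, 21, 41, 0, time.UTC),
		NotAfter:  time.Date(2025, 8, 24, 17, 21, 41, 0, time.UTC),
	}
	now := time.Date(2026, 1, 30, 13, 4, 42, 0, time.UTC)
	fmt.Println(checkValidityWindow(cert, now)) // "...current time ... is after ..."
}
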
Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.026319 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:43Z","lastTransitionTime":"2026-01-30T13:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.050579 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 09:28:24.856829023 +0000 UTC
Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.092571 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 13:04:43 crc kubenswrapper[5039]: E0130 13:04:43.092762 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.092573 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7"
Jan 30 13:04:43 crc kubenswrapper[5039]: E0130 13:04:43.092996 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e"
Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.129398 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.129466 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.129478 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.129501 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
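
Note: the certificate_manager.go entries are a separate certificate from the failing webhook cert. They track the kubelet's own kubelet-serving certificate (expiring 2026-02-24 05:53:03 UTC) and print a freshly jittered rotation deadline on each pass, which is why the deadline jumps between dates such as 2025-11-09 and 2025-12-21; all of those deadlines already lie in the past relative to the node clock, so the manager likely re-attempts rotation every sync. A sketch of the scheme, assuming the client-go certificate manager's documented behavior of rotating at roughly 70-90% of the certificate lifetime; the function name and the illustrative NotBefore are assumptions, not the actual API:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// nextRotationDeadline picks a random point in the 70-90% band of the
// certificate's lifetime, so a fleet of kubelets does not rotate at once.
func nextRotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration((0.7 + 0.2*rand.Float64()) * float64(total))
	return notBefore.Add(jittered)
}

func main() {
	// NotAfter matches the log's expiration; NotBefore assumes a one-year cert.
	notBefore := time.Date(2025, 2, 24, 5, 53, 3, 0, time.UTC)
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)
	for i := 0; i < 3; i++ {
		// Each evaluation yields a different deadline, as in the log above.
		fmt.Println("rotation deadline is", nextRotationDeadline(notBefore, notAfter))
	}
}
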
Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.129516 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:43Z","lastTransitionTime":"2026-01-30T13:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.232283 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.232363 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.232382 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.232406 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.232424 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:43Z","lastTransitionTime":"2026-01-30T13:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.335580 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.335657 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.335681 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.335712 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.335737 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:43Z","lastTransitionTime":"2026-01-30T13:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.439500 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.439597 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.439622 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.439653 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.439687 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:43Z","lastTransitionTime":"2026-01-30T13:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.542709 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.542774 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.542800 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.542830 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.542854 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:43Z","lastTransitionTime":"2026-01-30T13:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.645229 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.645304 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.645323 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.645346 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.645363 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:43Z","lastTransitionTime":"2026-01-30T13:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.752541 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.752601 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.752614 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.752632 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.752940 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:43Z","lastTransitionTime":"2026-01-30T13:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.856776 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.856889 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.856905 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.856924 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.856936 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:43Z","lastTransitionTime":"2026-01-30T13:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.959242 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.959294 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.959308 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.959328 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.959343 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:43Z","lastTransitionTime":"2026-01-30T13:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.051530 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 23:06:56.193342591 +0000 UTC Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.062777 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.062830 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.062846 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.062869 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.062885 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:44Z","lastTransitionTime":"2026-01-30T13:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.093377 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.093438 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:04:44 crc kubenswrapper[5039]: E0130 13:04:44.093539 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:04:44 crc kubenswrapper[5039]: E0130 13:04:44.093650 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.165628 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.165656 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.165663 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.165677 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.165685 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:44Z","lastTransitionTime":"2026-01-30T13:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.267816 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.267858 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.267870 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.267887 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.267898 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:44Z","lastTransitionTime":"2026-01-30T13:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.370696 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.370768 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.370786 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.371222 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.371443 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:44Z","lastTransitionTime":"2026-01-30T13:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.474492 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.474591 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.474612 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.474638 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.474655 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:44Z","lastTransitionTime":"2026-01-30T13:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.576510 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.576582 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.576606 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.576635 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.576660 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:44Z","lastTransitionTime":"2026-01-30T13:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.680398 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.680557 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.680588 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.680614 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.680673 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:44Z","lastTransitionTime":"2026-01-30T13:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.783572 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.783640 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.783663 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.783691 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.783712 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:44Z","lastTransitionTime":"2026-01-30T13:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.886809 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.886884 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.886916 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.886933 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.886944 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:44Z","lastTransitionTime":"2026-01-30T13:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.989634 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.989695 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.989715 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.989737 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.989754 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:44Z","lastTransitionTime":"2026-01-30T13:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.052238 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 16:15:37.260658496 +0000 UTC
Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.091930 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.092004 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.092040 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.092056 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.092065 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:45Z","lastTransitionTime":"2026-01-30T13:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.092470 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.092470 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7"
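
Note: the readiness gate repeating through every sync loop above is the runtime reporting NetworkReady=false because nothing has yet written a CNI config into /etc/kubernetes/cni/net.d/ (the network provider would normally do this once its pods run, which the broken webhook is preventing). A self-contained sketch of the kind of directory probe behind the "no CNI configuration file" message; the helper name and extension list are assumptions, not the actual CRI-O/ocicni implementation:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether any CNI network config exists in dir;
// ocicni-style runtimes accept .conf, .conflist, and .json files.
func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	// Directory path taken from the log messages above.
	ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
	if err != nil || !ok {
		fmt.Println("container runtime network not ready: no CNI configuration file; err:", err)
		return
	}
	fmt.Println("NetworkReady=true")
}
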
pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:04:45 crc kubenswrapper[5039]: E0130 13:04:45.092561 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.194973 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.195059 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.195077 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.195103 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.195123 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:45Z","lastTransitionTime":"2026-01-30T13:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.298106 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.298171 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.298193 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.298279 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.298308 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:45Z","lastTransitionTime":"2026-01-30T13:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.402132 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.402205 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.402223 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.402438 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.402455 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:45Z","lastTransitionTime":"2026-01-30T13:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.504972 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.505061 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.505087 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.505115 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.505137 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:45Z","lastTransitionTime":"2026-01-30T13:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.609276 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.609359 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.609381 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.609412 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.609433 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:45Z","lastTransitionTime":"2026-01-30T13:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.712490 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.712555 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.712573 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.712597 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.712614 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:45Z","lastTransitionTime":"2026-01-30T13:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.815137 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.815201 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.815213 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.815236 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.815251 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:45Z","lastTransitionTime":"2026-01-30T13:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.924446 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.924544 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.924556 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.924577 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.924588 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:45Z","lastTransitionTime":"2026-01-30T13:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.992817 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.993658 5039 scope.go:117] "RemoveContainer" containerID="106ce5ffbc8fa8996f3ea155970d221eee459cdc83b87d99c0c0800be831ebf6" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.027731 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.027781 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.027801 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.027826 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.027843 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:46Z","lastTransitionTime":"2026-01-30T13:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.052957 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 00:32:12.517923207 +0000 UTC Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.093045 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:04:46 crc kubenswrapper[5039]: E0130 13:04:46.093176 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.093593 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:04:46 crc kubenswrapper[5039]: E0130 13:04:46.093821 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.110429 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.127907 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.130313 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.130367 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.130384 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.130409 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.130427 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:46Z","lastTransitionTime":"2026-01-30T13:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.144247 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.155662 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.177510 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f
95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.190857 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c085b7dbceda7ee340ac27580ace8fe47ea9455d4a64de6260121be5e836693\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.203869 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.215429 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.233639 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.233687 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.233700 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.233721 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.233736 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:46Z","lastTransitionTime":"2026-01-30T13:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.241128 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106ce5ffbc8fa8996f3ea155970d221eee459cdc
83b87d99c0c0800be831ebf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://106ce5ffbc8fa8996f3ea155970d221eee459cdc83b87d99c0c0800be831ebf6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:04:33Z\\\",\\\"message\\\":\\\"33.159241 6486 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-rp9bm\\\\nI0130 13:04:33.159088 6486 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-g4tnt after 0 failed attempt(s)\\\\nI0130 13:04:33.159262 6486 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-g4tnt\\\\nI0130 13:04:33.159173 6486 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-t2btn after 0 failed attempt(s)\\\\nI0130 13:04:33.159291 6486 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-t2btn\\\\nI0130 13:04:33.159190 6486 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-5qzx7\\\\nI0130 13:04:33.159307 6486 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-5qzx7 in node crc\\\\nI0130 13:04:33.159361 6486 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-5qzx7] creating logical port openshift-multus_network-metrics-daemon-5qzx7 for pod on switch crc\\\\nF0130 13:04:33.159143 6486 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-87gqd_openshift-ovn-kubernetes(4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.255563 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.274299 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.288686 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.306066 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3331439a416db5e62e9690b27e35551b83d77ddc684d831438944c6cfa029946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.324752 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.335733 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.335793 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.335811 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.335836 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.335894 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:46Z","lastTransitionTime":"2026-01-30T13:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.338980 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.355283 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"555be99e-85b7-4cd5-b799-af8a497e3d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf6527ce76b91a1da5463642354979b412ea735d27646ad10a89b582137849a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79790f23c209de69264dc434520854911adb68f6b6759d28718ed9b7c5a200c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dgrjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:46Z is after 2025-08-24T17:21:41Z" Jan 30 
13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.365287 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qzx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qzx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.438537 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.438571 5039 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.438581 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.438596 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.438605 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:46Z","lastTransitionTime":"2026-01-30T13:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.467685 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-87gqd_4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f/ovnkube-controller/1.log" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.470396 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" event={"ID":"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f","Type":"ContainerStarted","Data":"de2e647d69dda00d1e83757d0958d012b3c8f5f059259cdf63253fab780a01f2"} Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.470918 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.482810 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.493850 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.507882 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"555be99e-85b7-4cd5-b799-af8a497e3d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf6527ce76b91a1da5463642354979b412ea735d27646ad10a89b582137849a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79790f23c209de69264dc434520854911adb68f6b6759d28718ed9b7c5a200c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dgrjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.518160 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qzx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qzx7\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.531086 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.543605 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.543643 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.543655 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.543669 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.543683 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:46Z","lastTransitionTime":"2026-01-30T13:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.544246 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.556596 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.568002 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.585787 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\
"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.600158 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c085b7dbceda7ee340ac27580ace8fe47ea9455d4a64de6260121be5e836693\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.611583 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.628721 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.645711 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.645746 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.645756 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.645769 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.645779 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:46Z","lastTransitionTime":"2026-01-30T13:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.647078 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de2e647d69dda00d1e83757d0958d012b3c8f5f0
59259cdf63253fab780a01f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://106ce5ffbc8fa8996f3ea155970d221eee459cdc83b87d99c0c0800be831ebf6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:04:33Z\\\",\\\"message\\\":\\\"33.159241 6486 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-rp9bm\\\\nI0130 13:04:33.159088 6486 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-g4tnt after 0 failed attempt(s)\\\\nI0130 13:04:33.159262 6486 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-g4tnt\\\\nI0130 13:04:33.159173 6486 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-t2btn after 0 failed attempt(s)\\\\nI0130 13:04:33.159291 6486 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-t2btn\\\\nI0130 13:04:33.159190 6486 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-5qzx7\\\\nI0130 13:04:33.159307 6486 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-5qzx7 in node crc\\\\nI0130 13:04:33.159361 6486 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-5qzx7] creating logical port openshift-multus_network-metrics-daemon-5qzx7 for pod on switch crc\\\\nF0130 13:04:33.159143 6486 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.658267 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.703213 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.713145 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.729672 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3331439a416db5e62e9690b27e35551b83d77ddc684d831438944c6cfa029946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.748253 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.748287 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:46 crc 
kubenswrapper[5039]: I0130 13:04:46.748295 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.748309 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.748318 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:46Z","lastTransitionTime":"2026-01-30T13:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.850611 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.850657 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.850668 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.850687 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.850700 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:46Z","lastTransitionTime":"2026-01-30T13:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.953509 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.953544 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.953552 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.953565 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.953574 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:46Z","lastTransitionTime":"2026-01-30T13:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.054550 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 23:23:15.945959203 +0000 UTC
Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.056054 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.056107 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.056121 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.056141 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.056156 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:47Z","lastTransitionTime":"2026-01-30T13:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.092490 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.092541 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7"
Jan 30 13:04:47 crc kubenswrapper[5039]: E0130 13:04:47.092701 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 13:04:47 crc kubenswrapper[5039]: E0130 13:04:47.092882 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e"
Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.158992 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.159095 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.159116 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.159140 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.159157 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:47Z","lastTransitionTime":"2026-01-30T13:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.211279 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bc3a6c18-bb1a-48e2-bc11-51e442967f6e-metrics-certs\") pod \"network-metrics-daemon-5qzx7\" (UID: \"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\") " pod="openshift-multus/network-metrics-daemon-5qzx7"
Jan 30 13:04:47 crc kubenswrapper[5039]: E0130 13:04:47.211786 5039 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 30 13:04:47 crc kubenswrapper[5039]: E0130 13:04:47.211960 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bc3a6c18-bb1a-48e2-bc11-51e442967f6e-metrics-certs podName:bc3a6c18-bb1a-48e2-bc11-51e442967f6e nodeName:}" failed. No retries permitted until 2026-01-30 13:05:03.211937892 +0000 UTC m=+67.872619199 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bc3a6c18-bb1a-48e2-bc11-51e442967f6e-metrics-certs") pod "network-metrics-daemon-5qzx7" (UID: "bc3a6c18-bb1a-48e2-bc11-51e442967f6e") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.262029 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.262062 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.262070 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.262084 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.262097 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:47Z","lastTransitionTime":"2026-01-30T13:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.364913 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.364948 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.364957 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.364974 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.364984 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:47Z","lastTransitionTime":"2026-01-30T13:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.467760 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.468176 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.468260 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.468340 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.468415 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:47Z","lastTransitionTime":"2026-01-30T13:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.475057 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-87gqd_4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f/ovnkube-controller/2.log"
Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.475907 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-87gqd_4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f/ovnkube-controller/1.log"
Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.478695 5039 generic.go:334] "Generic (PLEG): container finished" podID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerID="de2e647d69dda00d1e83757d0958d012b3c8f5f059259cdf63253fab780a01f2" exitCode=1
Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.478741 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" event={"ID":"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f","Type":"ContainerDied","Data":"de2e647d69dda00d1e83757d0958d012b3c8f5f059259cdf63253fab780a01f2"}
Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.478786 5039 scope.go:117] "RemoveContainer" containerID="106ce5ffbc8fa8996f3ea155970d221eee459cdc83b87d99c0c0800be831ebf6"
Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.479699 5039 scope.go:117] "RemoveContainer" containerID="de2e647d69dda00d1e83757d0958d012b3c8f5f059259cdf63253fab780a01f2"
Jan 30 13:04:47 crc kubenswrapper[5039]: E0130 13:04:47.480423 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-87gqd_openshift-ovn-kubernetes(4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f"
Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.502166 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f
95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:47Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.519457 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c085b7dbceda7ee340ac27580ace8fe47ea9455d4a64de6260121be5e836693\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:47Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.531669 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:47Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.542786 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:47Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.563945 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de2e647d69dda00d1e83757d0958d012b3c8f5f059259cdf63253fab780a01f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://106ce5ffbc8fa8996f3ea155970d221eee459cdc83b87d99c0c0800be831ebf6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:04:33Z\\\",\\\"message\\\":\\\"33.159241 6486 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-rp9bm\\\\nI0130 13:04:33.159088 6486 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-g4tnt after 0 failed attempt(s)\\\\nI0130 13:04:33.159262 6486 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-g4tnt\\\\nI0130 13:04:33.159173 6486 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-t2btn after 0 failed attempt(s)\\\\nI0130 13:04:33.159291 6486 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-t2btn\\\\nI0130 13:04:33.159190 6486 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-5qzx7\\\\nI0130 13:04:33.159307 6486 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-5qzx7 in node crc\\\\nI0130 13:04:33.159361 6486 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-5qzx7] creating logical port openshift-multus_network-metrics-daemon-5qzx7 for pod on switch crc\\\\nF0130 13:04:33.159143 6486 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de2e647d69dda00d1e83757d0958d012b3c8f5f059259cdf63253fab780a01f2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:04:47Z\\\",\\\"message\\\":\\\"_cluster\\\\\\\", UUID:\\\\\\\"8b82f026-5975-4a1b-bb18-08d5d51147ec\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.38\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0130 13:04:47.086033 6712 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 13:04:47.086091 6712 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:47Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.571046 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.571189 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.571250 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.571353 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.571424 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:47Z","lastTransitionTime":"2026-01-30T13:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.578637 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:47Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.591853 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:47Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.602858 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:47Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.619847 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3331439a416db5e62e9690b27e35551b83d77ddc684d831438944c6cfa029946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:47Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.633531 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:47Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.645733 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:47Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.658204 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"555be99e-85b7-4cd5-b799-af8a497e3d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf6527ce76b91a1da5463642354979b412ea735d27646ad10a89b582137849a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79790f23c209de69264dc434520854911adb68f6b6759d28718ed9b7c5a200c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dgrjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:47Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.669550 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qzx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qzx7\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:47Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.674093 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.674442 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.674524 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.674623 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.674705 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:47Z","lastTransitionTime":"2026-01-30T13:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.684158 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:47Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.697599 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:47Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.712518 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:47Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.716624 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:04:47 crc kubenswrapper[5039]: E0130 13:04:47.716913 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:05:19.716893035 +0000 UTC m=+84.377574262 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.727891 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:47Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.777528 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.777572 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:47 crc 
kubenswrapper[5039]: I0130 13:04:47.777586 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.777605 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.777619 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:47Z","lastTransitionTime":"2026-01-30T13:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.817691 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.817753 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.817804 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.817845 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:04:47 crc kubenswrapper[5039]: E0130 13:04:47.817806 5039 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 13:04:47 crc kubenswrapper[5039]: E0130 13:04:47.817967 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 13:05:19.817948127 +0000 UTC m=+84.478629354 (durationBeforeRetry 32s). 
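The "No retries permitted until ... (durationBeforeRetry 32s)" suffix on these mount and unmount failures comes from the volume manager's per-operation exponential backoff: each consecutive failure doubles the wait before the next attempt, up to a cap, and by this point in the log the interval has grown to 32 seconds. A minimal sketch of that doubling-with-cap pattern, assuming an illustrative initial delay and cap rather than the kubelet's actual constants:

package main

import (
	"fmt"
	"time"
)

// backoff tracks the retry delay for one volume operation, doubling
// after each failure up to a fixed cap. This mirrors the pattern
// behind "durationBeforeRetry 32s"; the initial delay and cap below
// are assumptions for illustration, not the kubelet's real values.
type backoff struct {
	delay     time.Duration
	cap       time.Duration
	notBefore time.Time
}

func (b *backoff) fail(now time.Time) {
	if b.delay == 0 {
		b.delay = 500 * time.Millisecond
	} else if b.delay *= 2; b.delay > b.cap {
		b.delay = b.cap
	}
	b.notBefore = now.Add(b.delay)
}

func (b *backoff) allowed(now time.Time) bool { return !now.Before(b.notBefore) }

func main() {
	b := &backoff{cap: 32 * time.Second}
	now := time.Now()
	for i := 0; i < 8; i++ {
		b.fail(now)
		fmt.Printf("attempt %d failed; no retries permitted for %v\n", i+1, b.delay)
	}
	fmt.Println("retry allowed now?", b.allowed(now))
}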
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 13:04:47 crc kubenswrapper[5039]: E0130 13:04:47.817982 5039 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 13:04:47 crc kubenswrapper[5039]: E0130 13:04:47.818083 5039 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 13:04:47 crc kubenswrapper[5039]: E0130 13:04:47.818103 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 13:05:19.818074951 +0000 UTC m=+84.478756218 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 13:04:47 crc kubenswrapper[5039]: E0130 13:04:47.818110 5039 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 13:04:47 crc kubenswrapper[5039]: E0130 13:04:47.817894 5039 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 13:04:47 crc kubenswrapper[5039]: E0130 13:04:47.818136 5039 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:04:47 crc kubenswrapper[5039]: E0130 13:04:47.818155 5039 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 13:04:47 crc kubenswrapper[5039]: E0130 13:04:47.818166 5039 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:04:47 crc kubenswrapper[5039]: E0130 13:04:47.818226 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 13:05:19.818198154 +0000 UTC m=+84.478879381 (durationBeforeRetry 32s). 
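The object "openshift-network-console"/"networking-console-plugin" not registered failures are a different mode from the webhook rejections: the kubelet serves configmaps and secrets for volume setup from local managers that only answer for objects registered to pods they are actively tracking, so these lookups fail on the node without ever reaching the API server. A minimal sketch of that registration-gated lookup, with hypothetical names standing in for the kubelet's internals:

package main

import "fmt"

type key struct{ namespace, name string }

// cache serves only objects that were registered for some active pod;
// anything else gets the "not registered" error seen in the log.
// Register and GetConfigMap are hypothetical names for illustration,
// not the kubelet's real API.
type cache struct {
	registered map[key]bool
	data       map[key]string
}

func (c *cache) Register(ns, name string) { c.registered[key{ns, name}] = true }

func (c *cache) GetConfigMap(ns, name string) (string, error) {
	k := key{ns, name}
	if !c.registered[k] {
		return "", fmt.Errorf("object %q/%q not registered", ns, name)
	}
	return c.data[k], nil
}

func main() {
	c := &cache{registered: map[key]bool{}, data: map[key]string{}}
	if _, err := c.GetConfigMap("openshift-network-console", "networking-console-plugin"); err != nil {
		fmt.Println(err)
	}
}

Once the affected pods are registered again as the restarted kubelet settles, the same lookup would succeed, which is consistent with these mounts being retried on the 32-second backoff rather than failed permanently.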
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:04:47 crc kubenswrapper[5039]: E0130 13:04:47.818258 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 13:05:19.818251895 +0000 UTC m=+84.478933122 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.880712 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.880786 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.880811 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.880843 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.880870 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:47Z","lastTransitionTime":"2026-01-30T13:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.984264 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.984310 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.984323 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.984343 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.984355 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:47Z","lastTransitionTime":"2026-01-30T13:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
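The recurring "Node became not ready" condition spells out its own cause: the container runtime reports NetworkReady=false because no CNI configuration file exists in /etc/kubernetes/cni/net.d/ yet, and the kubelet folds that into the node's Ready=False condition until the network provider writes one. A minimal sketch of the discovery step, assuming the conventional rule that a .conf, .conflist, or .json file in the configuration directory counts as a network configuration:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether dir contains at least one CNI network
// configuration file. The extension set below is the conventional
// one; treat it as an assumption, not the runtime's exact behavior.
func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
	if err != nil || !ok {
		// This is the state the log reports: NetworkReady=false,
		// so the node's Ready condition stays False.
		fmt.Println("container runtime network not ready: no CNI configuration file")
		return
	}
	fmt.Println("NetworkReady=true")
}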
Has your network provider started?"} Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.031614 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.043286 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.046467 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.055259 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 07:20:41.484279269 +0000 UTC Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.058578 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.071101 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3331439a416db5e62e9690b27e35551b83d77ddc684d831438944c6cfa029946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.082793 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.086373 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.086430 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.086441 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.086458 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.086470 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:48Z","lastTransitionTime":"2026-01-30T13:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.092555 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.092646 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:04:48 crc kubenswrapper[5039]: E0130 13:04:48.092690 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:04:48 crc kubenswrapper[5039]: E0130 13:04:48.092721 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.093964 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io
\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.104964 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.115822 5039 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"555be99e-85b7-4cd5-b799-af8a497e3d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf6527ce76b91a1da5463642354979b412ea735d27646ad10a89b582137849a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79790f23c209de69264dc434520854911adb68f6b6759d28718ed9b7c5a200c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dgrjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.126356 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qzx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qzx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.138859 5039 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.150655 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.162121 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.177792 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z"
Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.188972 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.189023 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.189032 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.189051 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.189061 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:48Z","lastTransitionTime":"2026-01-30T13:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.191738 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c085b7dbceda7ee340ac27580ace8fe47ea9455d4a64de6260121be5e836693\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.203340 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.217420 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.238108 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de2e647d69dda00d1e83757d0958d012b3c8f5f059259cdf63253fab780a01f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://106ce5ffbc8fa8996f3ea155970d221eee459cdc83b87d99c0c0800be831ebf6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:04:33Z\\\",\\\"message\\\":\\\"33.159241 6486 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-rp9bm\\\\nI0130 13:04:33.159088 6486 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-g4tnt after 0 failed attempt(s)\\\\nI0130 13:04:33.159262 6486 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-g4tnt\\\\nI0130 13:04:33.159173 6486 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-t2btn after 0 failed attempt(s)\\\\nI0130 13:04:33.159291 6486 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-t2btn\\\\nI0130 13:04:33.159190 6486 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-5qzx7\\\\nI0130 13:04:33.159307 6486 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-5qzx7 in node crc\\\\nI0130 13:04:33.159361 6486 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-5qzx7] creating logical port openshift-multus_network-metrics-daemon-5qzx7 for pod on switch crc\\\\nF0130 13:04:33.159143 6486 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de2e647d69dda00d1e83757d0958d012b3c8f5f059259cdf63253fab780a01f2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:04:47Z\\\",\\\"message\\\":\\\"_cluster\\\\\\\", UUID:\\\\\\\"8b82f026-5975-4a1b-bb18-08d5d51147ec\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.38\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0130 13:04:47.086033 6712 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 13:04:47.086091 6712 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z"
Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.291236 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.291286 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.291295 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.291312 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.291322 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:48Z","lastTransitionTime":"2026-01-30T13:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.295232 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.394462 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.394525 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.394534 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.394549 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.394558 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:48Z","lastTransitionTime":"2026-01-30T13:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.483419 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-87gqd_4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f/ovnkube-controller/2.log"
Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.486737 5039 scope.go:117] "RemoveContainer" containerID="de2e647d69dda00d1e83757d0958d012b3c8f5f059259cdf63253fab780a01f2"
Jan 30 13:04:48 crc kubenswrapper[5039]: E0130 13:04:48.486929 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-87gqd_openshift-ovn-kubernetes(4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f"
Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.496299 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.496326 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.496338 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.496353 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.496373 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:48Z","lastTransitionTime":"2026-01-30T13:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.501956 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.521405 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.540335 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.552512 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.583845 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f
95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.599123 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.599183 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.599200 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.599223 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.599241 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:48Z","lastTransitionTime":"2026-01-30T13:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.607938 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c085b7dbceda7ee340ac27580ace8fe47ea9455d4a64de6260121be5e836693\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.629238 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.642988 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.670853 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de2e647d69dda00d1e83757d0958d012b3c8f5f059259cdf63253fab780a01f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de2e647d69dda00d1e83757d0958d012b3c8f5f059259cdf63253fab780a01f2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:04:47Z\\\",\\\"message\\\":\\\"_cluster\\\\\\\", UUID:\\\\\\\"8b82f026-5975-4a1b-bb18-08d5d51147ec\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.38\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0130 13:04:47.086033 6712 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 13:04:47.086091 6712 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable 
to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-87gqd_openshift-ovn-kubernetes(4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"rec
ursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.684417 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.698054 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.702242 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.702469 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.702621 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.702813 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.702959 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:48Z","lastTransitionTime":"2026-01-30T13:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.713614 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.731559 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3331439a416db5e62e9690b27e35551b83d77ddc684d831438944c6cfa029946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.748001 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ad7a684-cb57-41b4-a5bd-26b4c3b32c38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac7f015bf28a751f02a9af5def847fce3573fc9593e07b807c8c99bcb44b923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6571deb6e4d6c4f139455068196209014919a5b9cfa7694c876e5e228722fd72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b30c32411245c98f3cc9db85ae5be6604ca38828709b8fbe7f868c16c642c20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f314809377a112b82513c1b9e73d1b24878af618b3da4c7a95703c9774c8b36c\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f314809377a112b82513c1b9e73d1b24878af618b3da4c7a95703c9774c8b36c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.764277 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\
"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.778769 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.791603 5039 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"555be99e-85b7-4cd5-b799-af8a497e3d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf6527ce76b91a1da5463642354979b412ea735d27646ad10a89b582137849a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79790f23c209de69264dc434520854911adb68f6b6759d28718ed9b7c5a200c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dgrjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.804925 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qzx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qzx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.805740 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.805885 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.805911 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.805939 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.805955 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:48Z","lastTransitionTime":"2026-01-30T13:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.909549 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.909637 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.909656 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.909682 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.909694 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:48Z","lastTransitionTime":"2026-01-30T13:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.012857 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.012919 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.012939 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.012963 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.012980 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:49Z","lastTransitionTime":"2026-01-30T13:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.056916 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 20:22:41.819031928 +0000 UTC Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.092513 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.092533 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:04:49 crc kubenswrapper[5039]: E0130 13:04:49.092683 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:04:49 crc kubenswrapper[5039]: E0130 13:04:49.092771 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.115117 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.115160 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.115170 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.115187 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.115198 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:49Z","lastTransitionTime":"2026-01-30T13:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.217230 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.217539 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.217652 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.217757 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.217841 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:49Z","lastTransitionTime":"2026-01-30T13:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.321298 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.321340 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.321354 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.321378 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.321395 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:49Z","lastTransitionTime":"2026-01-30T13:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.424294 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.424338 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.424347 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.424362 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.424371 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:49Z","lastTransitionTime":"2026-01-30T13:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.527352 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.527395 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.527403 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.527419 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.527431 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:49Z","lastTransitionTime":"2026-01-30T13:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.631371 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.631452 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.631472 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.631499 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.631529 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:49Z","lastTransitionTime":"2026-01-30T13:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.734494 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.734563 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.734582 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.734667 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.734691 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:49Z","lastTransitionTime":"2026-01-30T13:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.838667 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.838728 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.838744 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.838765 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.838778 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:49Z","lastTransitionTime":"2026-01-30T13:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.942094 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.942232 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.942257 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.942287 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.942306 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:49Z","lastTransitionTime":"2026-01-30T13:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.044589 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.044621 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.044631 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.044648 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.044659 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:50Z","lastTransitionTime":"2026-01-30T13:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.057971 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 14:44:48.40042122 +0000 UTC Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.093577 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.093616 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:04:50 crc kubenswrapper[5039]: E0130 13:04:50.094126 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:04:50 crc kubenswrapper[5039]: E0130 13:04:50.093915 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.148317 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.148379 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.148397 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.148421 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.148439 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:50Z","lastTransitionTime":"2026-01-30T13:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.251962 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.252049 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.252067 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.252095 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.252113 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:50Z","lastTransitionTime":"2026-01-30T13:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.355326 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.355386 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.355405 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.355428 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.355446 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:50Z","lastTransitionTime":"2026-01-30T13:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.458762 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.458826 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.458842 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.458867 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.458885 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:50Z","lastTransitionTime":"2026-01-30T13:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.562694 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.562792 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.562815 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.563376 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.563598 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:50Z","lastTransitionTime":"2026-01-30T13:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.666680 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.666723 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.666734 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.666750 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.666762 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:50Z","lastTransitionTime":"2026-01-30T13:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.769634 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.769701 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.769717 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.769743 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.769800 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:50Z","lastTransitionTime":"2026-01-30T13:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.872996 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.873117 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.873136 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.873164 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.873181 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:50Z","lastTransitionTime":"2026-01-30T13:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.976295 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.976353 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.976370 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.976394 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.976413 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:50Z","lastTransitionTime":"2026-01-30T13:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.059360 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 14:59:02.035880732 +0000 UTC Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.079617 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.079697 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.079719 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.079747 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.079782 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:51Z","lastTransitionTime":"2026-01-30T13:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.093060 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.093167 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:04:51 crc kubenswrapper[5039]: E0130 13:04:51.093225 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:04:51 crc kubenswrapper[5039]: E0130 13:04:51.093405 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.183181 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.183225 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.183239 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.183257 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.183270 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:51Z","lastTransitionTime":"2026-01-30T13:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.285623 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.285656 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.285666 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.285680 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.285689 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:51Z","lastTransitionTime":"2026-01-30T13:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.388067 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.388107 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.388116 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.388131 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.388140 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:51Z","lastTransitionTime":"2026-01-30T13:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.491466 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.491523 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.491539 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.491562 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.491579 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:51Z","lastTransitionTime":"2026-01-30T13:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.568804 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.586896 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"555be99e-85b7-4cd5-b799-af8a497e3d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf6527ce76b91a1da5463642354979b412ea735d27646ad10a89b582137849a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79790f23c209de69264dc434520854911adb68f6b6759d28718ed9b7c5a200c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-30T13:04:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dgrjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.593647 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.593705 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.593719 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.593734 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.593745 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:51Z","lastTransitionTime":"2026-01-30T13:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.603616 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qzx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qzx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.615556 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ad7a684-cb57-41b4-a5bd-26b4c3b32c38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac7f015bf28a751f02a9af5def847fce3573fc9593e07b807c8c99bcb44b923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6571deb6e4d6c4f139455068196209014919a5b9cfa7694c876e5e228722fd72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b30c32411245c98f3cc9db85ae5be6604ca38828709b8fbe7f868c16c642c20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f314809377a112b82513c1b9e73d1b24878af618b3da4c7a95703c9774c8b36c\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f314809377a112b82513c1b9e73d1b24878af618b3da4c7a95703c9774c8b36c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.636408 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\
"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.652146 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.668173 5039 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.685227 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.695545 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.695614 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.695636 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.695661 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.695677 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:51Z","lastTransitionTime":"2026-01-30T13:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.703042 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.716913 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.752848 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de2e647d69dda00d1e83757d0958d012b3c8f5f059259cdf63253fab780a01f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de2e647d69dda00d1e83757d0958d012b3c8f5f059259cdf63253fab780a01f2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:04:47Z\\\",\\\"message\\\":\\\"_cluster\\\\\\\", UUID:\\\\\\\"8b82f026-5975-4a1b-bb18-08d5d51147ec\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.38\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0130 13:04:47.086033 6712 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 13:04:47.086091 6712 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-87gqd_openshift-ovn-kubernetes(4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.784000 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb34304077
9ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.798032 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.798086 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:51 crc 
kubenswrapper[5039]: I0130 13:04:51.798102 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.798122 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.798135 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:51Z","lastTransitionTime":"2026-01-30T13:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.803142 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c085b7dbceda7ee340ac27580ace8fe47ea9455d4a64de6260121be5e836693\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 
cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.818732 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.834663 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.846378 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.858233 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.867758 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.879385 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3331439a416db5e62e9690b27e35551b83d77ddc684d831438944c6cfa029946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.900083 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.900137 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:51 crc 
kubenswrapper[5039]: I0130 13:04:51.900150 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.900167 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.900188 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:51Z","lastTransitionTime":"2026-01-30T13:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.003821 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.003942 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.003963 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.003990 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.004043 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:52Z","lastTransitionTime":"2026-01-30T13:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.060600 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 15:14:41.288212953 +0000 UTC
Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.093233 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.093271 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 13:04:52 crc kubenswrapper[5039]: E0130 13:04:52.093597 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 13:04:52 crc kubenswrapper[5039]: E0130 13:04:52.093673 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.106143 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.106190 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.106205 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.106223 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.106237 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:52Z","lastTransitionTime":"2026-01-30T13:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.209349 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.209399 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.209409 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.209434 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.209449 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:52Z","lastTransitionTime":"2026-01-30T13:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.312432 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.312496 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.312515 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.312541 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.312561 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:52Z","lastTransitionTime":"2026-01-30T13:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.415959 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.415989 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.415997 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.416040 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.416057 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:52Z","lastTransitionTime":"2026-01-30T13:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.519125 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.519198 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.519224 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.519259 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.519282 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:52Z","lastTransitionTime":"2026-01-30T13:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.621714 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.621783 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.621806 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.621836 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.621854 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:52Z","lastTransitionTime":"2026-01-30T13:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.725224 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.725284 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.725302 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.725327 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.725345 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:52Z","lastTransitionTime":"2026-01-30T13:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.760310 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.760387 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.760411 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.760442 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.760465 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:52Z","lastTransitionTime":"2026-01-30T13:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 30 13:04:52 crc kubenswrapper[5039]: E0130 13:04:52.777891 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74b4d08-4bc5-44af-a5a8-4734678f5be0\\\",\\\"systemUUID\\\":\\\"fb9e5778-7292-4e17-81ad-f7094f787b74\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:52Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.784104 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.784197 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.784216 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.784242 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.784264 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:52Z","lastTransitionTime":"2026-01-30T13:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:52 crc kubenswrapper[5039]: E0130 13:04:52.801972 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74b4d08-4bc5-44af-a5a8-4734678f5be0\\\",\\\"systemUUID\\\":\\\"fb9e5778-7292-4e17-81ad-f7094f787b74\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:52Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.806620 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.806656 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.806670 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.806685 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.806697 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:52Z","lastTransitionTime":"2026-01-30T13:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:52 crc kubenswrapper[5039]: E0130 13:04:52.828361 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74b4d08-4bc5-44af-a5a8-4734678f5be0\\\",\\\"systemUUID\\\":\\\"fb9e5778-7292-4e17-81ad-f7094f787b74\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:52Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.833090 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.833155 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.833176 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.833201 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.833218 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:52Z","lastTransitionTime":"2026-01-30T13:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:52 crc kubenswrapper[5039]: E0130 13:04:52.850861 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74b4d08-4bc5-44af-a5a8-4734678f5be0\\\",\\\"systemUUID\\\":\\\"fb9e5778-7292-4e17-81ad-f7094f787b74\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:52Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.855280 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.855324 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.855338 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.855358 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.855372 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:52Z","lastTransitionTime":"2026-01-30T13:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:52 crc kubenswrapper[5039]: E0130 13:04:52.874623 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74b4d08-4bc5-44af-a5a8-4734678f5be0\\\",\\\"systemUUID\\\":\\\"fb9e5778-7292-4e17-81ad-f7094f787b74\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:52Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:52 crc kubenswrapper[5039]: E0130 13:04:52.874777 5039 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.877003 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.877059 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.877074 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.877091 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.877103 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:52Z","lastTransitionTime":"2026-01-30T13:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.979548 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.979615 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.979628 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.979645 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.979657 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:52Z","lastTransitionTime":"2026-01-30T13:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.061663 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 05:02:59.28171357 +0000 UTC Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.082224 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.082256 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.082266 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.082281 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.082291 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:53Z","lastTransitionTime":"2026-01-30T13:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.092772 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.092807 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:04:53 crc kubenswrapper[5039]: E0130 13:04:53.092912 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:04:53 crc kubenswrapper[5039]: E0130 13:04:53.092987 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.185770 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.185811 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.185827 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.185850 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.185867 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:53Z","lastTransitionTime":"2026-01-30T13:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.288662 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.288721 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.288730 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.288742 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.288750 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:53Z","lastTransitionTime":"2026-01-30T13:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.391529 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.391597 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.391620 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.391648 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.391665 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:53Z","lastTransitionTime":"2026-01-30T13:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.495070 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.495144 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.495166 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.495195 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.495213 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:53Z","lastTransitionTime":"2026-01-30T13:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.598067 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.598152 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.598176 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.598211 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.598233 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:53Z","lastTransitionTime":"2026-01-30T13:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.701192 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.701275 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.701315 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.701349 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.701372 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:53Z","lastTransitionTime":"2026-01-30T13:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.804572 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.804663 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.804682 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.804711 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.804729 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:53Z","lastTransitionTime":"2026-01-30T13:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.907866 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.907939 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.907958 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.907982 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.908000 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:53Z","lastTransitionTime":"2026-01-30T13:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.010500 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.010592 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.010610 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.010635 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.010654 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:54Z","lastTransitionTime":"2026-01-30T13:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.062826 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 08:33:39.292894172 +0000 UTC Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.093097 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.093255 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:04:54 crc kubenswrapper[5039]: E0130 13:04:54.093465 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:04:54 crc kubenswrapper[5039]: E0130 13:04:54.093655 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.112705 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.112796 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.112816 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.112870 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.112889 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:54Z","lastTransitionTime":"2026-01-30T13:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.215421 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.215462 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.215472 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.215487 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.215501 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:54Z","lastTransitionTime":"2026-01-30T13:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.319086 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.319156 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.319174 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.319200 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.319218 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:54Z","lastTransitionTime":"2026-01-30T13:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.422556 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.422682 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.422708 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.422737 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.422754 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:54Z","lastTransitionTime":"2026-01-30T13:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.530658 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.530800 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.530819 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.530842 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.530897 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:54Z","lastTransitionTime":"2026-01-30T13:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.634173 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.634293 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.634357 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.634395 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.634462 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:54Z","lastTransitionTime":"2026-01-30T13:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.737090 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.737137 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.737147 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.737162 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.737172 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:54Z","lastTransitionTime":"2026-01-30T13:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.840720 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.840780 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.840796 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.840819 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.840836 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:54Z","lastTransitionTime":"2026-01-30T13:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.943547 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.943588 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.943598 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.943615 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.943629 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:54Z","lastTransitionTime":"2026-01-30T13:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.046920 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.046980 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.047001 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.047147 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.047168 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:55Z","lastTransitionTime":"2026-01-30T13:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.063178 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 06:22:15.078501203 +0000 UTC Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.092827 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:04:55 crc kubenswrapper[5039]: E0130 13:04:55.092977 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.093131 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:04:55 crc kubenswrapper[5039]: E0130 13:04:55.093308 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.150400 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.150485 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.150521 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.150750 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.150778 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:55Z","lastTransitionTime":"2026-01-30T13:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.253475 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.253524 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.253539 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.253559 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.253578 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:55Z","lastTransitionTime":"2026-01-30T13:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.355923 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.356050 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.356065 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.356091 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.356104 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:55Z","lastTransitionTime":"2026-01-30T13:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.459245 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.459316 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.459329 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.459352 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.459368 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:55Z","lastTransitionTime":"2026-01-30T13:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.562860 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.562915 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.562930 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.562953 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.562970 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:55Z","lastTransitionTime":"2026-01-30T13:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.670229 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.670317 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.670341 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.670374 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.670398 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:55Z","lastTransitionTime":"2026-01-30T13:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.773673 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.773719 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.773730 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.773743 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.773754 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:55Z","lastTransitionTime":"2026-01-30T13:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.875829 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.875864 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.875875 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.875889 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.875898 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:55Z","lastTransitionTime":"2026-01-30T13:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.979469 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.979530 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.979546 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.979568 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.979582 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:55Z","lastTransitionTime":"2026-01-30T13:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.064349 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 04:32:39.337533905 +0000 UTC Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.082846 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.082883 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.082895 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.082912 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.082922 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:56Z","lastTransitionTime":"2026-01-30T13:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.092627 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.092652 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:04:56 crc kubenswrapper[5039]: E0130 13:04:56.092919 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:04:56 crc kubenswrapper[5039]: E0130 13:04:56.092797 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.116527 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3331439a416db5e62e9690b27e35551b83d77ddc684d831438944c6cfa029946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f
2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\
\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:56Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.129141 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:56Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.143416 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:56Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.154562 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:56Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.165515 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:56Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.177313 5039 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"555be99e-85b7-4cd5-b799-af8a497e3d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf6527ce76b91a1da5463642354979b412ea735d27646ad10a89b582137849a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79790f23c209de69264dc434520854911adb68f6b6759d28718ed9b7c5a200c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dgrjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:56Z is after 2025-08-24T17:21:41Z"
Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.185438 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.185839 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.185851 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.185865 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.185874 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:56Z","lastTransitionTime":"2026-01-30T13:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.187879 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qzx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qzx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:56Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.203815 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ad7a684-cb57-41b4-a5bd-26b4c3b32c38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac7f015bf28a751f02a9af5def847fce3573fc9593e07b807c8c99bcb44b923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6571deb6e4d6c4f139455068196209014919a5b9cfa7694c876e5e228722fd72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b30c32411245c98f3cc9db85ae5be6604ca38828709b8fbe7f868c16c642c20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f314809377a112b82513c1b9e73d1b24878af618b3da4c7a95703c9774c8b36c\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f314809377a112b82513c1b9e73d1b24878af618b3da4c7a95703c9774c8b36c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:56Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.224528 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\
"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:56Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.234117 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:56Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.248323 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:56Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.261713 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:56Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.275765 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:56Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.289071 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.289114 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.289132 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.289155 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.289171 5039 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:56Z","lastTransitionTime":"2026-01-30T13:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.296560 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:56Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.324215 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de2e647d69dda00d1e83757d0958d012b3c8f5f059259cdf63253fab780a01f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de2e647d69dda00d1e83757d0958d012b3c8f5f059259cdf63253fab780a01f2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:04:47Z\\\",\\\"message\\\":\\\"_cluster\\\\\\\", UUID:\\\\\\\"8b82f026-5975-4a1b-bb18-08d5d51147ec\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.38\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0130 13:04:47.086033 6712 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 13:04:47.086091 6712 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-87gqd_openshift-ovn-kubernetes(4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:56Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.350146 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb34304077
9ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:56Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.364234 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c085b7dbceda7ee340ac27580ace8fe47ea9455d4a64de6260121be5e836693\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:56Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.377396 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:56Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.390891 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.390949 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.390966 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.390990 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.391050 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:56Z","lastTransitionTime":"2026-01-30T13:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.492937 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.492978 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.492988 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.493003 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.493036 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:56Z","lastTransitionTime":"2026-01-30T13:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.595692 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.595771 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.595787 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.595816 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.595834 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:56Z","lastTransitionTime":"2026-01-30T13:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.699174 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.699232 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.699251 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.699277 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.699296 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:56Z","lastTransitionTime":"2026-01-30T13:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.802756 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.802819 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.802836 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.802861 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.802878 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:56Z","lastTransitionTime":"2026-01-30T13:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.906473 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.906540 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.906559 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.906586 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.906606 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:56Z","lastTransitionTime":"2026-01-30T13:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.009394 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.009462 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.009479 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.009504 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.009520 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:57Z","lastTransitionTime":"2026-01-30T13:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.064937 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 14:33:35.868496992 +0000 UTC Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.092945 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.093039 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:04:57 crc kubenswrapper[5039]: E0130 13:04:57.093111 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:04:57 crc kubenswrapper[5039]: E0130 13:04:57.093209 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.112367 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.112404 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.112424 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.112452 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.112469 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:57Z","lastTransitionTime":"2026-01-30T13:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.215970 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.216083 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.216102 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.216127 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.216152 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:57Z","lastTransitionTime":"2026-01-30T13:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.318860 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.318917 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.318933 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.318953 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.318967 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:57Z","lastTransitionTime":"2026-01-30T13:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.422172 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.422246 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.422270 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.422308 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.422331 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:57Z","lastTransitionTime":"2026-01-30T13:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.525420 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.525476 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.525493 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.525514 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.525531 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:57Z","lastTransitionTime":"2026-01-30T13:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.628937 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.628985 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.629000 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.629064 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.629080 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:57Z","lastTransitionTime":"2026-01-30T13:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.732236 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.732308 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.732326 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.732355 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.732373 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:57Z","lastTransitionTime":"2026-01-30T13:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.834957 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.834998 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.835043 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.835064 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.835076 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:57Z","lastTransitionTime":"2026-01-30T13:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.938325 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.938422 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.938440 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.938465 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.938483 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:57Z","lastTransitionTime":"2026-01-30T13:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.041070 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.041124 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.041139 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.041162 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.041180 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:58Z","lastTransitionTime":"2026-01-30T13:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.065539 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 00:57:18.515481299 +0000 UTC Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.093133 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.093134 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:04:58 crc kubenswrapper[5039]: E0130 13:04:58.093387 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:04:58 crc kubenswrapper[5039]: E0130 13:04:58.093510 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.144949 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.145055 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.145076 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.145101 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.145118 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:58Z","lastTransitionTime":"2026-01-30T13:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.248480 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.248579 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.248597 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.248650 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.248668 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:58Z","lastTransitionTime":"2026-01-30T13:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.352217 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.352271 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.352287 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.352311 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.352329 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:58Z","lastTransitionTime":"2026-01-30T13:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.454696 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.454784 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.454827 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.454863 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.454886 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:58Z","lastTransitionTime":"2026-01-30T13:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.557751 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.557802 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.557814 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.557832 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.557844 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:58Z","lastTransitionTime":"2026-01-30T13:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.660308 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.660361 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.660378 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.660400 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.660423 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:58Z","lastTransitionTime":"2026-01-30T13:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.764315 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.764379 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.764398 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.764424 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.764441 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:58Z","lastTransitionTime":"2026-01-30T13:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.867196 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.867248 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.867265 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.867286 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.867304 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:58Z","lastTransitionTime":"2026-01-30T13:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.969423 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.969466 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.969476 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.969493 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.969505 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:58Z","lastTransitionTime":"2026-01-30T13:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.066098 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 01:48:36.948949209 +0000 UTC Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.072779 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.072850 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.072873 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.072911 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.072935 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:59Z","lastTransitionTime":"2026-01-30T13:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.093464 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.093491 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:04:59 crc kubenswrapper[5039]: E0130 13:04:59.093648 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:04:59 crc kubenswrapper[5039]: E0130 13:04:59.093784 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.175714 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.175785 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.175808 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.175843 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.175869 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:59Z","lastTransitionTime":"2026-01-30T13:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.278792 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.278865 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.278886 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.278916 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.278942 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:59Z","lastTransitionTime":"2026-01-30T13:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.382351 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.382413 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.382431 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.382455 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.382473 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:59Z","lastTransitionTime":"2026-01-30T13:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.485537 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.485603 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.485718 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.485948 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.485968 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:59Z","lastTransitionTime":"2026-01-30T13:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.589502 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.589566 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.589678 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.589705 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.589723 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:59Z","lastTransitionTime":"2026-01-30T13:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.692312 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.692397 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.692416 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.692443 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.692462 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:59Z","lastTransitionTime":"2026-01-30T13:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.796260 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.796365 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.796385 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.796410 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.796470 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:59Z","lastTransitionTime":"2026-01-30T13:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.899285 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.899377 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.899401 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.899432 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.899457 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:59Z","lastTransitionTime":"2026-01-30T13:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.001561 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.001611 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.001626 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.001647 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.001662 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:00Z","lastTransitionTime":"2026-01-30T13:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.066700 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 08:33:17.674033685 +0000 UTC Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.093313 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.093398 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:05:00 crc kubenswrapper[5039]: E0130 13:05:00.093523 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:05:00 crc kubenswrapper[5039]: E0130 13:05:00.093773 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.104542 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.104610 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.104634 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.104660 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.104676 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:00Z","lastTransitionTime":"2026-01-30T13:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.208082 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.208164 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.208188 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.208212 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.208229 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:00Z","lastTransitionTime":"2026-01-30T13:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.311720 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.311818 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.311843 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.311882 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.311906 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:00Z","lastTransitionTime":"2026-01-30T13:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.415412 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.415477 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.415493 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.415513 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.415529 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:00Z","lastTransitionTime":"2026-01-30T13:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.524454 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.524593 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.524618 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.526297 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.526364 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:00Z","lastTransitionTime":"2026-01-30T13:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.630376 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.630465 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.630489 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.630526 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.630552 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:00Z","lastTransitionTime":"2026-01-30T13:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.734350 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.734405 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.734421 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.734445 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.734461 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:00Z","lastTransitionTime":"2026-01-30T13:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.837503 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.837564 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.837581 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.837606 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.837623 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:00Z","lastTransitionTime":"2026-01-30T13:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.940652 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.940766 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.940789 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.940813 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.940830 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:00Z","lastTransitionTime":"2026-01-30T13:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.043529 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.043590 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.043609 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.043638 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.043656 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:01Z","lastTransitionTime":"2026-01-30T13:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.066850 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 08:14:53.622889939 +0000 UTC Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.093475 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.093496 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:05:01 crc kubenswrapper[5039]: E0130 13:05:01.093665 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:05:01 crc kubenswrapper[5039]: E0130 13:05:01.094905 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.095620 5039 scope.go:117] "RemoveContainer" containerID="de2e647d69dda00d1e83757d0958d012b3c8f5f059259cdf63253fab780a01f2" Jan 30 13:05:01 crc kubenswrapper[5039]: E0130 13:05:01.095819 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-87gqd_openshift-ovn-kubernetes(4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.147114 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.147171 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.147183 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.147203 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.147216 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:01Z","lastTransitionTime":"2026-01-30T13:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.249725 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.249777 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.249790 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.249810 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.249827 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:01Z","lastTransitionTime":"2026-01-30T13:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.353144 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.353183 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.353194 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.353211 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.353224 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:01Z","lastTransitionTime":"2026-01-30T13:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.455352 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.455388 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.455398 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.455414 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.455426 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:01Z","lastTransitionTime":"2026-01-30T13:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.558319 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.558363 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.558374 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.558392 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.558405 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:01Z","lastTransitionTime":"2026-01-30T13:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.661200 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.661264 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.661281 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.661307 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.661324 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:01Z","lastTransitionTime":"2026-01-30T13:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.764729 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.764790 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.764800 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.764821 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.764832 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:01Z","lastTransitionTime":"2026-01-30T13:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.868087 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.868147 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.868164 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.868190 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.868208 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:01Z","lastTransitionTime":"2026-01-30T13:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.970844 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.970907 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.970923 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.970948 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.970966 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:01Z","lastTransitionTime":"2026-01-30T13:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.068134 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 12:52:47.13930108 +0000 UTC Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.073060 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.073102 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.073112 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.073128 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.073138 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:02Z","lastTransitionTime":"2026-01-30T13:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.093083 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.093188 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:05:02 crc kubenswrapper[5039]: E0130 13:05:02.093256 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:05:02 crc kubenswrapper[5039]: E0130 13:05:02.093375 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.175397 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.175631 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.175647 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.175666 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.175681 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:02Z","lastTransitionTime":"2026-01-30T13:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.277669 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.277732 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.277747 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.277765 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.277777 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:02Z","lastTransitionTime":"2026-01-30T13:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.380116 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.380173 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.380192 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.380216 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.380232 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:02Z","lastTransitionTime":"2026-01-30T13:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.483279 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.483335 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.483351 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.483402 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.483419 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:02Z","lastTransitionTime":"2026-01-30T13:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.586160 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.586205 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.586217 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.586235 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.586512 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:02Z","lastTransitionTime":"2026-01-30T13:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.689443 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.689511 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.689530 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.689555 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.689572 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:02Z","lastTransitionTime":"2026-01-30T13:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.792435 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.792479 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.792492 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.792512 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.792524 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:02Z","lastTransitionTime":"2026-01-30T13:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.895994 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.896119 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.896137 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.896161 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.896180 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:02Z","lastTransitionTime":"2026-01-30T13:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.998501 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.998547 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.998558 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.998574 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.998586 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:02Z","lastTransitionTime":"2026-01-30T13:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.069340 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 14:34:13.416930729 +0000 UTC Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.092680 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.092764 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:05:03 crc kubenswrapper[5039]: E0130 13:05:03.092823 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:05:03 crc kubenswrapper[5039]: E0130 13:05:03.092884 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.100855 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.100883 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.100894 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.100926 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.100936 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:03Z","lastTransitionTime":"2026-01-30T13:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.180084 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.180150 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.180172 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.180200 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.180218 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:03Z","lastTransitionTime":"2026-01-30T13:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:03 crc kubenswrapper[5039]: E0130 13:05:03.196681 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74b4d08-4bc5-44af-a5a8-4734678f5be0\\\",\\\"systemUUID\\\":\\\"fb9e5778-7292-4e17-81ad-f7094f787b74\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:03Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.201951 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.202005 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.202049 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.202073 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.202089 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:03Z","lastTransitionTime":"2026-01-30T13:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:03 crc kubenswrapper[5039]: E0130 13:05:03.217508 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74b4d08-4bc5-44af-a5a8-4734678f5be0\\\",\\\"systemUUID\\\":\\\"fb9e5778-7292-4e17-81ad-f7094f787b74\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:03Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.221105 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.221146 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.221157 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.221176 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.221188 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:03Z","lastTransitionTime":"2026-01-30T13:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:03 crc kubenswrapper[5039]: E0130 13:05:03.238111 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74b4d08-4bc5-44af-a5a8-4734678f5be0\\\",\\\"systemUUID\\\":\\\"fb9e5778-7292-4e17-81ad-f7094f787b74\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:03Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.241527 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.241595 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.241613 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.241641 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.241661 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:03Z","lastTransitionTime":"2026-01-30T13:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:03 crc kubenswrapper[5039]: E0130 13:05:03.257288 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74b4d08-4bc5-44af-a5a8-4734678f5be0\\\",\\\"systemUUID\\\":\\\"fb9e5778-7292-4e17-81ad-f7094f787b74\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:03Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.260916 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.260968 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.260985 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.261009 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.261065 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:03Z","lastTransitionTime":"2026-01-30T13:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:03 crc kubenswrapper[5039]: E0130 13:05:03.276666 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74b4d08-4bc5-44af-a5a8-4734678f5be0\\\",\\\"systemUUID\\\":\\\"fb9e5778-7292-4e17-81ad-f7094f787b74\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:03Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:03 crc kubenswrapper[5039]: E0130 13:05:03.276949 5039 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.279434 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.279474 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.279485 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.279504 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.279519 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:03Z","lastTransitionTime":"2026-01-30T13:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.294984 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bc3a6c18-bb1a-48e2-bc11-51e442967f6e-metrics-certs\") pod \"network-metrics-daemon-5qzx7\" (UID: \"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\") " pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:05:03 crc kubenswrapper[5039]: E0130 13:05:03.295224 5039 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 13:05:03 crc kubenswrapper[5039]: E0130 13:05:03.295354 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bc3a6c18-bb1a-48e2-bc11-51e442967f6e-metrics-certs podName:bc3a6c18-bb1a-48e2-bc11-51e442967f6e nodeName:}" failed. No retries permitted until 2026-01-30 13:05:35.295325141 +0000 UTC m=+99.956006408 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bc3a6c18-bb1a-48e2-bc11-51e442967f6e-metrics-certs") pod "network-metrics-daemon-5qzx7" (UID: "bc3a6c18-bb1a-48e2-bc11-51e442967f6e") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.381676 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.381714 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.381725 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.381744 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.381757 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:03Z","lastTransitionTime":"2026-01-30T13:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.484979 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.485033 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.485043 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.485058 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.485068 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:03Z","lastTransitionTime":"2026-01-30T13:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.586988 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.587081 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.587107 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.587137 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.587158 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:03Z","lastTransitionTime":"2026-01-30T13:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.688778 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.688842 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.688864 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.688891 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.688911 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:03Z","lastTransitionTime":"2026-01-30T13:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.791411 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.791444 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.791455 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.791468 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.791478 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:03Z","lastTransitionTime":"2026-01-30T13:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.893130 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.893163 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.893171 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.893186 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.893194 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:03Z","lastTransitionTime":"2026-01-30T13:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.995717 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.995761 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.995772 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.995790 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.995801 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:03Z","lastTransitionTime":"2026-01-30T13:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.069638 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 21:02:10.586460142 +0000 UTC Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.093053 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.093053 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:05:04 crc kubenswrapper[5039]: E0130 13:05:04.093232 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:05:04 crc kubenswrapper[5039]: E0130 13:05:04.093289 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.097185 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.097216 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.097223 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.097234 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.097243 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:04Z","lastTransitionTime":"2026-01-30T13:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.199113 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.199143 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.199152 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.199164 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.199173 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:04Z","lastTransitionTime":"2026-01-30T13:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.302031 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.302100 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.302110 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.302142 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.302154 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:04Z","lastTransitionTime":"2026-01-30T13:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.405490 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.405626 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.405645 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.405673 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.405699 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:04Z","lastTransitionTime":"2026-01-30T13:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.509305 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.509365 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.509379 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.509419 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.509430 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:04Z","lastTransitionTime":"2026-01-30T13:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.611313 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.611556 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.611648 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.611745 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.611839 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:04Z","lastTransitionTime":"2026-01-30T13:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.714567 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.714618 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.714631 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.714652 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.714662 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:04Z","lastTransitionTime":"2026-01-30T13:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.817311 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.817531 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.817632 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.817699 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.817761 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:04Z","lastTransitionTime":"2026-01-30T13:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.921231 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.921293 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.921310 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.921333 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.921352 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:04Z","lastTransitionTime":"2026-01-30T13:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.023396 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.023816 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.023983 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.024196 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.024345 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:05Z","lastTransitionTime":"2026-01-30T13:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.070568 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 10:32:11.585434818 +0000 UTC Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.092971 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.093070 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:05:05 crc kubenswrapper[5039]: E0130 13:05:05.093548 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:05:05 crc kubenswrapper[5039]: E0130 13:05:05.093546 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.126928 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.127062 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.127083 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.127111 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.127128 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:05Z","lastTransitionTime":"2026-01-30T13:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.229350 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.229409 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.229425 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.229451 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.229474 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:05Z","lastTransitionTime":"2026-01-30T13:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.332166 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.332231 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.332241 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.332266 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.332280 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:05Z","lastTransitionTime":"2026-01-30T13:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.435397 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.435773 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.436174 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.436510 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.436846 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:05Z","lastTransitionTime":"2026-01-30T13:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.540335 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.540393 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.540403 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.540424 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.540437 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:05Z","lastTransitionTime":"2026-01-30T13:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.643241 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.643288 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.643299 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.643316 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.643343 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:05Z","lastTransitionTime":"2026-01-30T13:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.745819 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.745881 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.746107 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.746132 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.746153 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:05Z","lastTransitionTime":"2026-01-30T13:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.849469 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.849748 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.849899 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.850054 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.850174 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:05Z","lastTransitionTime":"2026-01-30T13:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.953548 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.953860 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.953958 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.954064 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.954190 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:05Z","lastTransitionTime":"2026-01-30T13:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.057741 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.057812 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.057830 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.057853 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.057869 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:06Z","lastTransitionTime":"2026-01-30T13:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.071624 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 13:52:08.489492602 +0000 UTC Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.093071 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.093116 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:05:06 crc kubenswrapper[5039]: E0130 13:05:06.093193 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:05:06 crc kubenswrapper[5039]: E0130 13:05:06.094162 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.123379 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f
95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:06Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.142878 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df
952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c085b7dbceda7ee340ac27580ace8fe47ea9455d4a64de6260121be5e836693\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:06Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.158244 5039 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5
a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:06Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.159714 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.159737 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.159746 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.159760 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.159768 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:06Z","lastTransitionTime":"2026-01-30T13:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.174874 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:06Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.202307 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de2e647d69dda00d1e83757d0958d012b3c8f5f059259cdf63253fab780a01f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de2e647d69dda00d1e83757d0958d012b3c8f5f059259cdf63253fab780a01f2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:04:47Z\\\",\\\"message\\\":\\\"_cluster\\\\\\\", UUID:\\\\\\\"8b82f026-5975-4a1b-bb18-08d5d51147ec\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.38\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0130 13:04:47.086033 6712 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 13:04:47.086091 6712 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-87gqd_openshift-ovn-kubernetes(4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:06Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.214829 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:06Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.228063 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:06Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.240600 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:06Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.254246 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3331439a416db5e62e9690b27e35551b83d77ddc684d831438944c6cfa029946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:06Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.261460 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.261487 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:06 crc 
kubenswrapper[5039]: I0130 13:05:06.261496 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.261511 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.261519 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:06Z","lastTransitionTime":"2026-01-30T13:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.269287 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ad7a684-cb57-41b4-a5bd-26b4c3b32c38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac7f015bf28a751f02a9af5def847fce3573fc9593e07b807c8c99bcb44b923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6571deb6e4d6c4f139455068196209014919a5b9cfa7694c876e5e228722fd72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://b30c32411245c98f3cc9db85ae5be6604ca38828709b8fbe7f868c16c642c20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f314809377a112b82513c1b9e73d1b24878af618b3da4c7a95703c9774c8b36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f314809377a112b82513c1b9e73d1b24878af618b3da4c7a95703c9774c8b36c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:06Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.286077 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:06Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.308313 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:06Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.320814 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"555be99e-85b7-4cd5-b799-af8a497e3d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf6527ce76b91a1da5463642354979b412ea735d27646ad10a89b582137849a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79790f23c209de69264dc434520854911adb68f6b6759d28718ed9b7c5a200c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dgrjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:06Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.330787 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qzx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qzx7\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:06Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.345435 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
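
The status payloads in the "Failed to update status for pod" entries above are JSON strategic-merge patches, but they arrive wrapped in several layers of backslash escaping (klog quotes the err string, which itself quotes the patch). A minimal sketch for reading them, assuming the escaped patch has been copied into a file passed as the first argument and that at most four quoting layers are present; the program and file names are placeholders, not kubelet tooling:

    package main

    import (
        "bytes"
        "encoding/json"
        "fmt"
        "os"
        "strconv"
    )

    func main() {
        raw, err := os.ReadFile(os.Args[1]) // file holding one escaped patch, e.g. {\"metadata\":{\"uid\":...}}
        if err != nil {
            panic(err)
        }
        s := string(bytes.TrimSpace(raw))
        // Peel quoting layers until the payload parses as a JSON object;
        // the depth depends on how many times the log pipeline re-quoted it.
        for i := 0; i < 4; i++ {
            var probe map[string]any
            if json.Unmarshal([]byte(s), &probe) == nil {
                break
            }
            u, uerr := strconv.Unquote(`"` + s + `"`)
            if uerr != nil {
                break
            }
            s = u
        }
        var out bytes.Buffer
        if err := json.Indent(&out, []byte(s), "", "  "); err != nil {
            fmt.Fprintln(os.Stderr, "payload is still not JSON:", err)
            os.Exit(1)
        }
        fmt.Println(out.String())
    }

Run as, for example, go run unpatch.go patch.txt to see the patch's conditions and containerStatuses laid out readably.
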
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:06Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.357950 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:06Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.363888 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.364141 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.364270 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.364419 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.364539 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:06Z","lastTransitionTime":"2026-01-30T13:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.374507 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:06Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.385491 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:06Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.466846 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.467205 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.467342 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.467485 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.467628 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:06Z","lastTransitionTime":"2026-01-30T13:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.570435 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.570477 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.570493 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.570516 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.570533 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:06Z","lastTransitionTime":"2026-01-30T13:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.673657 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.673698 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.673713 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.673734 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.673751 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:06Z","lastTransitionTime":"2026-01-30T13:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.776836 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.776884 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.776900 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.776924 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.776941 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:06Z","lastTransitionTime":"2026-01-30T13:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.879546 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.879581 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.879591 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.879606 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.879617 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:06Z","lastTransitionTime":"2026-01-30T13:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.982673 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.982718 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.982729 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.982749 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.982764 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:06Z","lastTransitionTime":"2026-01-30T13:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.071918 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 04:42:41.465396349 +0000 UTC Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.085275 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.085314 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.085323 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.085337 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.085347 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:07Z","lastTransitionTime":"2026-01-30T13:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.092758 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:05:07 crc kubenswrapper[5039]: E0130 13:05:07.092881 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.092758 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:05:07 crc kubenswrapper[5039]: E0130 13:05:07.093097 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.188254 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.188293 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.188303 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.188318 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.188328 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:07Z","lastTransitionTime":"2026-01-30T13:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.291761 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.291811 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.291832 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.291858 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.291875 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:07Z","lastTransitionTime":"2026-01-30T13:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.394555 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.394586 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.394594 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.394608 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.394620 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:07Z","lastTransitionTime":"2026-01-30T13:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.496979 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.497181 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.497265 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.497334 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.497398 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:07Z","lastTransitionTime":"2026-01-30T13:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.599777 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.599841 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.599858 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.599880 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.599895 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:07Z","lastTransitionTime":"2026-01-30T13:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.702665 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.702701 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.702713 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.702728 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.702738 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:07Z","lastTransitionTime":"2026-01-30T13:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.804501 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.804593 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.804602 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.804614 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.804625 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:07Z","lastTransitionTime":"2026-01-30T13:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.907380 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.907431 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.907448 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.907472 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.907494 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:07Z","lastTransitionTime":"2026-01-30T13:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.011079 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.011153 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.011167 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.011195 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.011209 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:08Z","lastTransitionTime":"2026-01-30T13:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.073295 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 17:33:02.473626351 +0000 UTC Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.092844 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.093234 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:05:08 crc kubenswrapper[5039]: E0130 13:05:08.093335 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:05:08 crc kubenswrapper[5039]: E0130 13:05:08.093597 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.108582 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.113370 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.113395 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.113405 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.113419 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.113429 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:08Z","lastTransitionTime":"2026-01-30T13:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.215866 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.215901 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.215910 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.215925 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.215934 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:08Z","lastTransitionTime":"2026-01-30T13:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.318645 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.318679 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.318690 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.318705 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.318716 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:08Z","lastTransitionTime":"2026-01-30T13:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.420721 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.420758 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.420770 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.420785 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.420796 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:08Z","lastTransitionTime":"2026-01-30T13:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.522976 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.523034 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.523045 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.523063 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.523074 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:08Z","lastTransitionTime":"2026-01-30T13:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.562389 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rmqgh_81e001d6-9163-47f7-b2b0-b21b2979b869/kube-multus/0.log" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.562434 5039 generic.go:334] "Generic (PLEG): container finished" podID="81e001d6-9163-47f7-b2b0-b21b2979b869" containerID="aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22" exitCode=1 Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.562482 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rmqgh" event={"ID":"81e001d6-9163-47f7-b2b0-b21b2979b869","Type":"ContainerDied","Data":"aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22"} Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.562963 5039 scope.go:117] "RemoveContainer" containerID="aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.576359 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.591686 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.606459 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.615539 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.624804 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.624838 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.624851 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.624866 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.624877 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:08Z","lastTransitionTime":"2026-01-30T13:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.633940 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.647412 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c085b7dbceda7ee340ac27580ace8fe47ea9455d4a64de6260121be5e836693\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.661304 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.673497 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.727026 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.727070 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.727083 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.727099 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.727112 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:08Z","lastTransitionTime":"2026-01-30T13:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.729828 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de2e647d69dda00d1e83757d0958d012b3c8f5f0
59259cdf63253fab780a01f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de2e647d69dda00d1e83757d0958d012b3c8f5f059259cdf63253fab780a01f2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:04:47Z\\\",\\\"message\\\":\\\"_cluster\\\\\\\", UUID:\\\\\\\"8b82f026-5975-4a1b-bb18-08d5d51147ec\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.38\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0130 13:04:47.086033 6712 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 13:04:47.086091 6712 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-87gqd_openshift-ovn-kubernetes(4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.746606 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.765441 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.774776 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.787835 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3331439a416db5e62e9690b27e35551b83d77ddc684d831438944c6cfa029946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.799839 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1755521b-b0f0-4cac-9c76-de79da896bb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3b8aeaaf87c202a0f7f8523bf9d4b56fb714b2e8e5d307a314009694902951\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2054b34a43d100fa8ff3a07a6192760bb37cfb70481475aee514c54350d3532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2054b34a43d100fa8ff3a07a6192760bb37cfb70481475aee514c54350d3532c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.811385 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ad7a684-cb57-41b4-a5bd-26b4c3b32c38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac7f015bf28a751f02a9af5def847fce3573fc9593e07b807c8c99bcb44b923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6571deb6e4d6c4f139455068196209014919a5b9cfa7694c876e5e228722fd72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b30c32411245c98f3cc9db85ae5be6604ca38828709b8fbe7f868c16c642c20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f314809377a112b82513c1b9e73d1b24878af618b3da4c7a95703c9774c8b36c\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f314809377a112b82513c1b9e73d1b24878af618b3da4c7a95703c9774c8b36c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.824500 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:05:07Z\\\",\\\"message\\\":\\\"2026-01-30T13:04:21+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fb496473-2d52-417b-b31e-b06707979b1c\\\\n2026-01-30T13:04:21+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fb496473-2d52-417b-b31e-b06707979b1c to /host/opt/cni/bin/\\\\n2026-01-30T13:04:22Z [verbose] multus-daemon started\\\\n2026-01-30T13:04:22Z [verbose] Readiness Indicator file check\\\\n2026-01-30T13:05:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.829258 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.829279 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.829287 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.829303 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 
13:05:08.829315 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:08Z","lastTransitionTime":"2026-01-30T13:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.835465 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.846553 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"555be99e-85b7-4cd5-b799-af8a497e3d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf6527ce76b91a1da5463642354979b412ea735d27646ad10a89b582137849a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79790f23c209de69264dc434520854911adb68f6b6759d28718ed9b7c5a200c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dgrjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.857325 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qzx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qzx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.931378 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.931680 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.931768 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.931848 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.931923 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:08Z","lastTransitionTime":"2026-01-30T13:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.034367 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.034411 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.034428 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.034452 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.034469 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:09Z","lastTransitionTime":"2026-01-30T13:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.074421 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 21:43:55.532331559 +0000 UTC Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.092924 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:05:09 crc kubenswrapper[5039]: E0130 13:05:09.093256 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.093277 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:05:09 crc kubenswrapper[5039]: E0130 13:05:09.093461 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.136357 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.136384 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.136392 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.136404 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.136413 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:09Z","lastTransitionTime":"2026-01-30T13:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.238046 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.238072 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.238081 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.238094 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.238102 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:09Z","lastTransitionTime":"2026-01-30T13:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.340673 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.340731 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.340746 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.340771 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.340784 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:09Z","lastTransitionTime":"2026-01-30T13:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.442877 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.442933 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.442945 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.442961 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.442974 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:09Z","lastTransitionTime":"2026-01-30T13:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.546362 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.546428 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.546451 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.546478 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.546500 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:09Z","lastTransitionTime":"2026-01-30T13:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.566411 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rmqgh_81e001d6-9163-47f7-b2b0-b21b2979b869/kube-multus/0.log" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.566466 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rmqgh" event={"ID":"81e001d6-9163-47f7-b2b0-b21b2979b869","Type":"ContainerStarted","Data":"c3173dc179804ca55df951c63acc29e7179a356b48e7e77276931f44678c8f94"} Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.580091 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:09Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.592207 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:09Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.602446 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:09Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.615631 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3331439a416db5e62e9690b27e35551b83d77ddc684d831438944c6cfa029946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:09Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.625504 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"555be99e-85b7-4cd5-b799-af8a497e3d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf6527ce76b91a1da5463642354979b412ea735d27646ad10a89b582137849a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79790f23c209de69264dc434520854911adb68f6b6759d28718ed9b7c5a200c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dgrjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:09Z is after 2025-08-24T17:21:41Z" Jan 30 
13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.635584 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qzx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qzx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:09Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.643956 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1755521b-b0f0-4cac-9c76-de79da896bb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3b8aeaaf87c202a0f7f8523bf9d4b56fb714b2e8e5d307a314009694902951\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2054b34a43d100fa8ff3a07a6192760bb37cfb70481475aee514c54350d3532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2054b34a43d100fa8ff3a07a6192760bb37cfb70481475aee514c54350d3532c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:09Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.648775 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.648847 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.648858 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.648874 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.648885 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:09Z","lastTransitionTime":"2026-01-30T13:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.654553 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ad7a684-cb57-41b4-a5bd-26b4c3b32c38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac7f015bf28a751f02a9af5def847fce3573fc9593e07b807c8c99bcb44b923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6571deb6e4d6c4f139455068196209014919a5b9cfa7694c876e5e228722fd72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"nam
e\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b30c32411245c98f3cc9db85ae5be6604ca38828709b8fbe7f868c16c642c20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f314809377a112b82513c1b9e73d1b24878af618b3da4c7a95703c9774c8b36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f314809377a112b82513c1b9e73d1b24878af618b3da4c7a95703c9774c8b36c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:09Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.667006 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3173dc179804ca55df951c63acc29e7179a356b48e7e77276931f44678c8f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:05:07Z\\\",\\\"message\\\":\\\"2026-01-30T13:04:21+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fb496473-2d52-417b-b31e-b06707979b1c\\\\n2026-01-30T13:04:21+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fb496473-2d52-417b-b31e-b06707979b1c to /host/opt/cni/bin/\\\\n2026-01-30T13:04:22Z [verbose] multus-daemon started\\\\n2026-01-30T13:04:22Z [verbose] Readiness Indicator file check\\\\n2026-01-30T13:05:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:05:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:09Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.677801 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:09Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.689821 5039 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:09Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.700284 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:09Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.710780 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:09Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.720440 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:09Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.737496 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82173
a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de2e647d69dda00d1e83757d0958d012b3c8f5f059259cdf63253fab780a01f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de2e647d69dda00d1e83757d0958d012b3c8f5f059259cdf63253fab780a01f2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:04:47Z\\\",\\\"message\\\":\\\"_cluster\\\\\\\", UUID:\\\\\\\"8b82f026-5975-4a1b-bb18-08d5d51147ec\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.38\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0130 13:04:47.086033 6712 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 13:04:47.086091 
6712 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-87gqd_openshift-ovn-kubernetes(4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPat
h\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:09Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.751395 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.751432 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.751445 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.751462 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.751473 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:09Z","lastTransitionTime":"2026-01-30T13:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.754894 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:09Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.768513 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c085b7dbceda7ee340ac27580ace8fe47ea9455d4a64de6260121be5e836693\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:09Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.783450 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:09Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.798967 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:09Z is after 2025-08-24T17:21:41Z"
Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.854407 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.854471 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.854485 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.854502 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.854513 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:09Z","lastTransitionTime":"2026-01-30T13:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.956575 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.956623 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.956643 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.956686 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.956703 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:09Z","lastTransitionTime":"2026-01-30T13:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.063070 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.063111 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.063120 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.063135 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.063144 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:10Z","lastTransitionTime":"2026-01-30T13:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.075463 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 08:53:26.793598286 +0000 UTC
Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.092798 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.092859 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 13:05:10 crc kubenswrapper[5039]: E0130 13:05:10.092902 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 13:05:10 crc kubenswrapper[5039]: E0130 13:05:10.092972 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.164685 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.164729 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.164742 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.164755 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.164763 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:10Z","lastTransitionTime":"2026-01-30T13:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.267309 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.267383 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.267406 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.267434 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.267456 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:10Z","lastTransitionTime":"2026-01-30T13:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.369512 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.369557 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.369569 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.369603 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.369616 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:10Z","lastTransitionTime":"2026-01-30T13:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.471927 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.471967 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.471979 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.471994 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.472005 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:10Z","lastTransitionTime":"2026-01-30T13:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.574132 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.574161 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.574168 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.574181 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.574190 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:10Z","lastTransitionTime":"2026-01-30T13:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.676753 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.676787 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.676798 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.676813 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.676824 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:10Z","lastTransitionTime":"2026-01-30T13:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.778599 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.778635 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.778646 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.778662 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.778673 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:10Z","lastTransitionTime":"2026-01-30T13:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.880531 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.880565 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.880576 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.880592 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.880602 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:10Z","lastTransitionTime":"2026-01-30T13:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.983553 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.983588 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.983598 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.983614 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.983624 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:10Z","lastTransitionTime":"2026-01-30T13:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.075840 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 05:03:23.233043805 +0000 UTC Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.085494 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.085525 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.085535 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.085550 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.085561 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:11Z","lastTransitionTime":"2026-01-30T13:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.093129 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.093181 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:05:11 crc kubenswrapper[5039]: E0130 13:05:11.093401 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:05:11 crc kubenswrapper[5039]: E0130 13:05:11.093517 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.187431 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.187458 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.187467 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.187480 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.187488 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:11Z","lastTransitionTime":"2026-01-30T13:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.290250 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.290315 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.290336 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.290360 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.290376 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:11Z","lastTransitionTime":"2026-01-30T13:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.392902 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.392942 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.392954 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.392971 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.392982 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:11Z","lastTransitionTime":"2026-01-30T13:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.495559 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.495600 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.495608 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.495622 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.495631 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:11Z","lastTransitionTime":"2026-01-30T13:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.598609 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.598697 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.598713 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.598731 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.598742 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:11Z","lastTransitionTime":"2026-01-30T13:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.700889 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.701223 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.701346 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.701423 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.701486 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:11Z","lastTransitionTime":"2026-01-30T13:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.803480 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.803532 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.803542 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.803559 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.803568 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:11Z","lastTransitionTime":"2026-01-30T13:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.906298 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.906360 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.906381 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.906405 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.906421 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:11Z","lastTransitionTime":"2026-01-30T13:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.009247 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.009309 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.009327 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.009351 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.009378 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:12Z","lastTransitionTime":"2026-01-30T13:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.076806 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 22:45:07.896774982 +0000 UTC
Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.093352 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.093375 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 13:05:12 crc kubenswrapper[5039]: E0130 13:05:12.093490 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 13:05:12 crc kubenswrapper[5039]: E0130 13:05:12.093625 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.112067 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.112117 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.112128 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.112145 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.112158 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:12Z","lastTransitionTime":"2026-01-30T13:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.215232 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.215268 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.215278 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.215294 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.215308 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:12Z","lastTransitionTime":"2026-01-30T13:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.318269 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.318337 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.318361 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.318388 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.318410 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:12Z","lastTransitionTime":"2026-01-30T13:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.421117 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.421159 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.421167 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.421181 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.421190 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:12Z","lastTransitionTime":"2026-01-30T13:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.523991 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.524053 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.524069 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.524090 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.524104 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:12Z","lastTransitionTime":"2026-01-30T13:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.626958 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.627041 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.627058 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.627082 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.627100 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:12Z","lastTransitionTime":"2026-01-30T13:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.730498 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.730569 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.730580 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.730597 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.730607 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:12Z","lastTransitionTime":"2026-01-30T13:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.832609 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.832650 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.832660 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.832677 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.832691 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:12Z","lastTransitionTime":"2026-01-30T13:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.935936 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.936101 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.936130 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.936157 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.936176 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:12Z","lastTransitionTime":"2026-01-30T13:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.038734 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.038777 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.038789 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.038806 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.038819 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:13Z","lastTransitionTime":"2026-01-30T13:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.077261 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 18:11:35.613671244 +0000 UTC Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.092589 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.092595 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:05:13 crc kubenswrapper[5039]: E0130 13:05:13.092759 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:05:13 crc kubenswrapper[5039]: E0130 13:05:13.092944 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.142579 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.142641 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.142659 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.142686 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.142705 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:13Z","lastTransitionTime":"2026-01-30T13:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.245666 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.245703 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.245713 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.245729 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.245740 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:13Z","lastTransitionTime":"2026-01-30T13:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.328991 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.329076 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.329087 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.329104 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.329116 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:13Z","lastTransitionTime":"2026-01-30T13:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:13 crc kubenswrapper[5039]: E0130 13:05:13.350299 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74b4d08-4bc5-44af-a5a8-4734678f5be0\\\",\\\"systemUUID\\\":\\\"fb9e5778-7292-4e17-81ad-f7094f787b74\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:13Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.353852 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.353908 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.353922 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.353942 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.353955 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:13Z","lastTransitionTime":"2026-01-30T13:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:13 crc kubenswrapper[5039]: E0130 13:05:13.369875 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74b4d08-4bc5-44af-a5a8-4734678f5be0\\\",\\\"systemUUID\\\":\\\"fb9e5778-7292-4e17-81ad-f7094f787b74\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:13Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.373287 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.373429 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
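Every failed status patch in this stretch of the log terminates in the same TLS error: the node.network-node-identity.openshift.io webhook on https://127.0.0.1:9743 serves a certificate whose NotAfter (2025-08-24T17:21:41Z) precedes the node clock (2026-01-30T13:05:13Z) by roughly five months, so certificate verification rejects the handshake before the patch is ever delivered. Below is a minimal, self-contained Go sketch of that validity-window check — not kubelet code; the certificate is synthesized in memory with the NotAfter seen above so the sketch runs standalone.

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"time"
)

func main() {
	// Build a self-signed certificate whose validity window mirrors the
	// one in the log: NotAfter = 2025-08-24T17:21:41Z. (Synthetic stand-in
	// for the webhook's real serving certificate.)
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "network-node-identity.openshift.io"},
		NotBefore:    time.Date(2025, 5, 24, 17, 21, 41, 0, time.UTC),
		NotAfter:     time.Date(2025, 8, 24, 17, 21, 41, 0, time.UTC),
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	cert, err := x509.ParseCertificate(der)
	if err != nil {
		panic(err)
	}

	// The node's clock at the time of the failing webhook calls.
	now := time.Date(2026, 1, 30, 13, 5, 13, 0, time.UTC)

	// The same validity-window comparison the TLS handshake performs; this
	// is what yields "x509: certificate has expired or is not yet valid".
	if now.After(cert.NotAfter) {
		fmt.Printf("certificate has expired: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	} else if now.Before(cert.NotBefore) {
		fmt.Printf("certificate is not yet valid: current time %s is before %s\n",
			now.Format(time.RFC3339), cert.NotBefore.Format(time.RFC3339))
	}
}

The same comparison fails inside Go's TLS stack during the kubelet's Post to the webhook, so until the serving certificate is rotated, every retry of the node status patch fails identically, as the repeated attempts below show.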
event="NodeHasNoDiskPressure" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.373444 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.373458 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.373467 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:13Z","lastTransitionTime":"2026-01-30T13:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:13 crc kubenswrapper[5039]: E0130 13:05:13.388112 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74b4d08-4bc5-44af-a5a8-4734678f5be0\\\",\\\"systemUUID\\\":\\\"fb9e5778-7292-4e17-81ad-f7094f787b74\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:13Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.390956 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.391000 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
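Between retries the kubelet keeps re-deriving the same Ready condition, and the condition={...} object logged by setters.go:603 is plain JSON, so it can be pulled out of the log and decoded directly. A small Go sketch doing exactly that, using the condition text verbatim from the entries above — the struct is a local stand-in for the corresponding NodeCondition fields, not an import of the Kubernetes API types.

package main

import (
	"encoding/json"
	"fmt"
)

// condition mirrors the fields of the condition={...} object logged by
// setters.go:603 each time the node is marked not ready.
type condition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// Taken verbatim from the log entries above.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:13Z","lastTransitionTime":"2026-01-30T13:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`

	var c condition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	// The root cause the kubelet keeps reporting: no CNI config on disk yet.
	fmt.Printf("%s=%s reason=%s\n", c.Type, c.Status, c.Reason)
	fmt.Println(c.Message)
}

The reason/message pair points at the actual blocker: nothing has written a CNI configuration into /etc/kubernetes/cni/net.d/ yet, which is why the NodeNotReady events and the sandbox/sync errors for networking-console-plugin-85b44fc459-gdk6g and network-metrics-daemon-5qzx7 recur on every pass.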
event="NodeHasNoDiskPressure" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.391049 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.391063 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.391071 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:13Z","lastTransitionTime":"2026-01-30T13:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:13 crc kubenswrapper[5039]: E0130 13:05:13.401777 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74b4d08-4bc5-44af-a5a8-4734678f5be0\\\",\\\"systemUUID\\\":\\\"fb9e5778-7292-4e17-81ad-f7094f787b74\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:13Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.404598 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.404641 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.404650 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.404661 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.404669 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:13Z","lastTransitionTime":"2026-01-30T13:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:13 crc kubenswrapper[5039]: E0130 13:05:13.417959 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74b4d08-4bc5-44af-a5a8-4734678f5be0\\\",\\\"systemUUID\\\":\\\"fb9e5778-7292-4e17-81ad-f7094f787b74\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:13Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:13 crc kubenswrapper[5039]: E0130 13:05:13.418081 5039 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.419641 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.419692 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.419705 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.419731 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.419745 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:13Z","lastTransitionTime":"2026-01-30T13:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.522999 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.523059 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.523070 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.523089 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.523101 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:13Z","lastTransitionTime":"2026-01-30T13:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.625944 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.625993 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.626005 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.626044 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.626058 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:13Z","lastTransitionTime":"2026-01-30T13:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.730641 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.730707 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.730728 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.730755 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.730776 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:13Z","lastTransitionTime":"2026-01-30T13:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.834235 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.834272 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.834283 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.834300 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.834312 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:13Z","lastTransitionTime":"2026-01-30T13:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.936877 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.936990 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.937347 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.937565 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.937589 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:13Z","lastTransitionTime":"2026-01-30T13:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.039427 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.039464 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.039475 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.039487 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.039495 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:14Z","lastTransitionTime":"2026-01-30T13:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.077883 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 03:46:01.146388065 +0000 UTC Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.093327 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.093414 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:05:14 crc kubenswrapper[5039]: E0130 13:05:14.093449 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:05:14 crc kubenswrapper[5039]: E0130 13:05:14.093562 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.094738 5039 scope.go:117] "RemoveContainer" containerID="de2e647d69dda00d1e83757d0958d012b3c8f5f059259cdf63253fab780a01f2" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.142174 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.142213 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.142222 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.142237 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.142247 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:14Z","lastTransitionTime":"2026-01-30T13:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.244876 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.244997 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.245042 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.245059 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.245069 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:14Z","lastTransitionTime":"2026-01-30T13:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.347653 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.347701 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.347712 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.347731 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.347745 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:14Z","lastTransitionTime":"2026-01-30T13:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.449864 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.449942 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.449953 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.449977 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.449990 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:14Z","lastTransitionTime":"2026-01-30T13:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.552227 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.552278 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.552295 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.552316 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.552332 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:14Z","lastTransitionTime":"2026-01-30T13:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.583124 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-87gqd_4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f/ovnkube-controller/2.log" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.585716 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" event={"ID":"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f","Type":"ContainerStarted","Data":"c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977"} Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.586159 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.596201 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qzx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qzx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:14Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.605958 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1755521b-b0f0-4cac-9c76-de79da896bb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3b8aeaaf87c202a0f7f8523bf9d4b56fb714b2e8e5d307a314009694902951\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2054b34a43d100fa8ff3a07a6192760bb37cfb70481475aee514c54350d3532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2054b34a43d100fa8ff3a07a6192760bb37cfb70481475aee514c54350d3532c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:14Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.618407 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ad7a684-cb57-41b4-a5bd-26b4c3b32c38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac7f015bf28a751f02a9af5def847fce3573fc9593e07b807c8c99bcb44b923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6571deb6e4d6c4f139455068196209014919a5b9cfa7694c876e5e228722fd72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b30c32411245c98f3cc9db85ae5be6604ca38828709b8fbe7f868c16c642c20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f314809377a112b82513c1b9e73d1b24878af618b3da4c7a95703c9774c8b36c\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f314809377a112b82513c1b9e73d1b24878af618b3da4c7a95703c9774c8b36c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:14Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.633384 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3173dc179804ca55df951c63acc29e7179a356b48e7e77276931f44678c8f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:05:07Z\\\",\\\"message\\\":\\\"2026-01-30T13:04:21+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fb496473-2d52-417b-b31e-b06707979b1c\\\\n2026-01-30T13:04:21+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fb496473-2d52-417b-b31e-b06707979b1c to /host/opt/cni/bin/\\\\n2026-01-30T13:04:22Z [verbose] multus-daemon started\\\\n2026-01-30T13:04:22Z [verbose] Readiness Indicator file check\\\\n2026-01-30T13:05:07Z [error] have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:05:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:14Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.647625 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:14Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.655090 5039 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.655227 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.655334 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.655414 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.655498 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:14Z","lastTransitionTime":"2026-01-30T13:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.662243 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"555be99e-85b7-4cd5-b799-af8a497e3d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf6527ce76b91a1da5463642354979b412ea735d27646ad10a89b582137849a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79790f23c209de69264dc434520854911adb68f6b6759d28718ed9b7c5a200c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dgrjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:14Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.676152 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:14Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.689920 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:14Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.709225 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:14Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.721351 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:14Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.741711 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f
95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:14Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.753744 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df
952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c085b7dbceda7ee340ac27580ace8fe47ea9455d4a64de6260121be5e836693\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:14Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.757120 5039 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.757154 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.757163 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.757177 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.757187 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:14Z","lastTransitionTime":"2026-01-30T13:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.767213 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\
\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:14Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.779669 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:14Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.795726 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2972d2ac57bf2443a67c41cecb0375e17ee2cfc
2fb7eb55e5f3cb04ca79a977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de2e647d69dda00d1e83757d0958d012b3c8f5f059259cdf63253fab780a01f2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:04:47Z\\\",\\\"message\\\":\\\"_cluster\\\\\\\", UUID:\\\\\\\"8b82f026-5975-4a1b-bb18-08d5d51147ec\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.38\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0130 13:04:47.086033 6712 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 13:04:47.086091 6712 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:14Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.810423 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:14Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.823059 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:14Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.833675 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:14Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.849412 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3331439a416db5e62e9690b27e35551b83d77ddc684d831438944c6cfa029946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:14Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.858851 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.858881 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:14 crc 
kubenswrapper[5039]: I0130 13:05:14.858889 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.858902 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.858929 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:14Z","lastTransitionTime":"2026-01-30T13:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.960955 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.960989 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.960999 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.961033 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.961044 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:14Z","lastTransitionTime":"2026-01-30T13:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.063657 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.063698 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.063709 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.063724 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.063735 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:15Z","lastTransitionTime":"2026-01-30T13:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.078883 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 20:25:29.636666544 +0000 UTC Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.093201 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:05:15 crc kubenswrapper[5039]: E0130 13:05:15.093348 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.093429 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:05:15 crc kubenswrapper[5039]: E0130 13:05:15.093593 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.166031 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.166079 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.166096 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.166114 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.166126 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:15Z","lastTransitionTime":"2026-01-30T13:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.268551 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.268597 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.268608 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.268625 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.268640 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:15Z","lastTransitionTime":"2026-01-30T13:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.371549 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.371606 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.371622 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.371642 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.371658 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:15Z","lastTransitionTime":"2026-01-30T13:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.474251 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.474301 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.474313 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.474331 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.474343 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:15Z","lastTransitionTime":"2026-01-30T13:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.577300 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.577349 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.577360 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.577376 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.577387 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:15Z","lastTransitionTime":"2026-01-30T13:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.589734 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-87gqd_4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f/ovnkube-controller/3.log" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.590625 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-87gqd_4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f/ovnkube-controller/2.log" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.593097 5039 generic.go:334] "Generic (PLEG): container finished" podID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerID="c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977" exitCode=1 Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.593131 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" event={"ID":"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f","Type":"ContainerDied","Data":"c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977"} Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.593160 5039 scope.go:117] "RemoveContainer" containerID="de2e647d69dda00d1e83757d0958d012b3c8f5f059259cdf63253fab780a01f2" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.594274 5039 scope.go:117] "RemoveContainer" containerID="c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977" Jan 30 13:05:15 crc kubenswrapper[5039]: E0130 13:05:15.594597 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-87gqd_openshift-ovn-kubernetes(4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.608088 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c085b7dbceda7ee340ac27580ace8fe47ea9455d4a64de6260121be5e836693\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:15Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.624129 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:15Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.642743 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:15Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.673464 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de2e647d69dda00d1e83757d0958d012b3c8f5f059259cdf63253fab780a01f2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:04:47Z\\\",\\\"message\\\":\\\"_cluster\\\\\\\", UUID:\\\\\\\"8b82f026-5975-4a1b-bb18-08d5d51147ec\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.38\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0130 13:04:47.086033 6712 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 13:04:47.086091 6712 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable 
to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:05:14Z\\\",\\\"message\\\":\\\"er:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 13:05:14.909476 7126 services_controller.go:454] Service openshift-dns-operator/metrics for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0130 13:05:14.909474 7126 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:14Z 
i\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:15Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.679497 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.679554 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.679569 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.679591 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.679605 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:15Z","lastTransitionTime":"2026-01-30T13:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.693157 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:15Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.704388 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:15Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.713763 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:15Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.730086 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3331439a416db5e62e9690b27e35551b83d77ddc684d831438944c6cfa029946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:15Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.743564 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:15Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.759462 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ad7a684-cb57-41b4-a5bd-26b4c3b32c38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac7f015bf28a751f02a9af5def847fce3573fc9593e07b807c8c99bcb44b923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6571deb6e4d6c4f139455068196209014919a5b9cfa7694c876e5e228722fd72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b30c32411245c98f3cc9db85ae5be6604ca38828709b8fbe7f868c16c642c20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f314809377a112b82513c1b9e73d1b24878af618b3da4c7a95703c9774c8b36c\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f314809377a112b82513c1b9e73d1b24878af618b3da4c7a95703c9774c8b36c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:15Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.771701 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3173dc179804ca55df951c63acc29e7179a356b48e7e77276931f44678c8f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:05:07Z\\\",\\\"message\\\":\\\"2026-01-30T13:04:21+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fb496473-2d52-417b-b31e-b06707979b1c\\\\n2026-01-30T13:04:21+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fb496473-2d52-417b-b31e-b06707979b1c to /host/opt/cni/bin/\\\\n2026-01-30T13:04:22Z [verbose] multus-daemon started\\\\n2026-01-30T13:04:22Z [verbose] Readiness Indicator file check\\\\n2026-01-30T13:05:07Z [error] have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:05:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:15Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.781850 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.781911 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.781919 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.781934 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.781944 5039 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:15Z","lastTransitionTime":"2026-01-30T13:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.782599 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:15Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.793320 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"555be99e-85b7-4cd5-b799-af8a497e3d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf6527ce76b91a1da5463642354979b412ea735d27646ad10a89b582137849a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79790f23c209de69264dc434520854911adb68f6b6759d28718ed9b7c5a200c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dgrjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:15Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.805052 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qzx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qzx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:15Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.815871 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1755521b-b0f0-4cac-9c76-de79da896bb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3b8aeaaf87c202a0f7f8523bf9d4b56fb714b2e8e5d307a314009694902951\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2054b34a43d100fa8ff3a07a6192760bb37cfb70481475aee514c54350d3532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2054b34a43d100fa8ff3a07a6192760bb37cfb70481475aee514c54350d3532c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:15Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.829362 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:15Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.847240 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:15Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.858182 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:15Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.872866 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:15Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.885591 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.885681 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.885699 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.885724 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.885777 5039 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:15Z","lastTransitionTime":"2026-01-30T13:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.989520 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.989599 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.989620 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.989644 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.989665 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:15Z","lastTransitionTime":"2026-01-30T13:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.079547 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 03:56:28.388337186 +0000 UTC Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.092652 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.092659 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:05:16 crc kubenswrapper[5039]: E0130 13:05:16.092863 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:05:16 crc kubenswrapper[5039]: E0130 13:05:16.092920 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.093620 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.093714 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.093772 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.093805 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.093830 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:16Z","lastTransitionTime":"2026-01-30T13:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.111043 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crco
nt/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c085b7dbceda7ee340ac27580ace8fe47ea9455d4a64de6260121be5e836693\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.126109 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.145946 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.171441 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de2e647d69dda00d1e83757d0958d012b3c8f5f059259cdf63253fab780a01f2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:04:47Z\\\",\\\"message\\\":\\\"_cluster\\\\\\\", UUID:\\\\\\\"8b82f026-5975-4a1b-bb18-08d5d51147ec\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.38\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0130 13:04:47.086033 6712 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 13:04:47.086091 6712 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable 
to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:05:14Z\\\",\\\"message\\\":\\\"er:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 13:05:14.909476 7126 services_controller.go:454] Service openshift-dns-operator/metrics for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0130 13:05:14.909474 7126 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:14Z 
i\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.195459 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.195495 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.195505 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.195521 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.195533 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:16Z","lastTransitionTime":"2026-01-30T13:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.201186 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.215113 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.224428 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.245399 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3331439a416db5e62e9690b27e35551b83d77ddc684d831438944c6cfa029946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.262303 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.277816 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ad7a684-cb57-41b4-a5bd-26b4c3b32c38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac7f015bf28a751f02a9af5def847fce3573fc9593e07b807c8c99bcb44b923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6571deb6e4d6c4f139455068196209014919a5b9cfa7694c876e5e228722fd72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b30c32411245c98f3cc9db85ae5be6604ca38828709b8fbe7f868c16c642c20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f314809377a112b82513c1b9e73d1b24878af618b3da4c7a95703c9774c8b36c\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f314809377a112b82513c1b9e73d1b24878af618b3da4c7a95703c9774c8b36c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.290099 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3173dc179804ca55df951c63acc29e7179a356b48e7e77276931f44678c8f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:05:07Z\\\",\\\"message\\\":\\\"2026-01-30T13:04:21+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fb496473-2d52-417b-b31e-b06707979b1c\\\\n2026-01-30T13:04:21+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fb496473-2d52-417b-b31e-b06707979b1c to /host/opt/cni/bin/\\\\n2026-01-30T13:04:22Z [verbose] multus-daemon started\\\\n2026-01-30T13:04:22Z [verbose] Readiness Indicator file check\\\\n2026-01-30T13:05:07Z [error] have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:05:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.298027 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.298075 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.298086 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.298104 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.298437 5039 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:16Z","lastTransitionTime":"2026-01-30T13:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.305977 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.318312 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"555be99e-85b7-4cd5-b799-af8a497e3d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf6527ce76b91a1da5463642354979b412ea735d27646ad10a89b582137849a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79790f23c209de69264dc434520854911adb68f6b6759d28718ed9b7c5a200c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dgrjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.331614 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qzx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qzx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.345835 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1755521b-b0f0-4cac-9c76-de79da896bb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3b8aeaaf87c202a0f7f8523bf9d4b56fb714b2e8e5d307a314009694902951\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2054b34a43d100fa8ff3a07a6192760bb37cfb70481475aee514c54350d3532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2054b34a43d100fa8ff3a07a6192760bb37cfb70481475aee514c54350d3532c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.361252 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.374356 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.385187 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.397603 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.400362 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.400408 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.400420 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.400434 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.400463 5039 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:16Z","lastTransitionTime":"2026-01-30T13:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.501783 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.501818 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.501828 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.501840 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.501849 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:16Z","lastTransitionTime":"2026-01-30T13:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.599004 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-87gqd_4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f/ovnkube-controller/3.log" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.603471 5039 scope.go:117] "RemoveContainer" containerID="c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.603586 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.603610 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.603620 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.603633 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.603643 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:16Z","lastTransitionTime":"2026-01-30T13:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:16 crc kubenswrapper[5039]: E0130 13:05:16.603767 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-87gqd_openshift-ovn-kubernetes(4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.619831 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.631419 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.647270 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.659642 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.680628 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f
95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.701640 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df
952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c085b7dbceda7ee340ac27580ace8fe47ea9455d4a64de6260121be5e836693\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.706827 5039 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.706878 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.706889 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.706909 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.706922 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:16Z","lastTransitionTime":"2026-01-30T13:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.719479 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\
\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.733614 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.749389 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2972d2ac57bf2443a67c41cecb0375e17ee2cfc
2fb7eb55e5f3cb04ca79a977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:05:14Z\\\",\\\"message\\\":\\\"er:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 13:05:14.909476 7126 services_controller.go:454] Service openshift-dns-operator/metrics for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0130 13:05:14.909474 7126 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:14Z i\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:05:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-87gqd_openshift-ovn-kubernetes(4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.760516 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.771414 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.780409 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.793679 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3331439a416db5e62e9690b27e35551b83d77ddc684d831438944c6cfa029946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z"
Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.803810 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1755521b-b0f0-4cac-9c76-de79da896bb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3b8aeaaf87c202a0f7f8523bf9d4b56fb714b2e8e5d307a314009694902951\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2054b34a43d100fa8ff3a07a6192760bb37cfb70481475aee514c54350d3532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2054b34a43d100fa8ff3a07a6192760bb37cfb70481475aee514c54350d3532c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z"
Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.808803 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.808851 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.808860 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.808873 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.808883 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:16Z","lastTransitionTime":"2026-01-30T13:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.816847 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ad7a684-cb57-41b4-a5bd-26b4c3b32c38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac7f015bf28a751f02a9af5def847fce3573fc9593e07b807c8c99bcb44b923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6571deb6e4d6c4f139455068196209014919a5b9cfa7694c876e5e228722fd72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b30c32411245c98f3cc9db85ae5be6604ca38828709b8fbe7f868c16c642c20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f314809377a112b82513c1b9e73d1b24878af618b3da4c7a95703c9774c8b36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f314809377a112b82513c1b9e73d1b24878af618b3da4c7a95703c9774c8b36c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z"
Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.829160 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3173dc179804ca55df951c63acc29e7179a356b48e7e77276931f44678c8f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:05:07Z\\\",\\\"message\\\":\\\"2026-01-30T13:04:21+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fb496473-2d52-417b-b31e-b06707979b1c\\\\n2026-01-30T13:04:21+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fb496473-2d52-417b-b31e-b06707979b1c to /host/opt/cni/bin/\\\\n2026-01-30T13:04:22Z [verbose] multus-daemon started\\\\n2026-01-30T13:04:22Z [verbose] Readiness Indicator file check\\\\n2026-01-30T13:05:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:05:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z"
Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.838889 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z"
Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.849819 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"555be99e-85b7-4cd5-b799-af8a497e3d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf6527ce76b91a1da5463642354979b412ea735d27646ad10a89b582137849a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79790f23c209de69264dc434520854911adb68f6b6759d28718ed9b7c5a200c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dgrjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z"
Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.857892 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qzx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qzx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z"
Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.911440 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.911505 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.911522 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.911546 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.911564 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:16Z","lastTransitionTime":"2026-01-30T13:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.014765 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.014826 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.014841 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.014865 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.014882 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:17Z","lastTransitionTime":"2026-01-30T13:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.080218 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 08:01:04.648337212 +0000 UTC Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.092783 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.092782 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:05:17 crc kubenswrapper[5039]: E0130 13:05:17.093062 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:05:17 crc kubenswrapper[5039]: E0130 13:05:17.093172 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.118147 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.118211 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.118228 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.118251 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.118268 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:17Z","lastTransitionTime":"2026-01-30T13:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.221365 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.221446 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.221470 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.221500 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.221519 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:17Z","lastTransitionTime":"2026-01-30T13:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.324557 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.324598 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.324613 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.324629 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.324639 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:17Z","lastTransitionTime":"2026-01-30T13:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.426534 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.426570 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.426578 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.426591 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.426600 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:17Z","lastTransitionTime":"2026-01-30T13:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.528827 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.528881 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.528891 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.528908 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.528920 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:17Z","lastTransitionTime":"2026-01-30T13:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.632054 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.632097 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.632128 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.632150 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.632166 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:17Z","lastTransitionTime":"2026-01-30T13:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.734741 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.734791 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.734804 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.734825 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.734839 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:17Z","lastTransitionTime":"2026-01-30T13:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.837207 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.837253 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.837272 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.837295 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.837313 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:17Z","lastTransitionTime":"2026-01-30T13:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.940593 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.940642 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.940654 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.940671 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.940683 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:17Z","lastTransitionTime":"2026-01-30T13:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.043652 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.043722 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.043745 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.043777 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.043800 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:18Z","lastTransitionTime":"2026-01-30T13:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.080669 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 13:40:17.090936998 +0000 UTC Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.093126 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:05:18 crc kubenswrapper[5039]: E0130 13:05:18.093324 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.093396 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:05:18 crc kubenswrapper[5039]: E0130 13:05:18.093582 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.146905 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.146939 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.146950 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.146968 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.146980 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:18Z","lastTransitionTime":"2026-01-30T13:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.251139 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.251182 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.251193 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.251213 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.251225 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:18Z","lastTransitionTime":"2026-01-30T13:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.353477 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.353506 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.353517 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.353532 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.353543 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:18Z","lastTransitionTime":"2026-01-30T13:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.455637 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.455678 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.455690 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.455707 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.455719 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:18Z","lastTransitionTime":"2026-01-30T13:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.558252 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.558595 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.558616 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.558638 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.558654 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:18Z","lastTransitionTime":"2026-01-30T13:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.665630 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.665662 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.665670 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.665684 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.665693 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:18Z","lastTransitionTime":"2026-01-30T13:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.767989 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.768049 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.768059 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.768077 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.768087 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:18Z","lastTransitionTime":"2026-01-30T13:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.870803 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.870877 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.870898 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.870929 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.870957 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:18Z","lastTransitionTime":"2026-01-30T13:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.973065 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.973147 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.973170 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.973201 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.973221 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:18Z","lastTransitionTime":"2026-01-30T13:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.076668 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.076731 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.076748 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.076774 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.076791 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:19Z","lastTransitionTime":"2026-01-30T13:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.081005 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 03:03:08.495603098 +0000 UTC Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.093401 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.093456 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:05:19 crc kubenswrapper[5039]: E0130 13:05:19.093524 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:05:19 crc kubenswrapper[5039]: E0130 13:05:19.093663 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.179250 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.179328 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.179346 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.179370 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.179387 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:19Z","lastTransitionTime":"2026-01-30T13:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.281659 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.281740 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.281752 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.281776 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.281795 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:19Z","lastTransitionTime":"2026-01-30T13:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.384397 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.384447 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.384465 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.384488 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.384504 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:19Z","lastTransitionTime":"2026-01-30T13:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.486661 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.486696 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.486704 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.486718 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.486726 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:19Z","lastTransitionTime":"2026-01-30T13:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.590173 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.590225 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.590238 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.590257 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.590281 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:19Z","lastTransitionTime":"2026-01-30T13:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.693862 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.693923 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.693940 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.693965 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.693983 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:19Z","lastTransitionTime":"2026-01-30T13:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.767654 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:05:19 crc kubenswrapper[5039]: E0130 13:05:19.767941 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:23.767906945 +0000 UTC m=+148.428588222 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.796606 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.796665 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.796688 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.796716 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.796736 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:19Z","lastTransitionTime":"2026-01-30T13:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.868852 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.868899 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.868948 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.868987 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:05:19 crc kubenswrapper[5039]: E0130 13:05:19.869124 5039 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 13:05:19 crc kubenswrapper[5039]: E0130 13:05:19.869140 5039 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 13:05:19 crc kubenswrapper[5039]: E0130 13:05:19.869133 5039 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 13:05:19 crc kubenswrapper[5039]: E0130 13:05:19.869233 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 13:06:23.869202623 +0000 UTC m=+148.529883890 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 13:05:19 crc kubenswrapper[5039]: E0130 13:05:19.869260 5039 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 13:05:19 crc kubenswrapper[5039]: E0130 13:05:19.869269 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 13:06:23.869251784 +0000 UTC m=+148.529933061 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 13:05:19 crc kubenswrapper[5039]: E0130 13:05:19.869286 5039 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:05:19 crc kubenswrapper[5039]: E0130 13:05:19.869133 5039 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 13:05:19 crc kubenswrapper[5039]: E0130 13:05:19.869334 5039 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 13:05:19 crc kubenswrapper[5039]: E0130 13:05:19.869355 5039 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:05:19 crc kubenswrapper[5039]: E0130 13:05:19.869373 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 13:06:23.869345807 +0000 UTC m=+148.530027074 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:05:19 crc kubenswrapper[5039]: E0130 13:05:19.869410 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 13:06:23.869392528 +0000 UTC m=+148.530073795 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.899299 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.899334 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.899346 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.899362 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.899372 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:19Z","lastTransitionTime":"2026-01-30T13:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.038797 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.038842 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.038852 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.038868 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.038879 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:20Z","lastTransitionTime":"2026-01-30T13:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.081671 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 04:28:36.234127284 +0000 UTC Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.093985 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:05:20 crc kubenswrapper[5039]: E0130 13:05:20.094137 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.094196 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:05:20 crc kubenswrapper[5039]: E0130 13:05:20.094388 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.142320 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.142387 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.142400 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.142423 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.142438 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:20Z","lastTransitionTime":"2026-01-30T13:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.246050 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.246111 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.246122 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.246146 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.246161 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:20Z","lastTransitionTime":"2026-01-30T13:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.348808 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.348901 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.348914 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.348931 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.348942 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:20Z","lastTransitionTime":"2026-01-30T13:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.451593 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.451635 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.451645 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.451700 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.451713 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:20Z","lastTransitionTime":"2026-01-30T13:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.554214 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.554258 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.554268 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.554285 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.554297 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:20Z","lastTransitionTime":"2026-01-30T13:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.656765 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.656821 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.656831 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.656846 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.656857 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:20Z","lastTransitionTime":"2026-01-30T13:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.759609 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.759641 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.759650 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.759680 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.759693 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:20Z","lastTransitionTime":"2026-01-30T13:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.862669 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.862728 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.862743 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.862768 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.862788 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:20Z","lastTransitionTime":"2026-01-30T13:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.965290 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.965338 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.965354 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.965370 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.965384 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:20Z","lastTransitionTime":"2026-01-30T13:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.067643 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.067689 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.067700 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.067720 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.067733 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:21Z","lastTransitionTime":"2026-01-30T13:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.082303 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 05:01:57.343284615 +0000 UTC Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.092510 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.092606 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:05:21 crc kubenswrapper[5039]: E0130 13:05:21.092628 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:05:21 crc kubenswrapper[5039]: E0130 13:05:21.092759 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.170316 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.170381 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.170400 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.170425 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.170445 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:21Z","lastTransitionTime":"2026-01-30T13:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.273139 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.273181 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.273192 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.273209 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.273222 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:21Z","lastTransitionTime":"2026-01-30T13:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.375527 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.375566 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.375575 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.375594 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.375605 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:21Z","lastTransitionTime":"2026-01-30T13:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.478168 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.478215 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.478227 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.478244 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.478255 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:21Z","lastTransitionTime":"2026-01-30T13:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.580196 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.580258 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.580273 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.580288 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.580321 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:21Z","lastTransitionTime":"2026-01-30T13:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.682997 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.683128 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.683273 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.683297 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.683315 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:21Z","lastTransitionTime":"2026-01-30T13:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.786422 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.786480 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.786496 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.786519 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.786607 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:21Z","lastTransitionTime":"2026-01-30T13:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.889151 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.889252 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.889276 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.889305 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.889327 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:21Z","lastTransitionTime":"2026-01-30T13:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.992058 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.992113 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.992123 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.992137 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.992178 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:21Z","lastTransitionTime":"2026-01-30T13:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.083327 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 20:39:11.912472075 +0000 UTC Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.096048 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:05:22 crc kubenswrapper[5039]: E0130 13:05:22.096294 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.096458 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:05:22 crc kubenswrapper[5039]: E0130 13:05:22.096616 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.097941 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.097977 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.097987 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.098003 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.098279 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:22Z","lastTransitionTime":"2026-01-30T13:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.202000 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.202522 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.202596 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.202678 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.202754 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:22Z","lastTransitionTime":"2026-01-30T13:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.305681 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.305733 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.305745 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.305767 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.305785 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:22Z","lastTransitionTime":"2026-01-30T13:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.408433 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.408477 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.408487 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.408502 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.408512 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:22Z","lastTransitionTime":"2026-01-30T13:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.512078 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.512182 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.512198 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.512221 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.512237 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:22Z","lastTransitionTime":"2026-01-30T13:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.614904 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.614974 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.614985 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.615001 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.615050 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:22Z","lastTransitionTime":"2026-01-30T13:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.718366 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.718436 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.718448 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.718465 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.718501 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:22Z","lastTransitionTime":"2026-01-30T13:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.821270 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.821308 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.821317 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.821330 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.821337 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:22Z","lastTransitionTime":"2026-01-30T13:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.924245 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.924283 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.924294 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.924311 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.924322 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:22Z","lastTransitionTime":"2026-01-30T13:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.028351 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.028424 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.028446 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.028475 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.028491 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:23Z","lastTransitionTime":"2026-01-30T13:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.084122 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 02:23:07.222303274 +0000 UTC Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.093483 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.093483 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:05:23 crc kubenswrapper[5039]: E0130 13:05:23.093673 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:05:23 crc kubenswrapper[5039]: E0130 13:05:23.093734 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.130838 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.130899 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.130916 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.130939 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.130957 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:23Z","lastTransitionTime":"2026-01-30T13:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.234214 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.234253 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.234267 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.234285 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.234297 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:23Z","lastTransitionTime":"2026-01-30T13:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.337631 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.337671 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.337685 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.337700 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.337711 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:23Z","lastTransitionTime":"2026-01-30T13:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.441380 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.441761 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.441773 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.441789 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.441802 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:23Z","lastTransitionTime":"2026-01-30T13:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.544079 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.544116 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.544125 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.544140 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.544149 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:23Z","lastTransitionTime":"2026-01-30T13:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.647634 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.647711 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.647743 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.647774 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.647796 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:23Z","lastTransitionTime":"2026-01-30T13:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.750114 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.750156 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.750168 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.750185 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.750199 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:23Z","lastTransitionTime":"2026-01-30T13:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.796738 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.796772 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.796782 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.796796 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.796805 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:23Z","lastTransitionTime":"2026-01-30T13:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:23 crc kubenswrapper[5039]: E0130 13:05:23.808554 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74b4d08-4bc5-44af-a5a8-4734678f5be0\\\",\\\"systemUUID\\\":\\\"fb9e5778-7292-4e17-81ad-f7094f787b74\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:23Z is after 
2025-08-24T17:21:41Z" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.813254 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.813294 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.813304 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.813319 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.813328 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:23Z","lastTransitionTime":"2026-01-30T13:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:23 crc kubenswrapper[5039]: E0130 13:05:23.828368 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74b4d08-4bc5-44af-a5a8-4734678f5be0\\\",\\\"systemUUID\\\":\\\"fb9e5778-7292-4e17-81ad-f7094f787b74\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:23Z is after 
2025-08-24T17:21:41Z" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.832469 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.832521 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.832544 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.832573 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.832609 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:23Z","lastTransitionTime":"2026-01-30T13:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:23 crc kubenswrapper[5039]: E0130 13:05:23.846370 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74b4d08-4bc5-44af-a5a8-4734678f5be0\\\",\\\"systemUUID\\\":\\\"fb9e5778-7292-4e17-81ad-f7094f787b74\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:23Z is after 
2025-08-24T17:21:41Z" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.851222 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.851255 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.851266 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.851281 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.851294 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:23Z","lastTransitionTime":"2026-01-30T13:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:23 crc kubenswrapper[5039]: E0130 13:05:23.864550 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74b4d08-4bc5-44af-a5a8-4734678f5be0\\\",\\\"systemUUID\\\":\\\"fb9e5778-7292-4e17-81ad-f7094f787b74\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:23Z is after 
2025-08-24T17:21:41Z" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.869202 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.869240 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.869253 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.869271 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.869282 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:23Z","lastTransitionTime":"2026-01-30T13:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:23 crc kubenswrapper[5039]: E0130 13:05:23.886582 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74b4d08-4bc5-44af-a5a8-4734678f5be0\\\",\\\"systemUUID\\\":\\\"fb9e5778-7292-4e17-81ad-f7094f787b74\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:23Z is after 
2025-08-24T17:21:41Z" Jan 30 13:05:23 crc kubenswrapper[5039]: E0130 13:05:23.886727 5039 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.888264 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.888312 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.888328 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.888348 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.888361 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:23Z","lastTransitionTime":"2026-01-30T13:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.990486 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.990539 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.990552 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.990573 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.990588 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:23Z","lastTransitionTime":"2026-01-30T13:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.084622 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 06:34:33.434196453 +0000 UTC Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.092588 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:05:24 crc kubenswrapper[5039]: E0130 13:05:24.092768 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.092784 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:05:24 crc kubenswrapper[5039]: E0130 13:05:24.092948 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.093895 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.093998 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.094058 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.094083 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.094100 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:24Z","lastTransitionTime":"2026-01-30T13:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.197268 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.197360 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.197380 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.197409 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.197428 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:24Z","lastTransitionTime":"2026-01-30T13:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.300863 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.300942 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.300961 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.300985 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.301041 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:24Z","lastTransitionTime":"2026-01-30T13:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.405111 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.405160 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.405173 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.405190 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.405200 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:24Z","lastTransitionTime":"2026-01-30T13:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.507562 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.507632 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.507651 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.507677 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.507694 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:24Z","lastTransitionTime":"2026-01-30T13:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.610941 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.610988 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.611001 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.611063 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.611076 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:24Z","lastTransitionTime":"2026-01-30T13:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.713281 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.713328 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.713340 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.713356 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.713367 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:24Z","lastTransitionTime":"2026-01-30T13:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.816497 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.816545 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.816556 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.816574 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.816585 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:24Z","lastTransitionTime":"2026-01-30T13:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.920039 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.920101 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.920114 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.920137 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.920153 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:24Z","lastTransitionTime":"2026-01-30T13:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.024305 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.024354 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.024365 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.024387 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.024400 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:25Z","lastTransitionTime":"2026-01-30T13:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.085375 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 14:39:51.198052413 +0000 UTC Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.092822 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.092897 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:05:25 crc kubenswrapper[5039]: E0130 13:05:25.093051 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:05:25 crc kubenswrapper[5039]: E0130 13:05:25.093263 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.127228 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.127288 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.127306 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.127331 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.127348 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:25Z","lastTransitionTime":"2026-01-30T13:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.230001 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.230077 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.230089 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.230105 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.230117 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:25Z","lastTransitionTime":"2026-01-30T13:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.333141 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.333187 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.333197 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.333216 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.333230 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:25Z","lastTransitionTime":"2026-01-30T13:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.436065 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.436117 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.436128 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.436147 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.436159 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:25Z","lastTransitionTime":"2026-01-30T13:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.538659 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.538734 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.538759 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.538781 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.538793 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:25Z","lastTransitionTime":"2026-01-30T13:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.641789 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.641850 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.641869 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.641890 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.641906 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:25Z","lastTransitionTime":"2026-01-30T13:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.744670 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.744724 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.744758 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.744778 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.744790 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:25Z","lastTransitionTime":"2026-01-30T13:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.847050 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.847098 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.847110 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.847130 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.847142 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:25Z","lastTransitionTime":"2026-01-30T13:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.950154 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.950218 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.950235 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.950257 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.950273 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:25Z","lastTransitionTime":"2026-01-30T13:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.052821 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.052896 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.052915 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.052941 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.052959 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:26Z","lastTransitionTime":"2026-01-30T13:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.086450 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 09:08:09.51616309 +0000 UTC Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.092877 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.092906 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:05:26 crc kubenswrapper[5039]: E0130 13:05:26.093114 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:05:26 crc kubenswrapper[5039]: E0130 13:05:26.093246 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.108247 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.126446 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.145152 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.156126 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.156170 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.156206 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.156227 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.156241 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:26Z","lastTransitionTime":"2026-01-30T13:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.159449 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.178961 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.210948 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2972d2ac57bf2443a67c41cecb0375e17ee2cfc
2fb7eb55e5f3cb04ca79a977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:05:14Z\\\",\\\"message\\\":\\\"er:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 13:05:14.909476 7126 services_controller.go:454] Service openshift-dns-operator/metrics for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0130 13:05:14.909474 7126 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:14Z i\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:05:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-87gqd_openshift-ovn-kubernetes(4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.235828 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb34304077
9ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.250282 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c085b7dbceda7ee340ac27580ace8fe47ea9455d4a64de6260121be5e836693\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.258641 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.258680 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.258690 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.258712 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.258727 5039 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:26Z","lastTransitionTime":"2026-01-30T13:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.263713 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.282628 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3331439a416db5e62e9690b27e35551b83d77ddc684d831438944c6cfa029946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.297815 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.313267 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.326533 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.339942 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.354630 5039 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"555be99e-85b7-4cd5-b799-af8a497e3d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf6527ce76b91a1da5463642354979b412ea735d27646ad10a89b582137849a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79790f23c209de69264dc434520854911adb68f6b6759d28718ed9b7c5a200c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dgrjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.361308 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.361369 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.361391 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.361419 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.361436 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:26Z","lastTransitionTime":"2026-01-30T13:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.367045 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qzx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qzx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.378087 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1755521b-b0f0-4cac-9c76-de79da896bb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3b8aeaaf87c202a0f7f8523bf9d4b56fb714b2e8e5d307a314009694902951\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2054b34a43d100fa8ff3a07a6192760bb37cfb70481475aee514c54350d3532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2054b34a43d100fa8ff3a07a6192760bb37cfb70481475aee514c54350d3532c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.391992 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ad7a684-cb57-41b4-a5bd-26b4c3b32c38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac7f015bf28a751f02a9af5def847fce3573fc9593e07b807c8c99bcb44b923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6571deb6e4d6c4f139455068196209014919a5b9cfa7694c876e5e228722fd72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b30c32411245c98f3cc9db85ae5be6604ca38828709b8fbe7f868c16c642c20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f314809377a112b82513c1b9e73d1b24878af618b3da4c7a95703c9774c8b36c\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f314809377a112b82513c1b9e73d1b24878af618b3da4c7a95703c9774c8b36c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.404648 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3173dc179804ca55df951c63acc29e7179a356b48e7e77276931f44678c8f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:05:07Z\\\",\\\"message\\\":\\\"2026-01-30T13:04:21+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fb496473-2d52-417b-b31e-b06707979b1c\\\\n2026-01-30T13:04:21+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fb496473-2d52-417b-b31e-b06707979b1c to /host/opt/cni/bin/\\\\n2026-01-30T13:04:22Z [verbose] multus-daemon started\\\\n2026-01-30T13:04:22Z [verbose] Readiness Indicator file check\\\\n2026-01-30T13:05:07Z [error] have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:05:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.463823 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.463902 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.463920 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.463940 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.463953 5039 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:26Z","lastTransitionTime":"2026-01-30T13:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.566913 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.566967 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.566982 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.567001 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.567043 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:26Z","lastTransitionTime":"2026-01-30T13:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.669872 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.669931 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.669943 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.669959 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.669971 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:26Z","lastTransitionTime":"2026-01-30T13:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.773516 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.773723 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.773758 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.773790 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.773811 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:26Z","lastTransitionTime":"2026-01-30T13:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.877373 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.877416 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.877427 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.877447 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.877461 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:26Z","lastTransitionTime":"2026-01-30T13:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.980152 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.980198 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.980208 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.980224 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.980235 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:26Z","lastTransitionTime":"2026-01-30T13:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.083823 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.083903 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.083920 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.083943 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.083960 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:27Z","lastTransitionTime":"2026-01-30T13:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.086564 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 22:34:34.896543603 +0000 UTC Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.092976 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:05:27 crc kubenswrapper[5039]: E0130 13:05:27.093112 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.093681 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:05:27 crc kubenswrapper[5039]: E0130 13:05:27.093891 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.186714 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.186752 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.186763 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.186780 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.186795 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:27Z","lastTransitionTime":"2026-01-30T13:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.289885 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.289918 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.289927 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.289941 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.289953 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:27Z","lastTransitionTime":"2026-01-30T13:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.393523 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.393597 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.393622 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.393645 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.393663 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:27Z","lastTransitionTime":"2026-01-30T13:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.497383 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.497458 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.497475 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.497512 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.497531 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:27Z","lastTransitionTime":"2026-01-30T13:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.601186 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.601245 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.601259 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.601283 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.601295 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:27Z","lastTransitionTime":"2026-01-30T13:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.704423 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.704489 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.704500 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.704526 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.704540 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:27Z","lastTransitionTime":"2026-01-30T13:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.806572 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.806668 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.806685 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.806714 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.806733 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:27Z","lastTransitionTime":"2026-01-30T13:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.910600 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.910657 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.910674 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.910696 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.910710 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:27Z","lastTransitionTime":"2026-01-30T13:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.013366 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.013448 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.013467 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.013489 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.013505 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:28Z","lastTransitionTime":"2026-01-30T13:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.087061 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 00:58:27.48841485 +0000 UTC Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.093584 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.094054 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:05:28 crc kubenswrapper[5039]: E0130 13:05:28.094286 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:05:28 crc kubenswrapper[5039]: E0130 13:05:28.094452 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.115803 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.115858 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.115870 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.115894 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.115918 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:28Z","lastTransitionTime":"2026-01-30T13:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.218857 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.218929 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.218942 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.218970 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.218986 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:28Z","lastTransitionTime":"2026-01-30T13:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.322841 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.322938 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.322952 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.322979 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.322993 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:28Z","lastTransitionTime":"2026-01-30T13:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.426295 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.426366 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.426380 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.426403 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.426417 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:28Z","lastTransitionTime":"2026-01-30T13:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.530955 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.531090 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.531121 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.531158 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.531195 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:28Z","lastTransitionTime":"2026-01-30T13:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.634660 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.634715 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.634725 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.634745 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.634757 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:28Z","lastTransitionTime":"2026-01-30T13:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.737565 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.737632 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.737649 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.737674 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.737691 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:28Z","lastTransitionTime":"2026-01-30T13:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.840377 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.840701 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.840795 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.840888 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.840980 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:28Z","lastTransitionTime":"2026-01-30T13:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.943994 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.944160 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.944187 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.944217 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.944239 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:28Z","lastTransitionTime":"2026-01-30T13:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.048075 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.048939 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.049210 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.049436 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.049575 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:29Z","lastTransitionTime":"2026-01-30T13:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.087975 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 06:30:40.441889937 +0000 UTC Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.093394 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.093394 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:05:29 crc kubenswrapper[5039]: E0130 13:05:29.093856 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:05:29 crc kubenswrapper[5039]: E0130 13:05:29.093925 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.152513 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.152589 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.152612 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.152641 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.152664 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:29Z","lastTransitionTime":"2026-01-30T13:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.255488 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.255536 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.255551 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.255574 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.255591 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:29Z","lastTransitionTime":"2026-01-30T13:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.358829 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.358881 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.358900 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.358926 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.358946 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:29Z","lastTransitionTime":"2026-01-30T13:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.462208 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.462597 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.462751 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.462898 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.463068 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:29Z","lastTransitionTime":"2026-01-30T13:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.566660 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.566713 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.566759 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.566782 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.566796 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:29Z","lastTransitionTime":"2026-01-30T13:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.669153 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.669190 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.669199 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.669215 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.669224 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:29Z","lastTransitionTime":"2026-01-30T13:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.773233 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.773280 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.773291 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.773311 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.773322 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:29Z","lastTransitionTime":"2026-01-30T13:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.875839 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.875904 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.875923 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.875947 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.875964 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:29Z","lastTransitionTime":"2026-01-30T13:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.979502 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.979552 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.979566 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.979586 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.979598 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:29Z","lastTransitionTime":"2026-01-30T13:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.082309 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.082341 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.082351 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.082368 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.082379 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:30Z","lastTransitionTime":"2026-01-30T13:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.088954 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 19:01:53.280550136 +0000 UTC Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.093349 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.093469 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:05:30 crc kubenswrapper[5039]: E0130 13:05:30.093692 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:05:30 crc kubenswrapper[5039]: E0130 13:05:30.093820 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.185182 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.185307 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.185371 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.185398 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.185418 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:30Z","lastTransitionTime":"2026-01-30T13:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.287617 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.287672 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.287689 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.287716 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.287734 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:30Z","lastTransitionTime":"2026-01-30T13:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.390088 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.390159 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.390176 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.390214 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.390231 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:30Z","lastTransitionTime":"2026-01-30T13:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.492481 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.492521 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.492535 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.492551 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.492564 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:30Z","lastTransitionTime":"2026-01-30T13:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.595852 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.595929 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.595951 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.595975 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.595996 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:30Z","lastTransitionTime":"2026-01-30T13:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.703655 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.704001 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.704083 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.704108 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.704140 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:30Z","lastTransitionTime":"2026-01-30T13:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.806933 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.806979 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.806991 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.807028 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.807040 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:30Z","lastTransitionTime":"2026-01-30T13:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.909813 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.909864 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.909877 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.909897 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.909912 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:30Z","lastTransitionTime":"2026-01-30T13:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.013072 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.013116 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.013133 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.013155 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.013170 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:31Z","lastTransitionTime":"2026-01-30T13:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.090069 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 19:54:15.675099346 +0000 UTC Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.093406 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:05:31 crc kubenswrapper[5039]: E0130 13:05:31.093570 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.093655 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:05:31 crc kubenswrapper[5039]: E0130 13:05:31.093737 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.094868 5039 scope.go:117] "RemoveContainer" containerID="c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977" Jan 30 13:05:31 crc kubenswrapper[5039]: E0130 13:05:31.095128 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-87gqd_openshift-ovn-kubernetes(4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.115624 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.115834 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.115976 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.116140 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.116278 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:31Z","lastTransitionTime":"2026-01-30T13:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.219608 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.219645 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.219657 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.219673 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.219685 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:31Z","lastTransitionTime":"2026-01-30T13:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.322578 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.322607 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.322619 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.322636 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.322651 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:31Z","lastTransitionTime":"2026-01-30T13:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.425119 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.425179 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.425201 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.425232 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.425255 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:31Z","lastTransitionTime":"2026-01-30T13:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.528734 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.528795 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.528820 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.528849 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.528887 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:31Z","lastTransitionTime":"2026-01-30T13:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.632361 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.632709 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.632869 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.633052 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.633229 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:31Z","lastTransitionTime":"2026-01-30T13:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.736081 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.736193 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.736218 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.736281 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.736302 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:31Z","lastTransitionTime":"2026-01-30T13:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.838410 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.838442 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.838451 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.838470 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.838482 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:31Z","lastTransitionTime":"2026-01-30T13:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.941571 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.941621 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.941637 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.941666 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.941689 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:31Z","lastTransitionTime":"2026-01-30T13:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.043940 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.044001 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.044054 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.044080 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.044098 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:32Z","lastTransitionTime":"2026-01-30T13:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.096361 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:05:32 crc kubenswrapper[5039]: E0130 13:05:32.097383 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.097842 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:05:32 crc kubenswrapper[5039]: E0130 13:05:32.098164 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.098964 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 05:38:52.15273703 +0000 UTC Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.146652 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.146707 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.146723 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.146767 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.146784 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:32Z","lastTransitionTime":"2026-01-30T13:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.250239 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.250594 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.250776 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.250970 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.251210 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:32Z","lastTransitionTime":"2026-01-30T13:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.354092 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.354155 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.354173 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.354200 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.354220 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:32Z","lastTransitionTime":"2026-01-30T13:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.457002 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.457493 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.457716 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.458094 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.458375 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:32Z","lastTransitionTime":"2026-01-30T13:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.561096 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.561150 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.561169 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.561193 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.561209 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:32Z","lastTransitionTime":"2026-01-30T13:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.663775 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.663826 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.663842 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.663866 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.663883 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:32Z","lastTransitionTime":"2026-01-30T13:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.770805 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.770881 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.770899 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.770925 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.771226 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:32Z","lastTransitionTime":"2026-01-30T13:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.875572 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.876114 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.876349 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.876558 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.877194 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:32Z","lastTransitionTime":"2026-01-30T13:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.980718 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.981225 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.981411 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.981590 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.981794 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:32Z","lastTransitionTime":"2026-01-30T13:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.085586 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.085711 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.085748 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.085778 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.085801 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:33Z","lastTransitionTime":"2026-01-30T13:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.093292 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.093330 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:05:33 crc kubenswrapper[5039]: E0130 13:05:33.093966 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:05:33 crc kubenswrapper[5039]: E0130 13:05:33.093812 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.099850 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 16:58:39.978965033 +0000 UTC Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.189268 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.189332 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.189357 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.189387 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.189408 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:33Z","lastTransitionTime":"2026-01-30T13:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.292660 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.292700 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.292711 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.292728 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.292740 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:33Z","lastTransitionTime":"2026-01-30T13:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.395049 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.395400 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.395540 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.395688 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.395818 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:33Z","lastTransitionTime":"2026-01-30T13:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.498883 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.498918 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.498929 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.498944 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.498955 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:33Z","lastTransitionTime":"2026-01-30T13:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.601786 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.601828 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.601841 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.601860 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.601873 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:33Z","lastTransitionTime":"2026-01-30T13:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.703660 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.703697 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.703707 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.703723 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.703734 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:33Z","lastTransitionTime":"2026-01-30T13:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.806565 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.806609 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.806621 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.806635 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.806646 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:33Z","lastTransitionTime":"2026-01-30T13:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.908890 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.908925 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.908937 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.908953 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.908964 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:33Z","lastTransitionTime":"2026-01-30T13:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.012319 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.012496 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.012518 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.012539 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.012553 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:34Z","lastTransitionTime":"2026-01-30T13:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.092813 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:05:34 crc kubenswrapper[5039]: E0130 13:05:34.093300 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.092928 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:05:34 crc kubenswrapper[5039]: E0130 13:05:34.093586 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.100981 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 09:16:36.534538123 +0000 UTC Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.114910 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.114965 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.114989 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.115032 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.115049 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:34Z","lastTransitionTime":"2026-01-30T13:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.132321 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.132582 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.132675 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.132759 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.132850 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:34Z","lastTransitionTime":"2026-01-30T13:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.196377 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-gkdzc"] Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.197418 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gkdzc" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.199202 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.199425 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.200059 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.200146 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.220133 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9b497d7c-7f0a-4577-8fdc-d18abfc6b605-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-gkdzc\" (UID: \"9b497d7c-7f0a-4577-8fdc-d18abfc6b605\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gkdzc" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.220171 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b497d7c-7f0a-4577-8fdc-d18abfc6b605-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-gkdzc\" (UID: \"9b497d7c-7f0a-4577-8fdc-d18abfc6b605\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gkdzc" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.220192 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9b497d7c-7f0a-4577-8fdc-d18abfc6b605-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-gkdzc\" (UID: \"9b497d7c-7f0a-4577-8fdc-d18abfc6b605\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gkdzc" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.220221 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9b497d7c-7f0a-4577-8fdc-d18abfc6b605-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-gkdzc\" (UID: \"9b497d7c-7f0a-4577-8fdc-d18abfc6b605\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gkdzc" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.220258 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9b497d7c-7f0a-4577-8fdc-d18abfc6b605-service-ca\") pod \"cluster-version-operator-5c965bbfc6-gkdzc\" (UID: \"9b497d7c-7f0a-4577-8fdc-d18abfc6b605\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gkdzc" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.254890 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-m8wkh" podStartSLOduration=78.254869513 podStartE2EDuration="1m18.254869513s" podCreationTimestamp="2026-01-30 13:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:05:34.254807771 
+0000 UTC m=+98.915488998" watchObservedRunningTime="2026-01-30 13:05:34.254869513 +0000 UTC m=+98.915550740" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.270842 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" podStartSLOduration=78.270821717 podStartE2EDuration="1m18.270821717s" podCreationTimestamp="2026-01-30 13:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:05:34.270676213 +0000 UTC m=+98.931357450" watchObservedRunningTime="2026-01-30 13:05:34.270821717 +0000 UTC m=+98.931502944" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.286335 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" podStartSLOduration=77.28631567 podStartE2EDuration="1m17.28631567s" podCreationTimestamp="2026-01-30 13:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:05:34.286087364 +0000 UTC m=+98.946768611" watchObservedRunningTime="2026-01-30 13:05:34.28631567 +0000 UTC m=+98.946996897" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.319426 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=26.319406996 podStartE2EDuration="26.319406996s" podCreationTimestamp="2026-01-30 13:05:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:05:34.317980832 +0000 UTC m=+98.978662079" watchObservedRunningTime="2026-01-30 13:05:34.319406996 +0000 UTC m=+98.980088223" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.320789 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9b497d7c-7f0a-4577-8fdc-d18abfc6b605-service-ca\") pod \"cluster-version-operator-5c965bbfc6-gkdzc\" (UID: \"9b497d7c-7f0a-4577-8fdc-d18abfc6b605\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gkdzc" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.320864 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9b497d7c-7f0a-4577-8fdc-d18abfc6b605-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-gkdzc\" (UID: \"9b497d7c-7f0a-4577-8fdc-d18abfc6b605\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gkdzc" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.320890 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b497d7c-7f0a-4577-8fdc-d18abfc6b605-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-gkdzc\" (UID: \"9b497d7c-7f0a-4577-8fdc-d18abfc6b605\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gkdzc" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.320915 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9b497d7c-7f0a-4577-8fdc-d18abfc6b605-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-gkdzc\" (UID: \"9b497d7c-7f0a-4577-8fdc-d18abfc6b605\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gkdzc" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.320951 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9b497d7c-7f0a-4577-8fdc-d18abfc6b605-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-gkdzc\" (UID: \"9b497d7c-7f0a-4577-8fdc-d18abfc6b605\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gkdzc" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.320971 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9b497d7c-7f0a-4577-8fdc-d18abfc6b605-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-gkdzc\" (UID: \"9b497d7c-7f0a-4577-8fdc-d18abfc6b605\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gkdzc" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.321032 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9b497d7c-7f0a-4577-8fdc-d18abfc6b605-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-gkdzc\" (UID: \"9b497d7c-7f0a-4577-8fdc-d18abfc6b605\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gkdzc" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.321801 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9b497d7c-7f0a-4577-8fdc-d18abfc6b605-service-ca\") pod \"cluster-version-operator-5c965bbfc6-gkdzc\" (UID: \"9b497d7c-7f0a-4577-8fdc-d18abfc6b605\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gkdzc" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.331310 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b497d7c-7f0a-4577-8fdc-d18abfc6b605-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-gkdzc\" (UID: \"9b497d7c-7f0a-4577-8fdc-d18abfc6b605\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gkdzc" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.346065 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9b497d7c-7f0a-4577-8fdc-d18abfc6b605-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-gkdzc\" (UID: \"9b497d7c-7f0a-4577-8fdc-d18abfc6b605\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gkdzc" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.350815 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=46.350800352 podStartE2EDuration="46.350800352s" podCreationTimestamp="2026-01-30 13:04:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:05:34.337605254 +0000 UTC m=+98.998286501" watchObservedRunningTime="2026-01-30 13:05:34.350800352 +0000 UTC m=+99.011481579" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.350921 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-rmqgh" podStartSLOduration=78.350917905 podStartE2EDuration="1m18.350917905s" podCreationTimestamp="2026-01-30 13:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:05:34.350175997 +0000 UTC m=+99.010857254" watchObservedRunningTime="2026-01-30 13:05:34.350917905 +0000 UTC m=+99.011599132" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.376458 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podStartSLOduration=78.376438729 podStartE2EDuration="1m18.376438729s" podCreationTimestamp="2026-01-30 13:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:05:34.362169496 +0000 UTC m=+99.022850723" watchObservedRunningTime="2026-01-30 13:05:34.376438729 +0000 UTC m=+99.037119956" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.411736 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-g4tnt" podStartSLOduration=78.411720148 podStartE2EDuration="1m18.411720148s" podCreationTimestamp="2026-01-30 13:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:05:34.411046652 +0000 UTC m=+99.071727899" watchObservedRunningTime="2026-01-30 13:05:34.411720148 +0000 UTC m=+99.072401375" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.489150 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=78.489130802 podStartE2EDuration="1m18.489130802s" podCreationTimestamp="2026-01-30 13:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:05:34.487347919 +0000 UTC m=+99.148029146" watchObservedRunningTime="2026-01-30 13:05:34.489130802 +0000 UTC m=+99.149812029" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.513500 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gkdzc" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.588753 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=77.588726319 podStartE2EDuration="1m17.588726319s" podCreationTimestamp="2026-01-30 13:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:05:34.571403912 +0000 UTC m=+99.232085159" watchObservedRunningTime="2026-01-30 13:05:34.588726319 +0000 UTC m=+99.249407546" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.589172 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=76.589163119 podStartE2EDuration="1m16.589163119s" podCreationTimestamp="2026-01-30 13:04:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:05:34.58791812 +0000 UTC m=+99.248599347" watchObservedRunningTime="2026-01-30 13:05:34.589163119 +0000 UTC m=+99.249844346" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.659823 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gkdzc" event={"ID":"9b497d7c-7f0a-4577-8fdc-d18abfc6b605","Type":"ContainerStarted","Data":"7b123ca859dd9854dc9b5599db7ebbe72a7950d40f95f4b990017ccf1952b699"} Jan 30 13:05:35 crc kubenswrapper[5039]: I0130 13:05:35.093063 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:05:35 crc kubenswrapper[5039]: E0130 13:05:35.093457 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:05:35 crc kubenswrapper[5039]: I0130 13:05:35.093746 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:05:35 crc kubenswrapper[5039]: E0130 13:05:35.093871 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:05:35 crc kubenswrapper[5039]: I0130 13:05:35.101351 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 04:46:00.630653233 +0000 UTC Jan 30 13:05:35 crc kubenswrapper[5039]: I0130 13:05:35.101425 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 30 13:05:35 crc kubenswrapper[5039]: I0130 13:05:35.110113 5039 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 30 13:05:35 crc kubenswrapper[5039]: I0130 13:05:35.348597 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bc3a6c18-bb1a-48e2-bc11-51e442967f6e-metrics-certs\") pod \"network-metrics-daemon-5qzx7\" (UID: \"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\") " pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:05:35 crc kubenswrapper[5039]: E0130 13:05:35.348838 5039 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 13:05:35 crc kubenswrapper[5039]: E0130 13:05:35.348973 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bc3a6c18-bb1a-48e2-bc11-51e442967f6e-metrics-certs podName:bc3a6c18-bb1a-48e2-bc11-51e442967f6e nodeName:}" failed. No retries permitted until 2026-01-30 13:06:39.348944118 +0000 UTC m=+164.009625345 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bc3a6c18-bb1a-48e2-bc11-51e442967f6e-metrics-certs") pod "network-metrics-daemon-5qzx7" (UID: "bc3a6c18-bb1a-48e2-bc11-51e442967f6e") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 13:05:35 crc kubenswrapper[5039]: I0130 13:05:35.665213 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gkdzc" event={"ID":"9b497d7c-7f0a-4577-8fdc-d18abfc6b605","Type":"ContainerStarted","Data":"7d34084c2453cbebbba8b03ad9a6b8c8ffac0e1619019c06a4f6b44d56a8ddd6"} Jan 30 13:05:35 crc kubenswrapper[5039]: I0130 13:05:35.680818 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gkdzc" podStartSLOduration=79.680796696 podStartE2EDuration="1m19.680796696s" podCreationTimestamp="2026-01-30 13:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:05:35.679641518 +0000 UTC m=+100.340322745" watchObservedRunningTime="2026-01-30 13:05:35.680796696 +0000 UTC m=+100.341477923" Jan 30 13:05:36 crc kubenswrapper[5039]: I0130 13:05:36.092767 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:05:36 crc kubenswrapper[5039]: I0130 13:05:36.094588 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:05:36 crc kubenswrapper[5039]: E0130 13:05:36.094780 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:05:36 crc kubenswrapper[5039]: E0130 13:05:36.095052 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:05:37 crc kubenswrapper[5039]: I0130 13:05:37.093069 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:05:37 crc kubenswrapper[5039]: I0130 13:05:37.093098 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:05:37 crc kubenswrapper[5039]: E0130 13:05:37.093222 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:05:37 crc kubenswrapper[5039]: E0130 13:05:37.093366 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:05:38 crc kubenswrapper[5039]: I0130 13:05:38.093576 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:05:38 crc kubenswrapper[5039]: I0130 13:05:38.093620 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:05:38 crc kubenswrapper[5039]: E0130 13:05:38.093749 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:05:38 crc kubenswrapper[5039]: E0130 13:05:38.093889 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:05:39 crc kubenswrapper[5039]: I0130 13:05:39.092829 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:05:39 crc kubenswrapper[5039]: E0130 13:05:39.092931 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:05:39 crc kubenswrapper[5039]: I0130 13:05:39.092829 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:05:39 crc kubenswrapper[5039]: E0130 13:05:39.093038 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:05:40 crc kubenswrapper[5039]: I0130 13:05:40.093029 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:05:40 crc kubenswrapper[5039]: E0130 13:05:40.093239 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:05:40 crc kubenswrapper[5039]: I0130 13:05:40.093330 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:05:40 crc kubenswrapper[5039]: E0130 13:05:40.093504 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:05:41 crc kubenswrapper[5039]: I0130 13:05:41.093342 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:05:41 crc kubenswrapper[5039]: I0130 13:05:41.093410 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:05:41 crc kubenswrapper[5039]: E0130 13:05:41.093511 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:05:41 crc kubenswrapper[5039]: E0130 13:05:41.093598 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:05:42 crc kubenswrapper[5039]: I0130 13:05:42.093328 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:05:42 crc kubenswrapper[5039]: I0130 13:05:42.093412 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:05:42 crc kubenswrapper[5039]: E0130 13:05:42.093522 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:05:42 crc kubenswrapper[5039]: E0130 13:05:42.093640 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:05:43 crc kubenswrapper[5039]: I0130 13:05:43.093068 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:05:43 crc kubenswrapper[5039]: E0130 13:05:43.093194 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:05:43 crc kubenswrapper[5039]: I0130 13:05:43.093068 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:05:43 crc kubenswrapper[5039]: E0130 13:05:43.093396 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:05:44 crc kubenswrapper[5039]: I0130 13:05:44.092806 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:05:44 crc kubenswrapper[5039]: I0130 13:05:44.092858 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:05:44 crc kubenswrapper[5039]: E0130 13:05:44.093377 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:05:44 crc kubenswrapper[5039]: E0130 13:05:44.093522 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:05:45 crc kubenswrapper[5039]: I0130 13:05:45.093304 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:05:45 crc kubenswrapper[5039]: I0130 13:05:45.093310 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:05:45 crc kubenswrapper[5039]: E0130 13:05:45.093487 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:05:45 crc kubenswrapper[5039]: E0130 13:05:45.093699 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:05:45 crc kubenswrapper[5039]: I0130 13:05:45.094939 5039 scope.go:117] "RemoveContainer" containerID="c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977" Jan 30 13:05:45 crc kubenswrapper[5039]: E0130 13:05:45.095242 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-87gqd_openshift-ovn-kubernetes(4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" Jan 30 13:05:46 crc kubenswrapper[5039]: I0130 13:05:46.093069 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:05:46 crc kubenswrapper[5039]: E0130 13:05:46.094428 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:05:46 crc kubenswrapper[5039]: I0130 13:05:46.094501 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:05:46 crc kubenswrapper[5039]: E0130 13:05:46.094660 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:05:47 crc kubenswrapper[5039]: I0130 13:05:47.092790 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:05:47 crc kubenswrapper[5039]: I0130 13:05:47.092853 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:05:47 crc kubenswrapper[5039]: E0130 13:05:47.092964 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:05:47 crc kubenswrapper[5039]: E0130 13:05:47.093145 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:05:48 crc kubenswrapper[5039]: I0130 13:05:48.093493 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:05:48 crc kubenswrapper[5039]: I0130 13:05:48.093505 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:05:48 crc kubenswrapper[5039]: E0130 13:05:48.093693 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:05:48 crc kubenswrapper[5039]: E0130 13:05:48.093837 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:05:49 crc kubenswrapper[5039]: I0130 13:05:49.093474 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:05:49 crc kubenswrapper[5039]: I0130 13:05:49.093576 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:05:49 crc kubenswrapper[5039]: E0130 13:05:49.093952 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:05:49 crc kubenswrapper[5039]: E0130 13:05:49.094231 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:05:50 crc kubenswrapper[5039]: I0130 13:05:50.093247 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:05:50 crc kubenswrapper[5039]: I0130 13:05:50.093340 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:05:50 crc kubenswrapper[5039]: E0130 13:05:50.093411 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:05:50 crc kubenswrapper[5039]: E0130 13:05:50.093591 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:05:51 crc kubenswrapper[5039]: I0130 13:05:51.092908 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:05:51 crc kubenswrapper[5039]: I0130 13:05:51.093060 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:05:51 crc kubenswrapper[5039]: E0130 13:05:51.093173 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:05:51 crc kubenswrapper[5039]: E0130 13:05:51.093288 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:05:52 crc kubenswrapper[5039]: I0130 13:05:52.092853 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:05:52 crc kubenswrapper[5039]: E0130 13:05:52.093081 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:05:52 crc kubenswrapper[5039]: I0130 13:05:52.093228 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:05:52 crc kubenswrapper[5039]: E0130 13:05:52.093344 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:05:53 crc kubenswrapper[5039]: I0130 13:05:53.092888 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:05:53 crc kubenswrapper[5039]: I0130 13:05:53.092973 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:05:53 crc kubenswrapper[5039]: E0130 13:05:53.093065 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:05:53 crc kubenswrapper[5039]: E0130 13:05:53.093163 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:05:54 crc kubenswrapper[5039]: I0130 13:05:54.093391 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:05:54 crc kubenswrapper[5039]: I0130 13:05:54.093433 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:05:54 crc kubenswrapper[5039]: E0130 13:05:54.093608 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:05:54 crc kubenswrapper[5039]: E0130 13:05:54.093743 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:05:54 crc kubenswrapper[5039]: I0130 13:05:54.733827 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rmqgh_81e001d6-9163-47f7-b2b0-b21b2979b869/kube-multus/1.log" Jan 30 13:05:54 crc kubenswrapper[5039]: I0130 13:05:54.734608 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rmqgh_81e001d6-9163-47f7-b2b0-b21b2979b869/kube-multus/0.log" Jan 30 13:05:54 crc kubenswrapper[5039]: I0130 13:05:54.734700 5039 generic.go:334] "Generic (PLEG): container finished" podID="81e001d6-9163-47f7-b2b0-b21b2979b869" containerID="c3173dc179804ca55df951c63acc29e7179a356b48e7e77276931f44678c8f94" exitCode=1 Jan 30 13:05:54 crc kubenswrapper[5039]: I0130 13:05:54.734756 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rmqgh" event={"ID":"81e001d6-9163-47f7-b2b0-b21b2979b869","Type":"ContainerDied","Data":"c3173dc179804ca55df951c63acc29e7179a356b48e7e77276931f44678c8f94"} Jan 30 13:05:54 crc kubenswrapper[5039]: I0130 13:05:54.734810 5039 scope.go:117] "RemoveContainer" containerID="aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22" Jan 30 13:05:54 crc kubenswrapper[5039]: I0130 13:05:54.735611 5039 scope.go:117] "RemoveContainer" containerID="c3173dc179804ca55df951c63acc29e7179a356b48e7e77276931f44678c8f94" Jan 30 13:05:54 crc kubenswrapper[5039]: E0130 13:05:54.735891 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-rmqgh_openshift-multus(81e001d6-9163-47f7-b2b0-b21b2979b869)\"" pod="openshift-multus/multus-rmqgh" podUID="81e001d6-9163-47f7-b2b0-b21b2979b869" Jan 30 13:05:55 crc kubenswrapper[5039]: I0130 13:05:55.104208 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:05:55 crc kubenswrapper[5039]: I0130 13:05:55.104373 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:05:55 crc kubenswrapper[5039]: E0130 13:05:55.104853 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:05:55 crc kubenswrapper[5039]: E0130 13:05:55.105057 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:05:55 crc kubenswrapper[5039]: I0130 13:05:55.741155 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rmqgh_81e001d6-9163-47f7-b2b0-b21b2979b869/kube-multus/1.log" Jan 30 13:05:56 crc kubenswrapper[5039]: E0130 13:05:56.032996 5039 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 30 13:05:56 crc kubenswrapper[5039]: I0130 13:05:56.092677 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:05:56 crc kubenswrapper[5039]: I0130 13:05:56.092802 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:05:56 crc kubenswrapper[5039]: E0130 13:05:56.094027 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:05:56 crc kubenswrapper[5039]: E0130 13:05:56.094179 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:05:56 crc kubenswrapper[5039]: E0130 13:05:56.195215 5039 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 13:05:57 crc kubenswrapper[5039]: I0130 13:05:57.093330 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:05:57 crc kubenswrapper[5039]: E0130 13:05:57.093578 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:05:57 crc kubenswrapper[5039]: I0130 13:05:57.093858 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:05:57 crc kubenswrapper[5039]: E0130 13:05:57.094199 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:05:57 crc kubenswrapper[5039]: I0130 13:05:57.094402 5039 scope.go:117] "RemoveContainer" containerID="c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977" Jan 30 13:05:57 crc kubenswrapper[5039]: I0130 13:05:57.757775 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-87gqd_4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f/ovnkube-controller/3.log" Jan 30 13:05:57 crc kubenswrapper[5039]: I0130 13:05:57.761287 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" event={"ID":"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f","Type":"ContainerStarted","Data":"88b7472f1a788fcddd3204796a9e0b186a8bcfd3b1b88542ec91b052803068c2"} Jan 30 13:05:57 crc kubenswrapper[5039]: I0130 13:05:57.761669 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:05:57 crc kubenswrapper[5039]: I0130 13:05:57.804813 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" podStartSLOduration=100.804793626 podStartE2EDuration="1m40.804793626s" podCreationTimestamp="2026-01-30 13:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:05:57.797532911 +0000 UTC m=+122.458214158" watchObservedRunningTime="2026-01-30 13:05:57.804793626 +0000 UTC m=+122.465474863" Jan 30 13:05:58 crc kubenswrapper[5039]: I0130 13:05:58.093097 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:05:58 crc kubenswrapper[5039]: I0130 13:05:58.093167 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:05:58 crc kubenswrapper[5039]: E0130 13:05:58.093229 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:05:58 crc kubenswrapper[5039]: E0130 13:05:58.093349 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:05:58 crc kubenswrapper[5039]: I0130 13:05:58.192503 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-5qzx7"] Jan 30 13:05:58 crc kubenswrapper[5039]: I0130 13:05:58.192613 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:05:58 crc kubenswrapper[5039]: E0130 13:05:58.192711 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:05:59 crc kubenswrapper[5039]: I0130 13:05:59.092971 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:05:59 crc kubenswrapper[5039]: E0130 13:05:59.093426 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:06:00 crc kubenswrapper[5039]: I0130 13:06:00.093543 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:06:00 crc kubenswrapper[5039]: I0130 13:06:00.093623 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:06:00 crc kubenswrapper[5039]: E0130 13:06:00.093775 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:06:00 crc kubenswrapper[5039]: I0130 13:06:00.093798 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:06:00 crc kubenswrapper[5039]: E0130 13:06:00.093889 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:06:00 crc kubenswrapper[5039]: E0130 13:06:00.093999 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:06:01 crc kubenswrapper[5039]: I0130 13:06:01.093478 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:06:01 crc kubenswrapper[5039]: E0130 13:06:01.093671 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:06:01 crc kubenswrapper[5039]: E0130 13:06:01.197431 5039 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 13:06:02 crc kubenswrapper[5039]: I0130 13:06:02.092681 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:06:02 crc kubenswrapper[5039]: I0130 13:06:02.092729 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:06:02 crc kubenswrapper[5039]: I0130 13:06:02.092690 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:06:02 crc kubenswrapper[5039]: E0130 13:06:02.092894 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:06:02 crc kubenswrapper[5039]: E0130 13:06:02.093076 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:06:02 crc kubenswrapper[5039]: E0130 13:06:02.093212 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:06:03 crc kubenswrapper[5039]: I0130 13:06:03.093366 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:06:03 crc kubenswrapper[5039]: E0130 13:06:03.093547 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:06:04 crc kubenswrapper[5039]: I0130 13:06:04.093562 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:06:04 crc kubenswrapper[5039]: I0130 13:06:04.093616 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:06:04 crc kubenswrapper[5039]: I0130 13:06:04.093693 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:06:04 crc kubenswrapper[5039]: E0130 13:06:04.093759 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:06:04 crc kubenswrapper[5039]: E0130 13:06:04.093886 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:06:04 crc kubenswrapper[5039]: E0130 13:06:04.094052 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:06:05 crc kubenswrapper[5039]: I0130 13:06:05.093362 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:06:05 crc kubenswrapper[5039]: E0130 13:06:05.093579 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:06:06 crc kubenswrapper[5039]: I0130 13:06:06.093309 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:06:06 crc kubenswrapper[5039]: I0130 13:06:06.093309 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:06:06 crc kubenswrapper[5039]: I0130 13:06:06.093413 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:06:06 crc kubenswrapper[5039]: E0130 13:06:06.094189 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:06:06 crc kubenswrapper[5039]: E0130 13:06:06.094387 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:06:06 crc kubenswrapper[5039]: E0130 13:06:06.094472 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:06:06 crc kubenswrapper[5039]: E0130 13:06:06.198248 5039 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 13:06:07 crc kubenswrapper[5039]: I0130 13:06:07.093338 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:06:07 crc kubenswrapper[5039]: E0130 13:06:07.093870 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:06:08 crc kubenswrapper[5039]: I0130 13:06:08.093464 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:06:08 crc kubenswrapper[5039]: I0130 13:06:08.093551 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:06:08 crc kubenswrapper[5039]: E0130 13:06:08.093576 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:06:08 crc kubenswrapper[5039]: I0130 13:06:08.093689 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:06:08 crc kubenswrapper[5039]: E0130 13:06:08.093784 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:06:08 crc kubenswrapper[5039]: E0130 13:06:08.093914 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:06:08 crc kubenswrapper[5039]: I0130 13:06:08.094275 5039 scope.go:117] "RemoveContainer" containerID="c3173dc179804ca55df951c63acc29e7179a356b48e7e77276931f44678c8f94" Jan 30 13:06:08 crc kubenswrapper[5039]: I0130 13:06:08.801969 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rmqgh_81e001d6-9163-47f7-b2b0-b21b2979b869/kube-multus/1.log" Jan 30 13:06:08 crc kubenswrapper[5039]: I0130 13:06:08.802533 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rmqgh" event={"ID":"81e001d6-9163-47f7-b2b0-b21b2979b869","Type":"ContainerStarted","Data":"8a5be779fcfa0c537fbca9096a93ca1979214ab806f591962a6347d5333a9af5"} Jan 30 13:06:09 crc kubenswrapper[5039]: I0130 13:06:09.093380 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:06:09 crc kubenswrapper[5039]: E0130 13:06:09.093633 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:06:10 crc kubenswrapper[5039]: I0130 13:06:10.093295 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:06:10 crc kubenswrapper[5039]: E0130 13:06:10.093553 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:06:10 crc kubenswrapper[5039]: I0130 13:06:10.093575 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:06:10 crc kubenswrapper[5039]: I0130 13:06:10.093625 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:06:10 crc kubenswrapper[5039]: E0130 13:06:10.093803 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:06:10 crc kubenswrapper[5039]: E0130 13:06:10.093905 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:06:11 crc kubenswrapper[5039]: I0130 13:06:11.093146 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:06:11 crc kubenswrapper[5039]: E0130 13:06:11.093304 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:06:12 crc kubenswrapper[5039]: I0130 13:06:12.092494 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:06:12 crc kubenswrapper[5039]: I0130 13:06:12.092572 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:06:12 crc kubenswrapper[5039]: I0130 13:06:12.092662 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:06:12 crc kubenswrapper[5039]: I0130 13:06:12.095765 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 30 13:06:12 crc kubenswrapper[5039]: I0130 13:06:12.096483 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 30 13:06:12 crc kubenswrapper[5039]: I0130 13:06:12.096591 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 30 13:06:12 crc kubenswrapper[5039]: I0130 13:06:12.096634 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 30 13:06:13 crc kubenswrapper[5039]: I0130 13:06:13.093202 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:06:13 crc kubenswrapper[5039]: I0130 13:06:13.097169 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 30 13:06:13 crc kubenswrapper[5039]: I0130 13:06:13.099110 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.255324 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.298583 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.299140 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.304136 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-8cgg4"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.304519 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.304525 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.304674 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cj57h"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.304731 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.304825 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.304924 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.304939 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-cj57h" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.304739 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.305685 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kmjcv"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.305839 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.306042 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.305831 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.306184 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kmjcv" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.306854 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-sdf86"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.307766 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kpjp8"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.307772 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-sdf86" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.308221 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kpjp8" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.309815 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-9pppp"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.310403 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-9pppp" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.310536 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-jt5jk"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.310833 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-jt5jk" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.313257 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-jqdxh"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.313706 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jqdxh" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.315399 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l8bgw"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.315734 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l8bgw" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.325797 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-2cmnb"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.326550 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-2cmnb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.327059 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.345076 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.345822 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.348700 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xlngt"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.353835 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fmcqb"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.354498 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.354844 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.355036 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xlngt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.355369 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.355884 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.358443 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.358607 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.358768 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.358964 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.359141 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.359390 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.359640 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.359657 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.360767 5039 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.360894 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.360991 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.361276 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.366065 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.366677 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.366885 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.366984 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.367779 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.368192 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.368323 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.368930 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.369166 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.369307 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.370091 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-ddw7q"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.370676 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-ddw7q" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.375845 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.380145 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.381839 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-lbtxl"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.381888 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.382321 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.382482 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.382556 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbtxl" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.383403 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.383460 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.383814 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.388093 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-rmmt4"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.388802 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-rmmt4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.389387 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.389651 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.389872 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.390132 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.390276 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.393857 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xpdwb"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.394434 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-8cgg4"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.394468 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gp9qj"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.394824 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gp9qj" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.395282 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xpdwb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.398719 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cj57h"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.400466 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-2crsw"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.401118 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-kqgcq"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.401644 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kqgcq" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.401952 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2crsw" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.402244 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-jt5jk"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.403070 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-sdf86"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.404882 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-7j88g"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.405299 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-7j88g" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.406936 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.407075 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kmjcv"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.407164 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-9pppp"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.407227 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.408999 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496300-mkldc"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.409388 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6x6r"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.409992 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6x6r" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.410022 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496300-mkldc" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.425519 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.425971 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.426284 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.426441 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.432468 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.432675 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.432947 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.433636 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.433825 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.433939 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.434113 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.434150 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.434395 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x76qf"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.434862 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x76qf" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.435245 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.439940 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.440461 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.440676 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.440825 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.440975 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.441147 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.441287 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.441430 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.442461 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.442608 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.442746 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.442977 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.443132 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.445501 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.449675 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pvnrm"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.475825 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.476104 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.476407 5039 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.476686 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.477423 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.479281 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pvnrm" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.479511 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.479904 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.483909 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.486418 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.491637 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.491985 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.492396 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.515793 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.515902 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.517584 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.517904 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.518368 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.518536 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e99acbdd-15f8-43ef-a7fa-70a8f4f8674c-audit-dir\") pod \"apiserver-7bbb656c7d-nqrm5\" (UID: \"e99acbdd-15f8-43ef-a7fa-70a8f4f8674c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.518594 5039 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.519064 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-jplg4"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.519414 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.519528 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1b1ea998-03e2-480d-9f41-4b3bfd50360b-auth-proxy-config\") pod \"machine-approver-56656f9798-jqdxh\" (UID: \"1b1ea998-03e2-480d-9f41-4b3bfd50360b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jqdxh" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.519559 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.519584 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0ace130b-bc4e-4654-8e0b-53722f8df757-trusted-ca\") pod \"console-operator-58897d9998-jt5jk\" (UID: \"0ace130b-bc4e-4654-8e0b-53722f8df757\") " pod="openshift-console-operator/console-operator-58897d9998-jt5jk" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.519600 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2834d334-6df4-46d7-afc6-390cfdcfb22f-serving-cert\") pod \"controller-manager-879f6c89f-cj57h\" (UID: \"2834d334-6df4-46d7-afc6-390cfdcfb22f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cj57h" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.519615 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-audit-policies\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.519630 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1d2b6d3-73a5-4764-bc4c-5688662d85da-config\") pod \"openshift-apiserver-operator-796bbdcf4f-kpjp8\" (UID: \"e1d2b6d3-73a5-4764-bc4c-5688662d85da\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kpjp8" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.519648 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 
30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.519665 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c8a9040d-c9a7-48df-a786-0079713a7cdc-console-serving-cert\") pod \"console-f9d7485db-2cmnb\" (UID: \"c8a9040d-c9a7-48df-a786-0079713a7cdc\") " pod="openshift-console/console-f9d7485db-2cmnb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.519680 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.519695 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e99acbdd-15f8-43ef-a7fa-70a8f4f8674c-serving-cert\") pod \"apiserver-7bbb656c7d-nqrm5\" (UID: \"e99acbdd-15f8-43ef-a7fa-70a8f4f8674c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.519712 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tm97\" (UniqueName: \"kubernetes.io/projected/1b1ea998-03e2-480d-9f41-4b3bfd50360b-kube-api-access-9tm97\") pod \"machine-approver-56656f9798-jqdxh\" (UID: \"1b1ea998-03e2-480d-9f41-4b3bfd50360b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jqdxh" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.519727 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2834d334-6df4-46d7-afc6-390cfdcfb22f-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-cj57h\" (UID: \"2834d334-6df4-46d7-afc6-390cfdcfb22f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cj57h" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.519741 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/56c21f31-0db8-4876-9198-ecf1453378eb-audit\") pod \"apiserver-76f77b778f-8cgg4\" (UID: \"56c21f31-0db8-4876-9198-ecf1453378eb\") " pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.519756 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.519765 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-jplg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.519771 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e99acbdd-15f8-43ef-a7fa-70a8f4f8674c-audit-policies\") pod \"apiserver-7bbb656c7d-nqrm5\" (UID: \"e99acbdd-15f8-43ef-a7fa-70a8f4f8674c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.519938 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd5d4606-2412-4538-8745-dbab7d52cde9-config\") pod \"route-controller-manager-6576b87f9c-kmjcv\" (UID: \"bd5d4606-2412-4538-8745-dbab7d52cde9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kmjcv" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.519955 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1d2b6d3-73a5-4764-bc4c-5688662d85da-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-kpjp8\" (UID: \"e1d2b6d3-73a5-4764-bc4c-5688662d85da\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kpjp8" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.519972 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a1998324-8e8c-49ae-8929-1ecb092efdaf-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-xlngt\" (UID: \"a1998324-8e8c-49ae-8929-1ecb092efdaf\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xlngt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520001 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56c21f31-0db8-4876-9198-ecf1453378eb-serving-cert\") pod \"apiserver-76f77b778f-8cgg4\" (UID: \"56c21f31-0db8-4876-9198-ecf1453378eb\") " pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520036 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520052 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/56c21f31-0db8-4876-9198-ecf1453378eb-etcd-serving-ca\") pod \"apiserver-76f77b778f-8cgg4\" (UID: \"56c21f31-0db8-4876-9198-ecf1453378eb\") " pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520067 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b400290b-0dae-4e47-a15f-f3ae97648175-serving-cert\") pod \"authentication-operator-69f744f599-9pppp\" (UID: \"b400290b-0dae-4e47-a15f-f3ae97648175\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-9pppp" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520082 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfqcd\" (UniqueName: \"kubernetes.io/projected/f117b241-1e37-4603-bb50-aad0ee886758-kube-api-access-hfqcd\") pod \"openshift-config-operator-7777fb866f-lbtxl\" (UID: \"f117b241-1e37-4603-bb50-aad0ee886758\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbtxl" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520101 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520118 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e99acbdd-15f8-43ef-a7fa-70a8f4f8674c-etcd-client\") pod \"apiserver-7bbb656c7d-nqrm5\" (UID: \"e99acbdd-15f8-43ef-a7fa-70a8f4f8674c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520132 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ace130b-bc4e-4654-8e0b-53722f8df757-config\") pod \"console-operator-58897d9998-jt5jk\" (UID: \"0ace130b-bc4e-4654-8e0b-53722f8df757\") " pod="openshift-console-operator/console-operator-58897d9998-jt5jk" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520148 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5g7q8\" (UniqueName: \"kubernetes.io/projected/bd5d4606-2412-4538-8745-dbab7d52cde9-kube-api-access-5g7q8\") pod \"route-controller-manager-6576b87f9c-kmjcv\" (UID: \"bd5d4606-2412-4538-8745-dbab7d52cde9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kmjcv" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520165 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b400290b-0dae-4e47-a15f-f3ae97648175-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-9pppp\" (UID: \"b400290b-0dae-4e47-a15f-f3ae97648175\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9pppp" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520179 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e99acbdd-15f8-43ef-a7fa-70a8f4f8674c-encryption-config\") pod \"apiserver-7bbb656c7d-nqrm5\" (UID: \"e99acbdd-15f8-43ef-a7fa-70a8f4f8674c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520234 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b400290b-0dae-4e47-a15f-f3ae97648175-config\") pod \"authentication-operator-69f744f599-9pppp\" (UID: 
\"b400290b-0dae-4e47-a15f-f3ae97648175\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9pppp" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520267 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/47c88fe5-db06-47c0-bc1f-d072071cb750-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-l8bgw\" (UID: \"47c88fe5-db06-47c0-bc1f-d072071cb750\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l8bgw" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520294 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxzcv\" (UniqueName: \"kubernetes.io/projected/42cf1d0f-3c54-41ad-a9a7-1b9bc1829c21-kube-api-access-lxzcv\") pod \"machine-api-operator-5694c8668f-sdf86\" (UID: \"42cf1d0f-3c54-41ad-a9a7-1b9bc1829c21\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sdf86" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520317 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c8a9040d-c9a7-48df-a786-0079713a7cdc-console-config\") pod \"console-f9d7485db-2cmnb\" (UID: \"c8a9040d-c9a7-48df-a786-0079713a7cdc\") " pod="openshift-console/console-f9d7485db-2cmnb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520337 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-audit-dir\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520364 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520387 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp4b5\" (UniqueName: \"kubernetes.io/projected/af4a4ae0-0967-4331-971c-d7e44b45a031-kube-api-access-vp4b5\") pod \"downloads-7954f5f757-ddw7q\" (UID: \"af4a4ae0-0967-4331-971c-d7e44b45a031\") " pod="openshift-console/downloads-7954f5f757-ddw7q" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520422 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/47c88fe5-db06-47c0-bc1f-d072071cb750-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-l8bgw\" (UID: \"47c88fe5-db06-47c0-bc1f-d072071cb750\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l8bgw" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520445 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: 
\"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520467 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2834d334-6df4-46d7-afc6-390cfdcfb22f-client-ca\") pod \"controller-manager-879f6c89f-cj57h\" (UID: \"2834d334-6df4-46d7-afc6-390cfdcfb22f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cj57h" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520486 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxsvw\" (UniqueName: \"kubernetes.io/projected/2834d334-6df4-46d7-afc6-390cfdcfb22f-kube-api-access-xxsvw\") pod \"controller-manager-879f6c89f-cj57h\" (UID: \"2834d334-6df4-46d7-afc6-390cfdcfb22f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cj57h" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520508 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bd5d4606-2412-4538-8745-dbab7d52cde9-client-ca\") pod \"route-controller-manager-6576b87f9c-kmjcv\" (UID: \"bd5d4606-2412-4538-8745-dbab7d52cde9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kmjcv" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520527 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/56c21f31-0db8-4876-9198-ecf1453378eb-etcd-client\") pod \"apiserver-76f77b778f-8cgg4\" (UID: \"56c21f31-0db8-4876-9198-ecf1453378eb\") " pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520549 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e99acbdd-15f8-43ef-a7fa-70a8f4f8674c-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-nqrm5\" (UID: \"e99acbdd-15f8-43ef-a7fa-70a8f4f8674c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520572 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/56c21f31-0db8-4876-9198-ecf1453378eb-image-import-ca\") pod \"apiserver-76f77b778f-8cgg4\" (UID: \"56c21f31-0db8-4876-9198-ecf1453378eb\") " pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520595 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f117b241-1e37-4603-bb50-aad0ee886758-available-featuregates\") pod \"openshift-config-operator-7777fb866f-lbtxl\" (UID: \"f117b241-1e37-4603-bb50-aad0ee886758\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbtxl" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520618 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1b1ea998-03e2-480d-9f41-4b3bfd50360b-machine-approver-tls\") pod \"machine-approver-56656f9798-jqdxh\" (UID: \"1b1ea998-03e2-480d-9f41-4b3bfd50360b\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jqdxh" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520639 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520658 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520679 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520700 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/56c21f31-0db8-4876-9198-ecf1453378eb-audit-dir\") pod \"apiserver-76f77b778f-8cgg4\" (UID: \"56c21f31-0db8-4876-9198-ecf1453378eb\") " pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520723 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6d55\" (UniqueName: \"kubernetes.io/projected/e1d2b6d3-73a5-4764-bc4c-5688662d85da-kube-api-access-z6d55\") pod \"openshift-apiserver-operator-796bbdcf4f-kpjp8\" (UID: \"e1d2b6d3-73a5-4764-bc4c-5688662d85da\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kpjp8" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520746 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42cf1d0f-3c54-41ad-a9a7-1b9bc1829c21-config\") pod \"machine-api-operator-5694c8668f-sdf86\" (UID: \"42cf1d0f-3c54-41ad-a9a7-1b9bc1829c21\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sdf86" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520766 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/56c21f31-0db8-4876-9198-ecf1453378eb-encryption-config\") pod \"apiserver-76f77b778f-8cgg4\" (UID: \"56c21f31-0db8-4876-9198-ecf1453378eb\") " pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520786 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjqgf\" (UniqueName: \"kubernetes.io/projected/c8a9040d-c9a7-48df-a786-0079713a7cdc-kube-api-access-mjqgf\") pod \"console-f9d7485db-2cmnb\" (UID: \"c8a9040d-c9a7-48df-a786-0079713a7cdc\") " 
pod="openshift-console/console-f9d7485db-2cmnb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520790 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sghjb"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520852 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zqvb\" (UniqueName: \"kubernetes.io/projected/0ace130b-bc4e-4654-8e0b-53722f8df757-kube-api-access-6zqvb\") pod \"console-operator-58897d9998-jt5jk\" (UID: \"0ace130b-bc4e-4654-8e0b-53722f8df757\") " pod="openshift-console-operator/console-operator-58897d9998-jt5jk" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520906 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/56c21f31-0db8-4876-9198-ecf1453378eb-node-pullsecrets\") pod \"apiserver-76f77b778f-8cgg4\" (UID: \"56c21f31-0db8-4876-9198-ecf1453378eb\") " pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520933 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc729\" (UniqueName: \"kubernetes.io/projected/a1998324-8e8c-49ae-8929-1ecb092efdaf-kube-api-access-cc729\") pod \"cluster-samples-operator-665b6dd947-xlngt\" (UID: \"a1998324-8e8c-49ae-8929-1ecb092efdaf\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xlngt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520958 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c8a9040d-c9a7-48df-a786-0079713a7cdc-trusted-ca-bundle\") pod \"console-f9d7485db-2cmnb\" (UID: \"c8a9040d-c9a7-48df-a786-0079713a7cdc\") " pod="openshift-console/console-f9d7485db-2cmnb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520979 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwrxb\" (UniqueName: \"kubernetes.io/projected/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-kube-api-access-dwrxb\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.521034 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b400290b-0dae-4e47-a15f-f3ae97648175-service-ca-bundle\") pod \"authentication-operator-69f744f599-9pppp\" (UID: \"b400290b-0dae-4e47-a15f-f3ae97648175\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9pppp" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.521058 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b1ea998-03e2-480d-9f41-4b3bfd50360b-config\") pod \"machine-approver-56656f9798-jqdxh\" (UID: \"1b1ea998-03e2-480d-9f41-4b3bfd50360b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jqdxh" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.521078 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpvcp\" 
(UniqueName: \"kubernetes.io/projected/e99acbdd-15f8-43ef-a7fa-70a8f4f8674c-kube-api-access-jpvcp\") pod \"apiserver-7bbb656c7d-nqrm5\" (UID: \"e99acbdd-15f8-43ef-a7fa-70a8f4f8674c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.521100 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9zjc\" (UniqueName: \"kubernetes.io/projected/b400290b-0dae-4e47-a15f-f3ae97648175-kube-api-access-f9zjc\") pod \"authentication-operator-69f744f599-9pppp\" (UID: \"b400290b-0dae-4e47-a15f-f3ae97648175\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9pppp" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.521123 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2834d334-6df4-46d7-afc6-390cfdcfb22f-config\") pod \"controller-manager-879f6c89f-cj57h\" (UID: \"2834d334-6df4-46d7-afc6-390cfdcfb22f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cj57h" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.521147 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c8a9040d-c9a7-48df-a786-0079713a7cdc-service-ca\") pod \"console-f9d7485db-2cmnb\" (UID: \"c8a9040d-c9a7-48df-a786-0079713a7cdc\") " pod="openshift-console/console-f9d7485db-2cmnb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.521167 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c8a9040d-c9a7-48df-a786-0079713a7cdc-oauth-serving-cert\") pod \"console-f9d7485db-2cmnb\" (UID: \"c8a9040d-c9a7-48df-a786-0079713a7cdc\") " pod="openshift-console/console-f9d7485db-2cmnb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.521189 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e99acbdd-15f8-43ef-a7fa-70a8f4f8674c-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-nqrm5\" (UID: \"e99acbdd-15f8-43ef-a7fa-70a8f4f8674c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.521210 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/42cf1d0f-3c54-41ad-a9a7-1b9bc1829c21-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-sdf86\" (UID: \"42cf1d0f-3c54-41ad-a9a7-1b9bc1829c21\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sdf86" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.521230 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56c21f31-0db8-4876-9198-ecf1453378eb-config\") pod \"apiserver-76f77b778f-8cgg4\" (UID: \"56c21f31-0db8-4876-9198-ecf1453378eb\") " pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.521257 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f117b241-1e37-4603-bb50-aad0ee886758-serving-cert\") pod \"openshift-config-operator-7777fb866f-lbtxl\" (UID: 
\"f117b241-1e37-4603-bb50-aad0ee886758\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbtxl" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.521291 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24zth\" (UniqueName: \"kubernetes.io/projected/47c88fe5-db06-47c0-bc1f-d072071cb750-kube-api-access-24zth\") pod \"cluster-image-registry-operator-dc59b4c8b-l8bgw\" (UID: \"47c88fe5-db06-47c0-bc1f-d072071cb750\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l8bgw" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.521310 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/42cf1d0f-3c54-41ad-a9a7-1b9bc1829c21-images\") pod \"machine-api-operator-5694c8668f-sdf86\" (UID: \"42cf1d0f-3c54-41ad-a9a7-1b9bc1829c21\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sdf86" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.521329 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c8a9040d-c9a7-48df-a786-0079713a7cdc-console-oauth-config\") pod \"console-f9d7485db-2cmnb\" (UID: \"c8a9040d-c9a7-48df-a786-0079713a7cdc\") " pod="openshift-console/console-f9d7485db-2cmnb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.521336 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sghjb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.521351 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/47c88fe5-db06-47c0-bc1f-d072071cb750-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-l8bgw\" (UID: \"47c88fe5-db06-47c0-bc1f-d072071cb750\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l8bgw" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.521373 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ace130b-bc4e-4654-8e0b-53722f8df757-serving-cert\") pod \"console-operator-58897d9998-jt5jk\" (UID: \"0ace130b-bc4e-4654-8e0b-53722f8df757\") " pod="openshift-console-operator/console-operator-58897d9998-jt5jk" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.521403 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56c21f31-0db8-4876-9198-ecf1453378eb-trusted-ca-bundle\") pod \"apiserver-76f77b778f-8cgg4\" (UID: \"56c21f31-0db8-4876-9198-ecf1453378eb\") " pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.521424 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxjkt\" (UniqueName: \"kubernetes.io/projected/56c21f31-0db8-4876-9198-ecf1453378eb-kube-api-access-lxjkt\") pod \"apiserver-76f77b778f-8cgg4\" (UID: \"56c21f31-0db8-4876-9198-ecf1453378eb\") " pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.521447 5039 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd5d4606-2412-4538-8745-dbab7d52cde9-serving-cert\") pod \"route-controller-manager-6576b87f9c-kmjcv\" (UID: \"bd5d4606-2412-4538-8745-dbab7d52cde9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kmjcv" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.521332 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.522819 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gxpwf"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.523197 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.523412 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.523680 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gxpwf" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.523694 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-tj2zc"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.524716 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tj2zc" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.524840 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.525196 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.531909 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.533439 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.534483 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.537238 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.543770 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.544097 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.545149 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-82nqz"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.545828 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-82nqz" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.545879 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.546661 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.549881 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.551285 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sxg45"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.551823 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sxg45" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.552051 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-gj29c"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.552698 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-gj29c" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.556985 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-v2vm5"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.558265 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.559559 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-tgkf6"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.560881 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tgkf6" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.561083 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.561501 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klzdg"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.561862 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klzdg" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.565262 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4rnbl"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.566562 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4rnbl" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.569139 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nqtvv"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.570312 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nqtvv" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.573962 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l8bgw"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.581484 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-dgvh6"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.582669 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-dgvh6" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.583150 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496300-mkldc"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.583319 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.589075 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-rmmt4"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.590232 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-5t9bm"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.591237 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-5t9bm" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.592271 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-2crsw"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.593248 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gp9qj"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.595481 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-82nqz"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.597297 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xpdwb"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.598470 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kpjp8"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.600619 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.602208 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gxpwf"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.603262 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fmcqb"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.605419 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-5s28q"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.605938 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-m4hks"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.606073 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-5s28q" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.606334 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-m4hks" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.607152 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6x6r"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.608543 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x76qf"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.611044 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-lbtxl"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.615416 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pvnrm"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.617079 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-2cmnb"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.618176 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-7j88g"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.619301 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-ddw7q"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.620719 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.620970 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xlngt"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.622774 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjqgf\" (UniqueName: \"kubernetes.io/projected/c8a9040d-c9a7-48df-a786-0079713a7cdc-kube-api-access-mjqgf\") pod \"console-f9d7485db-2cmnb\" (UID: \"c8a9040d-c9a7-48df-a786-0079713a7cdc\") " pod="openshift-console/console-f9d7485db-2cmnb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.622811 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zqvb\" (UniqueName: \"kubernetes.io/projected/0ace130b-bc4e-4654-8e0b-53722f8df757-kube-api-access-6zqvb\") pod \"console-operator-58897d9998-jt5jk\" (UID: \"0ace130b-bc4e-4654-8e0b-53722f8df757\") " pod="openshift-console-operator/console-operator-58897d9998-jt5jk" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.622843 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-px6j7\" (UniqueName: \"kubernetes.io/projected/2b152375-2709-4538-b651-e8535098af13-kube-api-access-px6j7\") pod \"packageserver-d55dfcdfc-b6x6r\" (UID: \"2b152375-2709-4538-b651-e8535098af13\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6x6r" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.622867 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/56c21f31-0db8-4876-9198-ecf1453378eb-node-pullsecrets\") pod \"apiserver-76f77b778f-8cgg4\" (UID: \"56c21f31-0db8-4876-9198-ecf1453378eb\") " pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.622893 5039 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cc729\" (UniqueName: \"kubernetes.io/projected/a1998324-8e8c-49ae-8929-1ecb092efdaf-kube-api-access-cc729\") pod \"cluster-samples-operator-665b6dd947-xlngt\" (UID: \"a1998324-8e8c-49ae-8929-1ecb092efdaf\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xlngt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.622909 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sghjb"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.622918 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/438eca87-c8a4-401b-8ea4-ff982404ea2d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-x76qf\" (UID: \"438eca87-c8a4-401b-8ea4-ff982404ea2d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x76qf" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.622947 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c8a9040d-c9a7-48df-a786-0079713a7cdc-trusted-ca-bundle\") pod \"console-f9d7485db-2cmnb\" (UID: \"c8a9040d-c9a7-48df-a786-0079713a7cdc\") " pod="openshift-console/console-f9d7485db-2cmnb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.623010 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/56c21f31-0db8-4876-9198-ecf1453378eb-node-pullsecrets\") pod \"apiserver-76f77b778f-8cgg4\" (UID: \"56c21f31-0db8-4876-9198-ecf1453378eb\") " pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.623045 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwrxb\" (UniqueName: \"kubernetes.io/projected/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-kube-api-access-dwrxb\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.623330 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dc6c0c56-d942-4a79-9f24-6e649e17c3f4-auth-proxy-config\") pod \"machine-config-operator-74547568cd-2crsw\" (UID: \"dc6c0c56-d942-4a79-9f24-6e649e17c3f4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2crsw" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.623364 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/501d1ad0-71ea-4bef-8c89-8a68f523e6ec-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gp9qj\" (UID: \"501d1ad0-71ea-4bef-8c89-8a68f523e6ec\") " pod="openshift-marketplace/marketplace-operator-79b997595-gp9qj" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.623431 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b400290b-0dae-4e47-a15f-f3ae97648175-service-ca-bundle\") pod \"authentication-operator-69f744f599-9pppp\" (UID: 
\"b400290b-0dae-4e47-a15f-f3ae97648175\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9pppp" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.623507 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b1ea998-03e2-480d-9f41-4b3bfd50360b-config\") pod \"machine-approver-56656f9798-jqdxh\" (UID: \"1b1ea998-03e2-480d-9f41-4b3bfd50360b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jqdxh" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.623533 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpvcp\" (UniqueName: \"kubernetes.io/projected/e99acbdd-15f8-43ef-a7fa-70a8f4f8674c-kube-api-access-jpvcp\") pod \"apiserver-7bbb656c7d-nqrm5\" (UID: \"e99acbdd-15f8-43ef-a7fa-70a8f4f8674c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.623552 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9zjc\" (UniqueName: \"kubernetes.io/projected/b400290b-0dae-4e47-a15f-f3ae97648175-kube-api-access-f9zjc\") pod \"authentication-operator-69f744f599-9pppp\" (UID: \"b400290b-0dae-4e47-a15f-f3ae97648175\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9pppp" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.623705 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2834d334-6df4-46d7-afc6-390cfdcfb22f-config\") pod \"controller-manager-879f6c89f-cj57h\" (UID: \"2834d334-6df4-46d7-afc6-390cfdcfb22f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cj57h" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.623963 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c8a9040d-c9a7-48df-a786-0079713a7cdc-service-ca\") pod \"console-f9d7485db-2cmnb\" (UID: \"c8a9040d-c9a7-48df-a786-0079713a7cdc\") " pod="openshift-console/console-f9d7485db-2cmnb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.624062 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c8a9040d-c9a7-48df-a786-0079713a7cdc-oauth-serving-cert\") pod \"console-f9d7485db-2cmnb\" (UID: \"c8a9040d-c9a7-48df-a786-0079713a7cdc\") " pod="openshift-console/console-f9d7485db-2cmnb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.624101 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56c21f31-0db8-4876-9198-ecf1453378eb-config\") pod \"apiserver-76f77b778f-8cgg4\" (UID: \"56c21f31-0db8-4876-9198-ecf1453378eb\") " pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.624165 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c8a9040d-c9a7-48df-a786-0079713a7cdc-trusted-ca-bundle\") pod \"console-f9d7485db-2cmnb\" (UID: \"c8a9040d-c9a7-48df-a786-0079713a7cdc\") " pod="openshift-console/console-f9d7485db-2cmnb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.624121 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/e99acbdd-15f8-43ef-a7fa-70a8f4f8674c-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-nqrm5\" (UID: \"e99acbdd-15f8-43ef-a7fa-70a8f4f8674c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.624432 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/42cf1d0f-3c54-41ad-a9a7-1b9bc1829c21-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-sdf86\" (UID: \"42cf1d0f-3c54-41ad-a9a7-1b9bc1829c21\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sdf86" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.624454 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f117b241-1e37-4603-bb50-aad0ee886758-serving-cert\") pod \"openshift-config-operator-7777fb866f-lbtxl\" (UID: \"f117b241-1e37-4603-bb50-aad0ee886758\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbtxl" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.624473 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/501d1ad0-71ea-4bef-8c89-8a68f523e6ec-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gp9qj\" (UID: \"501d1ad0-71ea-4bef-8c89-8a68f523e6ec\") " pod="openshift-marketplace/marketplace-operator-79b997595-gp9qj" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.624477 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b400290b-0dae-4e47-a15f-f3ae97648175-service-ca-bundle\") pod \"authentication-operator-69f744f599-9pppp\" (UID: \"b400290b-0dae-4e47-a15f-f3ae97648175\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9pppp" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.624509 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24zth\" (UniqueName: \"kubernetes.io/projected/47c88fe5-db06-47c0-bc1f-d072071cb750-kube-api-access-24zth\") pod \"cluster-image-registry-operator-dc59b4c8b-l8bgw\" (UID: \"47c88fe5-db06-47c0-bc1f-d072071cb750\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l8bgw" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.624610 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/42cf1d0f-3c54-41ad-a9a7-1b9bc1829c21-images\") pod \"machine-api-operator-5694c8668f-sdf86\" (UID: \"42cf1d0f-3c54-41ad-a9a7-1b9bc1829c21\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sdf86" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.624692 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c8a9040d-c9a7-48df-a786-0079713a7cdc-console-oauth-config\") pod \"console-f9d7485db-2cmnb\" (UID: \"c8a9040d-c9a7-48df-a786-0079713a7cdc\") " pod="openshift-console/console-f9d7485db-2cmnb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.624729 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kc64\" (UniqueName: \"kubernetes.io/projected/18286802-e76b-4e5e-b68b-9ff34405b8ec-kube-api-access-6kc64\") pod 
\"ingress-operator-5b745b69d9-kqgcq\" (UID: \"18286802-e76b-4e5e-b68b-9ff34405b8ec\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kqgcq" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.624800 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dc6c0c56-d942-4a79-9f24-6e649e17c3f4-proxy-tls\") pod \"machine-config-operator-74547568cd-2crsw\" (UID: \"dc6c0c56-d942-4a79-9f24-6e649e17c3f4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2crsw" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.624965 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c8a9040d-c9a7-48df-a786-0079713a7cdc-oauth-serving-cert\") pod \"console-f9d7485db-2cmnb\" (UID: \"c8a9040d-c9a7-48df-a786-0079713a7cdc\") " pod="openshift-console/console-f9d7485db-2cmnb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.625034 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-kqgcq"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.625071 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/47c88fe5-db06-47c0-bc1f-d072071cb750-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-l8bgw\" (UID: \"47c88fe5-db06-47c0-bc1f-d072071cb750\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l8bgw" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.625135 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c8a9040d-c9a7-48df-a786-0079713a7cdc-service-ca\") pod \"console-f9d7485db-2cmnb\" (UID: \"c8a9040d-c9a7-48df-a786-0079713a7cdc\") " pod="openshift-console/console-f9d7485db-2cmnb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.625191 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ace130b-bc4e-4654-8e0b-53722f8df757-serving-cert\") pod \"console-operator-58897d9998-jt5jk\" (UID: \"0ace130b-bc4e-4654-8e0b-53722f8df757\") " pod="openshift-console-operator/console-operator-58897d9998-jt5jk" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.625214 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/18286802-e76b-4e5e-b68b-9ff34405b8ec-trusted-ca\") pod \"ingress-operator-5b745b69d9-kqgcq\" (UID: \"18286802-e76b-4e5e-b68b-9ff34405b8ec\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kqgcq" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.625293 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2834d334-6df4-46d7-afc6-390cfdcfb22f-config\") pod \"controller-manager-879f6c89f-cj57h\" (UID: \"2834d334-6df4-46d7-afc6-390cfdcfb22f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cj57h" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.626258 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56c21f31-0db8-4876-9198-ecf1453378eb-config\") pod \"apiserver-76f77b778f-8cgg4\" (UID: \"56c21f31-0db8-4876-9198-ecf1453378eb\") 
" pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.626269 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4rnbl"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.626346 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/42cf1d0f-3c54-41ad-a9a7-1b9bc1829c21-images\") pod \"machine-api-operator-5694c8668f-sdf86\" (UID: \"42cf1d0f-3c54-41ad-a9a7-1b9bc1829c21\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sdf86" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.626751 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56c21f31-0db8-4876-9198-ecf1453378eb-trusted-ca-bundle\") pod \"apiserver-76f77b778f-8cgg4\" (UID: \"56c21f31-0db8-4876-9198-ecf1453378eb\") " pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.626815 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxjkt\" (UniqueName: \"kubernetes.io/projected/56c21f31-0db8-4876-9198-ecf1453378eb-kube-api-access-lxjkt\") pod \"apiserver-76f77b778f-8cgg4\" (UID: \"56c21f31-0db8-4876-9198-ecf1453378eb\") " pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.626844 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dc6c0c56-d942-4a79-9f24-6e649e17c3f4-images\") pod \"machine-config-operator-74547568cd-2crsw\" (UID: \"dc6c0c56-d942-4a79-9f24-6e649e17c3f4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2crsw" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.626963 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c-secret-volume\") pod \"collect-profiles-29496300-mkldc\" (UID: \"4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496300-mkldc" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.626999 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/438eca87-c8a4-401b-8ea4-ff982404ea2d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-x76qf\" (UID: \"438eca87-c8a4-401b-8ea4-ff982404ea2d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x76qf" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.627084 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd5d4606-2412-4538-8745-dbab7d52cde9-serving-cert\") pod \"route-controller-manager-6576b87f9c-kmjcv\" (UID: \"bd5d4606-2412-4538-8745-dbab7d52cde9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kmjcv" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.627117 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/502c4d4e-b64b-4245-b4f2-22937a1e54ae-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-xpdwb\" (UID: \"502c4d4e-b64b-4245-b4f2-22937a1e54ae\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xpdwb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.627146 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e99acbdd-15f8-43ef-a7fa-70a8f4f8674c-audit-dir\") pod \"apiserver-7bbb656c7d-nqrm5\" (UID: \"e99acbdd-15f8-43ef-a7fa-70a8f4f8674c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.627208 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ae6119e4-926e-4118-a675-e37898d995f6-signing-key\") pod \"service-ca-9c57cc56f-7j88g\" (UID: \"ae6119e4-926e-4118-a675-e37898d995f6\") " pod="openshift-service-ca/service-ca-9c57cc56f-7j88g" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.627230 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b97e6ebb-d4e8-4bbc-ac4e-98ba0128aa1d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-pvnrm\" (UID: \"b97e6ebb-d4e8-4bbc-ac4e-98ba0128aa1d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pvnrm" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.627255 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1b1ea998-03e2-480d-9f41-4b3bfd50360b-auth-proxy-config\") pod \"machine-approver-56656f9798-jqdxh\" (UID: \"1b1ea998-03e2-480d-9f41-4b3bfd50360b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jqdxh" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.627283 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.627311 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e99acbdd-15f8-43ef-a7fa-70a8f4f8674c-audit-dir\") pod \"apiserver-7bbb656c7d-nqrm5\" (UID: \"e99acbdd-15f8-43ef-a7fa-70a8f4f8674c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.627353 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e99acbdd-15f8-43ef-a7fa-70a8f4f8674c-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-nqrm5\" (UID: \"e99acbdd-15f8-43ef-a7fa-70a8f4f8674c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.627900 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klzdg"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.628086 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" 
(UniqueName: \"kubernetes.io/configmap/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-audit-policies\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.628126 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0ace130b-bc4e-4654-8e0b-53722f8df757-trusted-ca\") pod \"console-operator-58897d9998-jt5jk\" (UID: \"0ace130b-bc4e-4654-8e0b-53722f8df757\") " pod="openshift-console-operator/console-operator-58897d9998-jt5jk" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.628186 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clbrb\" (UniqueName: \"kubernetes.io/projected/dc6c0c56-d942-4a79-9f24-6e649e17c3f4-kube-api-access-clbrb\") pod \"machine-config-operator-74547568cd-2crsw\" (UID: \"dc6c0c56-d942-4a79-9f24-6e649e17c3f4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2crsw" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.628215 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2834d334-6df4-46d7-afc6-390cfdcfb22f-serving-cert\") pod \"controller-manager-879f6c89f-cj57h\" (UID: \"2834d334-6df4-46d7-afc6-390cfdcfb22f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cj57h" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.628238 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c8a9040d-c9a7-48df-a786-0079713a7cdc-console-serving-cert\") pod \"console-f9d7485db-2cmnb\" (UID: \"c8a9040d-c9a7-48df-a786-0079713a7cdc\") " pod="openshift-console/console-f9d7485db-2cmnb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.628249 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56c21f31-0db8-4876-9198-ecf1453378eb-trusted-ca-bundle\") pod \"apiserver-76f77b778f-8cgg4\" (UID: \"56c21f31-0db8-4876-9198-ecf1453378eb\") " pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.628267 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b1ea998-03e2-480d-9f41-4b3bfd50360b-config\") pod \"machine-approver-56656f9798-jqdxh\" (UID: \"1b1ea998-03e2-480d-9f41-4b3bfd50360b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jqdxh" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.628297 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1d2b6d3-73a5-4764-bc4c-5688662d85da-config\") pod \"openshift-apiserver-operator-796bbdcf4f-kpjp8\" (UID: \"e1d2b6d3-73a5-4764-bc4c-5688662d85da\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kpjp8" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.628322 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.628347 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/18286802-e76b-4e5e-b68b-9ff34405b8ec-metrics-tls\") pod \"ingress-operator-5b745b69d9-kqgcq\" (UID: \"18286802-e76b-4e5e-b68b-9ff34405b8ec\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kqgcq" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.628508 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/18286802-e76b-4e5e-b68b-9ff34405b8ec-bound-sa-token\") pod \"ingress-operator-5b745b69d9-kqgcq\" (UID: \"18286802-e76b-4e5e-b68b-9ff34405b8ec\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kqgcq" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.628532 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8955599f-bac3-4f0d-a9d2-0758c098b508-metrics-tls\") pod \"dns-operator-744455d44c-rmmt4\" (UID: \"8955599f-bac3-4f0d-a9d2-0758c098b508\") " pod="openshift-dns-operator/dns-operator-744455d44c-rmmt4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.628574 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tm97\" (UniqueName: \"kubernetes.io/projected/1b1ea998-03e2-480d-9f41-4b3bfd50360b-kube-api-access-9tm97\") pod \"machine-approver-56656f9798-jqdxh\" (UID: \"1b1ea998-03e2-480d-9f41-4b3bfd50360b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jqdxh" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.628593 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.628611 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e99acbdd-15f8-43ef-a7fa-70a8f4f8674c-serving-cert\") pod \"apiserver-7bbb656c7d-nqrm5\" (UID: \"e99acbdd-15f8-43ef-a7fa-70a8f4f8674c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.628629 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1b1ea998-03e2-480d-9f41-4b3bfd50360b-auth-proxy-config\") pod \"machine-approver-56656f9798-jqdxh\" (UID: \"1b1ea998-03e2-480d-9f41-4b3bfd50360b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jqdxh" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.629061 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2834d334-6df4-46d7-afc6-390cfdcfb22f-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-cj57h\" (UID: \"2834d334-6df4-46d7-afc6-390cfdcfb22f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cj57h" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.629163 5039 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/56c21f31-0db8-4876-9198-ecf1453378eb-audit\") pod \"apiserver-76f77b778f-8cgg4\" (UID: \"56c21f31-0db8-4876-9198-ecf1453378eb\") " pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.629284 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.629335 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e99acbdd-15f8-43ef-a7fa-70a8f4f8674c-audit-policies\") pod \"apiserver-7bbb656c7d-nqrm5\" (UID: \"e99acbdd-15f8-43ef-a7fa-70a8f4f8674c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.629401 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvstf\" (UniqueName: \"kubernetes.io/projected/4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c-kube-api-access-pvstf\") pod \"collect-profiles-29496300-mkldc\" (UID: \"4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496300-mkldc" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.629427 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b97e6ebb-d4e8-4bbc-ac4e-98ba0128aa1d-config\") pod \"kube-apiserver-operator-766d6c64bb-pvnrm\" (UID: \"b97e6ebb-d4e8-4bbc-ac4e-98ba0128aa1d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pvnrm" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.629397 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1d2b6d3-73a5-4764-bc4c-5688662d85da-config\") pod \"openshift-apiserver-operator-796bbdcf4f-kpjp8\" (UID: \"e1d2b6d3-73a5-4764-bc4c-5688662d85da\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kpjp8" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.629592 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-audit-policies\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.630195 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e99acbdd-15f8-43ef-a7fa-70a8f4f8674c-audit-policies\") pod \"apiserver-7bbb656c7d-nqrm5\" (UID: \"e99acbdd-15f8-43ef-a7fa-70a8f4f8674c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.630662 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0ace130b-bc4e-4654-8e0b-53722f8df757-trusted-ca\") pod \"console-operator-58897d9998-jt5jk\" (UID: 
\"0ace130b-bc4e-4654-8e0b-53722f8df757\") " pod="openshift-console-operator/console-operator-58897d9998-jt5jk" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.630749 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2834d334-6df4-46d7-afc6-390cfdcfb22f-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-cj57h\" (UID: \"2834d334-6df4-46d7-afc6-390cfdcfb22f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cj57h" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.631236 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.631345 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/56c21f31-0db8-4876-9198-ecf1453378eb-audit\") pod \"apiserver-76f77b778f-8cgg4\" (UID: \"56c21f31-0db8-4876-9198-ecf1453378eb\") " pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.631399 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd5d4606-2412-4538-8745-dbab7d52cde9-config\") pod \"route-controller-manager-6576b87f9c-kmjcv\" (UID: \"bd5d4606-2412-4538-8745-dbab7d52cde9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kmjcv" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.631475 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1d2b6d3-73a5-4764-bc4c-5688662d85da-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-kpjp8\" (UID: \"e1d2b6d3-73a5-4764-bc4c-5688662d85da\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kpjp8" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.631543 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a1998324-8e8c-49ae-8929-1ecb092efdaf-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-xlngt\" (UID: \"a1998324-8e8c-49ae-8929-1ecb092efdaf\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xlngt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.631580 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j9k9\" (UniqueName: \"kubernetes.io/projected/8955599f-bac3-4f0d-a9d2-0758c098b508-kube-api-access-7j9k9\") pod \"dns-operator-744455d44c-rmmt4\" (UID: \"8955599f-bac3-4f0d-a9d2-0758c098b508\") " pod="openshift-dns-operator/dns-operator-744455d44c-rmmt4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.631718 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/56c21f31-0db8-4876-9198-ecf1453378eb-etcd-serving-ca\") pod \"apiserver-76f77b778f-8cgg4\" (UID: \"56c21f31-0db8-4876-9198-ecf1453378eb\") " pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.631757 5039 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56c21f31-0db8-4876-9198-ecf1453378eb-serving-cert\") pod \"apiserver-76f77b778f-8cgg4\" (UID: \"56c21f31-0db8-4876-9198-ecf1453378eb\") " pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.632033 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c8a9040d-c9a7-48df-a786-0079713a7cdc-console-oauth-config\") pod \"console-f9d7485db-2cmnb\" (UID: \"c8a9040d-c9a7-48df-a786-0079713a7cdc\") " pod="openshift-console/console-f9d7485db-2cmnb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.632100 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.632303 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/56c21f31-0db8-4876-9198-ecf1453378eb-etcd-serving-ca\") pod \"apiserver-76f77b778f-8cgg4\" (UID: \"56c21f31-0db8-4876-9198-ecf1453378eb\") " pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.632344 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b400290b-0dae-4e47-a15f-f3ae97648175-serving-cert\") pod \"authentication-operator-69f744f599-9pppp\" (UID: \"b400290b-0dae-4e47-a15f-f3ae97648175\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9pppp" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.632389 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfqcd\" (UniqueName: \"kubernetes.io/projected/f117b241-1e37-4603-bb50-aad0ee886758-kube-api-access-hfqcd\") pod \"openshift-config-operator-7777fb866f-lbtxl\" (UID: \"f117b241-1e37-4603-bb50-aad0ee886758\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbtxl" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.632422 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/2b152375-2709-4538-b651-e8535098af13-tmpfs\") pod \"packageserver-d55dfcdfc-b6x6r\" (UID: \"2b152375-2709-4538-b651-e8535098af13\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6x6r" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.632451 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5g7q8\" (UniqueName: \"kubernetes.io/projected/bd5d4606-2412-4538-8745-dbab7d52cde9-kube-api-access-5g7q8\") pod \"route-controller-manager-6576b87f9c-kmjcv\" (UID: \"bd5d4606-2412-4538-8745-dbab7d52cde9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kmjcv" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.632474 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.632501 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e99acbdd-15f8-43ef-a7fa-70a8f4f8674c-etcd-client\") pod \"apiserver-7bbb656c7d-nqrm5\" (UID: \"e99acbdd-15f8-43ef-a7fa-70a8f4f8674c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.632523 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ace130b-bc4e-4654-8e0b-53722f8df757-config\") pod \"console-operator-58897d9998-jt5jk\" (UID: \"0ace130b-bc4e-4654-8e0b-53722f8df757\") " pod="openshift-console-operator/console-operator-58897d9998-jt5jk" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.632530 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd5d4606-2412-4538-8745-dbab7d52cde9-config\") pod \"route-controller-manager-6576b87f9c-kmjcv\" (UID: \"bd5d4606-2412-4538-8745-dbab7d52cde9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kmjcv" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.632547 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b97e6ebb-d4e8-4bbc-ac4e-98ba0128aa1d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-pvnrm\" (UID: \"b97e6ebb-d4e8-4bbc-ac4e-98ba0128aa1d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pvnrm" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.632579 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e99acbdd-15f8-43ef-a7fa-70a8f4f8674c-encryption-config\") pod \"apiserver-7bbb656c7d-nqrm5\" (UID: \"e99acbdd-15f8-43ef-a7fa-70a8f4f8674c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.632603 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kw2r\" (UniqueName: \"kubernetes.io/projected/502c4d4e-b64b-4245-b4f2-22937a1e54ae-kube-api-access-5kw2r\") pod \"package-server-manager-789f6589d5-xpdwb\" (UID: \"502c4d4e-b64b-4245-b4f2-22937a1e54ae\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xpdwb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.632628 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b400290b-0dae-4e47-a15f-f3ae97648175-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-9pppp\" (UID: \"b400290b-0dae-4e47-a15f-f3ae97648175\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9pppp" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.632660 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b400290b-0dae-4e47-a15f-f3ae97648175-config\") pod \"authentication-operator-69f744f599-9pppp\" (UID: 
\"b400290b-0dae-4e47-a15f-f3ae97648175\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9pppp" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.632923 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.632991 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.633050 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/47c88fe5-db06-47c0-bc1f-d072071cb750-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-l8bgw\" (UID: \"47c88fe5-db06-47c0-bc1f-d072071cb750\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l8bgw" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.633072 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-audit-dir\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.633094 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2b152375-2709-4538-b651-e8535098af13-apiservice-cert\") pod \"packageserver-d55dfcdfc-b6x6r\" (UID: \"2b152375-2709-4538-b651-e8535098af13\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6x6r" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.633116 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxzcv\" (UniqueName: \"kubernetes.io/projected/42cf1d0f-3c54-41ad-a9a7-1b9bc1829c21-kube-api-access-lxzcv\") pod \"machine-api-operator-5694c8668f-sdf86\" (UID: \"42cf1d0f-3c54-41ad-a9a7-1b9bc1829c21\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sdf86" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.633164 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c8a9040d-c9a7-48df-a786-0079713a7cdc-console-config\") pod \"console-f9d7485db-2cmnb\" (UID: \"c8a9040d-c9a7-48df-a786-0079713a7cdc\") " pod="openshift-console/console-f9d7485db-2cmnb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.633187 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 
13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.633206 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vp4b5\" (UniqueName: \"kubernetes.io/projected/af4a4ae0-0967-4331-971c-d7e44b45a031-kube-api-access-vp4b5\") pod \"downloads-7954f5f757-ddw7q\" (UID: \"af4a4ae0-0967-4331-971c-d7e44b45a031\") " pod="openshift-console/downloads-7954f5f757-ddw7q" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.633238 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2834d334-6df4-46d7-afc6-390cfdcfb22f-client-ca\") pod \"controller-manager-879f6c89f-cj57h\" (UID: \"2834d334-6df4-46d7-afc6-390cfdcfb22f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cj57h" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.633264 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/47c88fe5-db06-47c0-bc1f-d072071cb750-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-l8bgw\" (UID: \"47c88fe5-db06-47c0-bc1f-d072071cb750\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l8bgw" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.633315 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/47c88fe5-db06-47c0-bc1f-d072071cb750-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-l8bgw\" (UID: \"47c88fe5-db06-47c0-bc1f-d072071cb750\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l8bgw" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.633337 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.633358 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxsvw\" (UniqueName: \"kubernetes.io/projected/2834d334-6df4-46d7-afc6-390cfdcfb22f-kube-api-access-xxsvw\") pod \"controller-manager-879f6c89f-cj57h\" (UID: \"2834d334-6df4-46d7-afc6-390cfdcfb22f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cj57h" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.633375 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bd5d4606-2412-4538-8745-dbab7d52cde9-client-ca\") pod \"route-controller-manager-6576b87f9c-kmjcv\" (UID: \"bd5d4606-2412-4538-8745-dbab7d52cde9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kmjcv" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.633393 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/56c21f31-0db8-4876-9198-ecf1453378eb-etcd-client\") pod \"apiserver-76f77b778f-8cgg4\" (UID: \"56c21f31-0db8-4876-9198-ecf1453378eb\") " pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.633410 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e99acbdd-15f8-43ef-a7fa-70a8f4f8674c-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-nqrm5\" (UID: \"e99acbdd-15f8-43ef-a7fa-70a8f4f8674c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.633433 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv5vh\" (UniqueName: \"kubernetes.io/projected/ae6119e4-926e-4118-a675-e37898d995f6-kube-api-access-fv5vh\") pod \"service-ca-9c57cc56f-7j88g\" (UID: \"ae6119e4-926e-4118-a675-e37898d995f6\") " pod="openshift-service-ca/service-ca-9c57cc56f-7j88g" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.633466 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/438eca87-c8a4-401b-8ea4-ff982404ea2d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-x76qf\" (UID: \"438eca87-c8a4-401b-8ea4-ff982404ea2d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x76qf" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.633575 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/56c21f31-0db8-4876-9198-ecf1453378eb-image-import-ca\") pod \"apiserver-76f77b778f-8cgg4\" (UID: \"56c21f31-0db8-4876-9198-ecf1453378eb\") " pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.633597 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzssd\" (UniqueName: \"kubernetes.io/projected/501d1ad0-71ea-4bef-8c89-8a68f523e6ec-kube-api-access-mzssd\") pod \"marketplace-operator-79b997595-gp9qj\" (UID: \"501d1ad0-71ea-4bef-8c89-8a68f523e6ec\") " pod="openshift-marketplace/marketplace-operator-79b997595-gp9qj" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.633610 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b400290b-0dae-4e47-a15f-f3ae97648175-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-9pppp\" (UID: \"b400290b-0dae-4e47-a15f-f3ae97648175\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9pppp" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.633622 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f117b241-1e37-4603-bb50-aad0ee886758-available-featuregates\") pod \"openshift-config-operator-7777fb866f-lbtxl\" (UID: \"f117b241-1e37-4603-bb50-aad0ee886758\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbtxl" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.633640 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c-config-volume\") pod \"collect-profiles-29496300-mkldc\" (UID: \"4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496300-mkldc" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.633661 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/1b1ea998-03e2-480d-9f41-4b3bfd50360b-machine-approver-tls\") pod \"machine-approver-56656f9798-jqdxh\" (UID: \"1b1ea998-03e2-480d-9f41-4b3bfd50360b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jqdxh" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.633679 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.633696 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.633701 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ace130b-bc4e-4654-8e0b-53722f8df757-serving-cert\") pod \"console-operator-58897d9998-jt5jk\" (UID: \"0ace130b-bc4e-4654-8e0b-53722f8df757\") " pod="openshift-console-operator/console-operator-58897d9998-jt5jk" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.633711 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.633730 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ae6119e4-926e-4118-a675-e37898d995f6-signing-cabundle\") pod \"service-ca-9c57cc56f-7j88g\" (UID: \"ae6119e4-926e-4118-a675-e37898d995f6\") " pod="openshift-service-ca/service-ca-9c57cc56f-7j88g" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.633732 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-audit-dir\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.634302 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1d2b6d3-73a5-4764-bc4c-5688662d85da-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-kpjp8\" (UID: \"e1d2b6d3-73a5-4764-bc4c-5688662d85da\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kpjp8" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.634423 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b400290b-0dae-4e47-a15f-f3ae97648175-config\") pod \"authentication-operator-69f744f599-9pppp\" (UID: \"b400290b-0dae-4e47-a15f-f3ae97648175\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-9pppp" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.634747 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f117b241-1e37-4603-bb50-aad0ee886758-available-featuregates\") pod \"openshift-config-operator-7777fb866f-lbtxl\" (UID: \"f117b241-1e37-4603-bb50-aad0ee886758\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbtxl" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.634925 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c8a9040d-c9a7-48df-a786-0079713a7cdc-console-config\") pod \"console-f9d7485db-2cmnb\" (UID: \"c8a9040d-c9a7-48df-a786-0079713a7cdc\") " pod="openshift-console/console-f9d7485db-2cmnb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.634967 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/47c88fe5-db06-47c0-bc1f-d072071cb750-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-l8bgw\" (UID: \"47c88fe5-db06-47c0-bc1f-d072071cb750\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l8bgw" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.635074 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-tj2zc"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.633571 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e99acbdd-15f8-43ef-a7fa-70a8f4f8674c-serving-cert\") pod \"apiserver-7bbb656c7d-nqrm5\" (UID: \"e99acbdd-15f8-43ef-a7fa-70a8f4f8674c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.635518 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2834d334-6df4-46d7-afc6-390cfdcfb22f-client-ca\") pod \"controller-manager-879f6c89f-cj57h\" (UID: \"2834d334-6df4-46d7-afc6-390cfdcfb22f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cj57h" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.635570 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ace130b-bc4e-4654-8e0b-53722f8df757-config\") pod \"console-operator-58897d9998-jt5jk\" (UID: \"0ace130b-bc4e-4654-8e0b-53722f8df757\") " pod="openshift-console-operator/console-operator-58897d9998-jt5jk" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.635584 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e99acbdd-15f8-43ef-a7fa-70a8f4f8674c-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-nqrm5\" (UID: \"e99acbdd-15f8-43ef-a7fa-70a8f4f8674c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.635660 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bd5d4606-2412-4538-8745-dbab7d52cde9-client-ca\") pod \"route-controller-manager-6576b87f9c-kmjcv\" (UID: \"bd5d4606-2412-4538-8745-dbab7d52cde9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kmjcv" Jan 30 13:06:15 crc 
kubenswrapper[5039]: I0130 13:06:15.635772 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2b152375-2709-4538-b651-e8535098af13-webhook-cert\") pod \"packageserver-d55dfcdfc-b6x6r\" (UID: \"2b152375-2709-4538-b651-e8535098af13\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6x6r" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.635822 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/56c21f31-0db8-4876-9198-ecf1453378eb-encryption-config\") pod \"apiserver-76f77b778f-8cgg4\" (UID: \"56c21f31-0db8-4876-9198-ecf1453378eb\") " pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.635844 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/56c21f31-0db8-4876-9198-ecf1453378eb-audit-dir\") pod \"apiserver-76f77b778f-8cgg4\" (UID: \"56c21f31-0db8-4876-9198-ecf1453378eb\") " pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.635848 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.635909 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6d55\" (UniqueName: \"kubernetes.io/projected/e1d2b6d3-73a5-4764-bc4c-5688662d85da-kube-api-access-z6d55\") pod \"openshift-apiserver-operator-796bbdcf4f-kpjp8\" (UID: \"e1d2b6d3-73a5-4764-bc4c-5688662d85da\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kpjp8" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.635921 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/56c21f31-0db8-4876-9198-ecf1453378eb-audit-dir\") pod \"apiserver-76f77b778f-8cgg4\" (UID: \"56c21f31-0db8-4876-9198-ecf1453378eb\") " pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.635955 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a1998324-8e8c-49ae-8929-1ecb092efdaf-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-xlngt\" (UID: \"a1998324-8e8c-49ae-8929-1ecb092efdaf\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xlngt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.636057 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.636115 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/42cf1d0f-3c54-41ad-a9a7-1b9bc1829c21-config\") pod \"machine-api-operator-5694c8668f-sdf86\" (UID: \"42cf1d0f-3c54-41ad-a9a7-1b9bc1829c21\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sdf86" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.636366 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.636649 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/56c21f31-0db8-4876-9198-ecf1453378eb-image-import-ca\") pod \"apiserver-76f77b778f-8cgg4\" (UID: \"56c21f31-0db8-4876-9198-ecf1453378eb\") " pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.636718 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42cf1d0f-3c54-41ad-a9a7-1b9bc1829c21-config\") pod \"machine-api-operator-5694c8668f-sdf86\" (UID: \"42cf1d0f-3c54-41ad-a9a7-1b9bc1829c21\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sdf86" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.636894 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e99acbdd-15f8-43ef-a7fa-70a8f4f8674c-encryption-config\") pod \"apiserver-7bbb656c7d-nqrm5\" (UID: \"e99acbdd-15f8-43ef-a7fa-70a8f4f8674c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.637350 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56c21f31-0db8-4876-9198-ecf1453378eb-serving-cert\") pod \"apiserver-76f77b778f-8cgg4\" (UID: \"56c21f31-0db8-4876-9198-ecf1453378eb\") " pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.637584 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.638084 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-5t9bm"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.638587 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f117b241-1e37-4603-bb50-aad0ee886758-serving-cert\") pod \"openshift-config-operator-7777fb866f-lbtxl\" (UID: \"f117b241-1e37-4603-bb50-aad0ee886758\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbtxl" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.638655 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2834d334-6df4-46d7-afc6-390cfdcfb22f-serving-cert\") pod 
\"controller-manager-879f6c89f-cj57h\" (UID: \"2834d334-6df4-46d7-afc6-390cfdcfb22f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cj57h" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.638914 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/42cf1d0f-3c54-41ad-a9a7-1b9bc1829c21-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-sdf86\" (UID: \"42cf1d0f-3c54-41ad-a9a7-1b9bc1829c21\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sdf86" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.639222 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/56c21f31-0db8-4876-9198-ecf1453378eb-encryption-config\") pod \"apiserver-76f77b778f-8cgg4\" (UID: \"56c21f31-0db8-4876-9198-ecf1453378eb\") " pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.639295 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.639643 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b400290b-0dae-4e47-a15f-f3ae97648175-serving-cert\") pod \"authentication-operator-69f744f599-9pppp\" (UID: \"b400290b-0dae-4e47-a15f-f3ae97648175\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9pppp" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.639982 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e99acbdd-15f8-43ef-a7fa-70a8f4f8674c-etcd-client\") pod \"apiserver-7bbb656c7d-nqrm5\" (UID: \"e99acbdd-15f8-43ef-a7fa-70a8f4f8674c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.640005 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.640072 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-dgvh6"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.640152 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd5d4606-2412-4538-8745-dbab7d52cde9-serving-cert\") pod \"route-controller-manager-6576b87f9c-kmjcv\" (UID: \"bd5d4606-2412-4538-8745-dbab7d52cde9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kmjcv" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.640365 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/56c21f31-0db8-4876-9198-ecf1453378eb-etcd-client\") pod \"apiserver-76f77b778f-8cgg4\" (UID: 
\"56c21f31-0db8-4876-9198-ecf1453378eb\") " pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.641273 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1b1ea998-03e2-480d-9f41-4b3bfd50360b-machine-approver-tls\") pod \"machine-approver-56656f9798-jqdxh\" (UID: \"1b1ea998-03e2-480d-9f41-4b3bfd50360b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jqdxh" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.641897 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-gj29c"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.644974 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c8a9040d-c9a7-48df-a786-0079713a7cdc-console-serving-cert\") pod \"console-f9d7485db-2cmnb\" (UID: \"c8a9040d-c9a7-48df-a786-0079713a7cdc\") " pod="openshift-console/console-f9d7485db-2cmnb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.645650 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sxg45"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.646929 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nqtvv"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.647826 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.648563 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-v2vm5"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.649476 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.649560 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-tgkf6"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.651144 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.651878 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-5s28q"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.653583 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-lgzmc"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.654497 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-lgzmc" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.655167 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-lgzmc"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.672171 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.682411 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.702406 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.721803 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.737232 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c-config-volume\") pod \"collect-profiles-29496300-mkldc\" (UID: \"4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496300-mkldc" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.737288 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ae6119e4-926e-4118-a675-e37898d995f6-signing-cabundle\") pod \"service-ca-9c57cc56f-7j88g\" (UID: \"ae6119e4-926e-4118-a675-e37898d995f6\") " pod="openshift-service-ca/service-ca-9c57cc56f-7j88g" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.737310 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/1fbf2594-31f8-4172-85ba-4a63a6d18fa6-stats-auth\") pod \"router-default-5444994796-jplg4\" (UID: \"1fbf2594-31f8-4172-85ba-4a63a6d18fa6\") " pod="openshift-ingress/router-default-5444994796-jplg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.737340 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9396757-c308-44b4-82a9-bd488f0841a9-config\") pod \"etcd-operator-b45778765-dgvh6\" (UID: \"e9396757-c308-44b4-82a9-bd488f0841a9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgvh6" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.737402 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-px6j7\" (UniqueName: \"kubernetes.io/projected/2b152375-2709-4538-b651-e8535098af13-kube-api-access-px6j7\") pod \"packageserver-d55dfcdfc-b6x6r\" (UID: \"2b152375-2709-4538-b651-e8535098af13\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6x6r" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.738964 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69fb7c91-edd2-4a41-9f64-9c19d1fabd2f-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-4rnbl\" (UID: \"69fb7c91-edd2-4a41-9f64-9c19d1fabd2f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4rnbl" Jan 30 13:06:15 crc 
kubenswrapper[5039]: I0130 13:06:15.739125 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dc6c0c56-d942-4a79-9f24-6e649e17c3f4-auth-proxy-config\") pod \"machine-config-operator-74547568cd-2crsw\" (UID: \"dc6c0c56-d942-4a79-9f24-6e649e17c3f4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2crsw" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.739146 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/501d1ad0-71ea-4bef-8c89-8a68f523e6ec-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gp9qj\" (UID: \"501d1ad0-71ea-4bef-8c89-8a68f523e6ec\") " pod="openshift-marketplace/marketplace-operator-79b997595-gp9qj" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.739549 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6js5x\" (UniqueName: \"kubernetes.io/projected/792f7bfa-c3b1-4e02-b2a1-d15abbc4b3d4-kube-api-access-6js5x\") pod \"machine-config-server-m4hks\" (UID: \"792f7bfa-c3b1-4e02-b2a1-d15abbc4b3d4\") " pod="openshift-machine-config-operator/machine-config-server-m4hks" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.739572 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fflsq\" (UniqueName: \"kubernetes.io/projected/a391a542-f6cf-4b97-b69b-aa27a4942896-kube-api-access-fflsq\") pod \"control-plane-machine-set-operator-78cbb6b69f-gxpwf\" (UID: \"a391a542-f6cf-4b97-b69b-aa27a4942896\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gxpwf" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.739595 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nddrs\" (UniqueName: \"kubernetes.io/projected/6e099008-0b69-456c-a088-80d32053290b-kube-api-access-nddrs\") pod \"openshift-controller-manager-operator-756b6f6bc6-nqtvv\" (UID: \"6e099008-0b69-456c-a088-80d32053290b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nqtvv" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.739620 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/501d1ad0-71ea-4bef-8c89-8a68f523e6ec-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gp9qj\" (UID: \"501d1ad0-71ea-4bef-8c89-8a68f523e6ec\") " pod="openshift-marketplace/marketplace-operator-79b997595-gp9qj" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.739656 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/18286802-e76b-4e5e-b68b-9ff34405b8ec-trusted-ca\") pod \"ingress-operator-5b745b69d9-kqgcq\" (UID: \"18286802-e76b-4e5e-b68b-9ff34405b8ec\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kqgcq" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.739676 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dc6c0c56-d942-4a79-9f24-6e649e17c3f4-auth-proxy-config\") pod \"machine-config-operator-74547568cd-2crsw\" (UID: \"dc6c0c56-d942-4a79-9f24-6e649e17c3f4\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2crsw" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.739681 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-596jc\" (UniqueName: \"kubernetes.io/projected/ffc75429-dba3-4b41-99d1-39c5b5334c0e-kube-api-access-596jc\") pod \"catalog-operator-68c6474976-klzdg\" (UID: \"ffc75429-dba3-4b41-99d1-39c5b5334c0e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klzdg" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.739706 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69fb7c91-edd2-4a41-9f64-9c19d1fabd2f-config\") pod \"kube-controller-manager-operator-78b949d7b-4rnbl\" (UID: \"69fb7c91-edd2-4a41-9f64-9c19d1fabd2f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4rnbl" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.739729 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dc6c0c56-d942-4a79-9f24-6e649e17c3f4-images\") pod \"machine-config-operator-74547568cd-2crsw\" (UID: \"dc6c0c56-d942-4a79-9f24-6e649e17c3f4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2crsw" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.739750 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c-secret-volume\") pod \"collect-profiles-29496300-mkldc\" (UID: \"4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496300-mkldc" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.739775 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xsmz\" (UniqueName: \"kubernetes.io/projected/e9396757-c308-44b4-82a9-bd488f0841a9-kube-api-access-9xsmz\") pod \"etcd-operator-b45778765-dgvh6\" (UID: \"e9396757-c308-44b4-82a9-bd488f0841a9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgvh6" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.739798 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1fbf2594-31f8-4172-85ba-4a63a6d18fa6-metrics-certs\") pod \"router-default-5444994796-jplg4\" (UID: \"1fbf2594-31f8-4172-85ba-4a63a6d18fa6\") " pod="openshift-ingress/router-default-5444994796-jplg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.739841 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clbrb\" (UniqueName: \"kubernetes.io/projected/dc6c0c56-d942-4a79-9f24-6e649e17c3f4-kube-api-access-clbrb\") pod \"machine-config-operator-74547568cd-2crsw\" (UID: \"dc6c0c56-d942-4a79-9f24-6e649e17c3f4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2crsw" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.739947 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/aa061666-64af-4cf4-aeb5-73faa25d1c22-proxy-tls\") pod \"machine-config-controller-84d6567774-82nqz\" (UID: \"aa061666-64af-4cf4-aeb5-73faa25d1c22\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-82nqz" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.739980 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8z6v\" (UniqueName: \"kubernetes.io/projected/aa061666-64af-4cf4-aeb5-73faa25d1c22-kube-api-access-q8z6v\") pod \"machine-config-controller-84d6567774-82nqz\" (UID: \"aa061666-64af-4cf4-aeb5-73faa25d1c22\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-82nqz" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.740044 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/18286802-e76b-4e5e-b68b-9ff34405b8ec-bound-sa-token\") pod \"ingress-operator-5b745b69d9-kqgcq\" (UID: \"18286802-e76b-4e5e-b68b-9ff34405b8ec\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kqgcq" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.740092 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8955599f-bac3-4f0d-a9d2-0758c098b508-metrics-tls\") pod \"dns-operator-744455d44c-rmmt4\" (UID: \"8955599f-bac3-4f0d-a9d2-0758c098b508\") " pod="openshift-dns-operator/dns-operator-744455d44c-rmmt4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.740111 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/18286802-e76b-4e5e-b68b-9ff34405b8ec-metrics-tls\") pod \"ingress-operator-5b745b69d9-kqgcq\" (UID: \"18286802-e76b-4e5e-b68b-9ff34405b8ec\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kqgcq" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.740167 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a391a542-f6cf-4b97-b69b-aa27a4942896-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-gxpwf\" (UID: \"a391a542-f6cf-4b97-b69b-aa27a4942896\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gxpwf" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.740187 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7j9k9\" (UniqueName: \"kubernetes.io/projected/8955599f-bac3-4f0d-a9d2-0758c098b508-kube-api-access-7j9k9\") pod \"dns-operator-744455d44c-rmmt4\" (UID: \"8955599f-bac3-4f0d-a9d2-0758c098b508\") " pod="openshift-dns-operator/dns-operator-744455d44c-rmmt4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.740224 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/df9477c3-e855-4878-bb03-ffecb6abdc2d-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-gj29c\" (UID: \"df9477c3-e855-4878-bb03-ffecb6abdc2d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-gj29c" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.740244 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b67c1f74-8845-4dbd-9e2b-df446569a88a-registration-dir\") pod \"csi-hostpathplugin-5t9bm\" (UID: \"b67c1f74-8845-4dbd-9e2b-df446569a88a\") " 
pod="hostpath-provisioner/csi-hostpathplugin-5t9bm" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.740260 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrfln\" (UniqueName: \"kubernetes.io/projected/b67c1f74-8845-4dbd-9e2b-df446569a88a-kube-api-access-rrfln\") pod \"csi-hostpathplugin-5t9bm\" (UID: \"b67c1f74-8845-4dbd-9e2b-df446569a88a\") " pod="hostpath-provisioner/csi-hostpathplugin-5t9bm" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.740282 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7bdbdc1f-b957-4eef-a61d-692ed8717de1-config\") pod \"service-ca-operator-777779d784-tj2zc\" (UID: \"7bdbdc1f-b957-4eef-a61d-692ed8717de1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tj2zc" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.740313 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kw2r\" (UniqueName: \"kubernetes.io/projected/502c4d4e-b64b-4245-b4f2-22937a1e54ae-kube-api-access-5kw2r\") pod \"package-server-manager-789f6589d5-xpdwb\" (UID: \"502c4d4e-b64b-4245-b4f2-22937a1e54ae\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xpdwb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.740331 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69fb7c91-edd2-4a41-9f64-9c19d1fabd2f-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-4rnbl\" (UID: \"69fb7c91-edd2-4a41-9f64-9c19d1fabd2f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4rnbl" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.740353 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2b152375-2709-4538-b651-e8535098af13-apiservice-cert\") pod \"packageserver-d55dfcdfc-b6x6r\" (UID: \"2b152375-2709-4538-b651-e8535098af13\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6x6r" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.740370 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/1fbf2594-31f8-4172-85ba-4a63a6d18fa6-default-certificate\") pod \"router-default-5444994796-jplg4\" (UID: \"1fbf2594-31f8-4172-85ba-4a63a6d18fa6\") " pod="openshift-ingress/router-default-5444994796-jplg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.740387 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf88f\" (UniqueName: \"kubernetes.io/projected/7bdbdc1f-b957-4eef-a61d-692ed8717de1-kube-api-access-cf88f\") pod \"service-ca-operator-777779d784-tj2zc\" (UID: \"7bdbdc1f-b957-4eef-a61d-692ed8717de1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tj2zc" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.740412 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxj9n\" (UniqueName: \"kubernetes.io/projected/920b1dd0-97f0-4bc2-a9ca-b518c314c29b-kube-api-access-hxj9n\") pod \"olm-operator-6b444d44fb-sxg45\" (UID: \"920b1dd0-97f0-4bc2-a9ca-b518c314c29b\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sxg45" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.740435 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sl8mr\" (UniqueName: \"kubernetes.io/projected/df9477c3-e855-4878-bb03-ffecb6abdc2d-kube-api-access-sl8mr\") pod \"multus-admission-controller-857f4d67dd-gj29c\" (UID: \"df9477c3-e855-4878-bb03-ffecb6abdc2d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-gj29c" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.740454 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/920b1dd0-97f0-4bc2-a9ca-b518c314c29b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-sxg45\" (UID: \"920b1dd0-97f0-4bc2-a9ca-b518c314c29b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sxg45" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.740620 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/501d1ad0-71ea-4bef-8c89-8a68f523e6ec-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gp9qj\" (UID: \"501d1ad0-71ea-4bef-8c89-8a68f523e6ec\") " pod="openshift-marketplace/marketplace-operator-79b997595-gp9qj" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.740680 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv5vh\" (UniqueName: \"kubernetes.io/projected/ae6119e4-926e-4118-a675-e37898d995f6-kube-api-access-fv5vh\") pod \"service-ca-9c57cc56f-7j88g\" (UID: \"ae6119e4-926e-4118-a675-e37898d995f6\") " pod="openshift-service-ca/service-ca-9c57cc56f-7j88g" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.740726 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/438eca87-c8a4-401b-8ea4-ff982404ea2d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-x76qf\" (UID: \"438eca87-c8a4-401b-8ea4-ff982404ea2d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x76qf" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.740796 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2858\" (UniqueName: \"kubernetes.io/projected/ded8dcf1-ff49-4b19-80b0-4702e95b94a3-kube-api-access-d2858\") pod \"ingress-canary-5s28q\" (UID: \"ded8dcf1-ff49-4b19-80b0-4702e95b94a3\") " pod="openshift-ingress-canary/ingress-canary-5s28q" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.740918 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/920b1dd0-97f0-4bc2-a9ca-b518c314c29b-srv-cert\") pod \"olm-operator-6b444d44fb-sxg45\" (UID: \"920b1dd0-97f0-4bc2-a9ca-b518c314c29b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sxg45" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.740925 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dc6c0c56-d942-4a79-9f24-6e649e17c3f4-images\") pod \"machine-config-operator-74547568cd-2crsw\" (UID: \"dc6c0c56-d942-4a79-9f24-6e649e17c3f4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2crsw" Jan 30 13:06:15 crc 
kubenswrapper[5039]: I0130 13:06:15.740954 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/792f7bfa-c3b1-4e02-b2a1-d15abbc4b3d4-certs\") pod \"machine-config-server-m4hks\" (UID: \"792f7bfa-c3b1-4e02-b2a1-d15abbc4b3d4\") " pod="openshift-machine-config-operator/machine-config-server-m4hks" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.741135 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2b152375-2709-4538-b651-e8535098af13-webhook-cert\") pod \"packageserver-d55dfcdfc-b6x6r\" (UID: \"2b152375-2709-4538-b651-e8535098af13\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6x6r" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.741206 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e099008-0b69-456c-a088-80d32053290b-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-nqtvv\" (UID: \"6e099008-0b69-456c-a088-80d32053290b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nqtvv" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.741292 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/438eca87-c8a4-401b-8ea4-ff982404ea2d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-x76qf\" (UID: \"438eca87-c8a4-401b-8ea4-ff982404ea2d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x76qf" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.741342 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ded8dcf1-ff49-4b19-80b0-4702e95b94a3-cert\") pod \"ingress-canary-5s28q\" (UID: \"ded8dcf1-ff49-4b19-80b0-4702e95b94a3\") " pod="openshift-ingress-canary/ingress-canary-5s28q" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.741379 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2655fb3-6427-447d-8b61-4d998e133f50-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-sghjb\" (UID: \"d2655fb3-6427-447d-8b61-4d998e133f50\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sghjb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.741436 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/18286802-e76b-4e5e-b68b-9ff34405b8ec-trusted-ca\") pod \"ingress-operator-5b745b69d9-kqgcq\" (UID: \"18286802-e76b-4e5e-b68b-9ff34405b8ec\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kqgcq" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.741511 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ffc75429-dba3-4b41-99d1-39c5b5334c0e-profile-collector-cert\") pod \"catalog-operator-68c6474976-klzdg\" (UID: \"ffc75429-dba3-4b41-99d1-39c5b5334c0e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klzdg" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.741561 5039 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1fbf2594-31f8-4172-85ba-4a63a6d18fa6-service-ca-bundle\") pod \"router-default-5444994796-jplg4\" (UID: \"1fbf2594-31f8-4172-85ba-4a63a6d18fa6\") " pod="openshift-ingress/router-default-5444994796-jplg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.741598 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67rxf\" (UniqueName: \"kubernetes.io/projected/a4edde13-c891-4a79-8c04-ad329198bdaa-kube-api-access-67rxf\") pod \"migrator-59844c95c7-tgkf6\" (UID: \"a4edde13-c891-4a79-8c04-ad329198bdaa\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tgkf6" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.741681 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kc64\" (UniqueName: \"kubernetes.io/projected/18286802-e76b-4e5e-b68b-9ff34405b8ec-kube-api-access-6kc64\") pod \"ingress-operator-5b745b69d9-kqgcq\" (UID: \"18286802-e76b-4e5e-b68b-9ff34405b8ec\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kqgcq" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.741768 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dc6c0c56-d942-4a79-9f24-6e649e17c3f4-proxy-tls\") pod \"machine-config-operator-74547568cd-2crsw\" (UID: \"dc6c0c56-d942-4a79-9f24-6e649e17c3f4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2crsw" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.741825 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/aa061666-64af-4cf4-aeb5-73faa25d1c22-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-82nqz\" (UID: \"aa061666-64af-4cf4-aeb5-73faa25d1c22\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-82nqz" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.741901 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/438eca87-c8a4-401b-8ea4-ff982404ea2d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-x76qf\" (UID: \"438eca87-c8a4-401b-8ea4-ff982404ea2d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x76qf" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.741951 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b67c1f74-8845-4dbd-9e2b-df446569a88a-socket-dir\") pod \"csi-hostpathplugin-5t9bm\" (UID: \"b67c1f74-8845-4dbd-9e2b-df446569a88a\") " pod="hostpath-provisioner/csi-hostpathplugin-5t9bm" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.741998 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b67c1f74-8845-4dbd-9e2b-df446569a88a-csi-data-dir\") pod \"csi-hostpathplugin-5t9bm\" (UID: \"b67c1f74-8845-4dbd-9e2b-df446569a88a\") " pod="hostpath-provisioner/csi-hostpathplugin-5t9bm" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.742069 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk5vd\" 
(UniqueName: \"kubernetes.io/projected/d2655fb3-6427-447d-8b61-4d998e133f50-kube-api-access-zk5vd\") pod \"kube-storage-version-migrator-operator-b67b599dd-sghjb\" (UID: \"d2655fb3-6427-447d-8b61-4d998e133f50\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sghjb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.742123 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e9396757-c308-44b4-82a9-bd488f0841a9-etcd-ca\") pod \"etcd-operator-b45778765-dgvh6\" (UID: \"e9396757-c308-44b4-82a9-bd488f0841a9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgvh6" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.742338 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/502c4d4e-b64b-4245-b4f2-22937a1e54ae-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-xpdwb\" (UID: \"502c4d4e-b64b-4245-b4f2-22937a1e54ae\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xpdwb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.742365 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ae6119e4-926e-4118-a675-e37898d995f6-signing-key\") pod \"service-ca-9c57cc56f-7j88g\" (UID: \"ae6119e4-926e-4118-a675-e37898d995f6\") " pod="openshift-service-ca/service-ca-9c57cc56f-7j88g" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.742392 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b97e6ebb-d4e8-4bbc-ac4e-98ba0128aa1d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-pvnrm\" (UID: \"b97e6ebb-d4e8-4bbc-ac4e-98ba0128aa1d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pvnrm" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.742419 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9396757-c308-44b4-82a9-bd488f0841a9-serving-cert\") pod \"etcd-operator-b45778765-dgvh6\" (UID: \"e9396757-c308-44b4-82a9-bd488f0841a9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgvh6" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.742444 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e9396757-c308-44b4-82a9-bd488f0841a9-etcd-client\") pod \"etcd-operator-b45778765-dgvh6\" (UID: \"e9396757-c308-44b4-82a9-bd488f0841a9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgvh6" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.742469 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ffc75429-dba3-4b41-99d1-39c5b5334c0e-srv-cert\") pod \"catalog-operator-68c6474976-klzdg\" (UID: \"ffc75429-dba3-4b41-99d1-39c5b5334c0e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klzdg" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.742519 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6e099008-0b69-456c-a088-80d32053290b-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-nqtvv\" (UID: \"6e099008-0b69-456c-a088-80d32053290b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nqtvv" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.742560 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/792f7bfa-c3b1-4e02-b2a1-d15abbc4b3d4-node-bootstrap-token\") pod \"machine-config-server-m4hks\" (UID: \"792f7bfa-c3b1-4e02-b2a1-d15abbc4b3d4\") " pod="openshift-machine-config-operator/machine-config-server-m4hks" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.742601 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvstf\" (UniqueName: \"kubernetes.io/projected/4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c-kube-api-access-pvstf\") pod \"collect-profiles-29496300-mkldc\" (UID: \"4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496300-mkldc" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.742619 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b97e6ebb-d4e8-4bbc-ac4e-98ba0128aa1d-config\") pod \"kube-apiserver-operator-766d6c64bb-pvnrm\" (UID: \"b97e6ebb-d4e8-4bbc-ac4e-98ba0128aa1d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pvnrm" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.742643 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7bdbdc1f-b957-4eef-a61d-692ed8717de1-serving-cert\") pod \"service-ca-operator-777779d784-tj2zc\" (UID: \"7bdbdc1f-b957-4eef-a61d-692ed8717de1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tj2zc" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.742667 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e9396757-c308-44b4-82a9-bd488f0841a9-etcd-service-ca\") pod \"etcd-operator-b45778765-dgvh6\" (UID: \"e9396757-c308-44b4-82a9-bd488f0841a9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgvh6" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.742718 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/2b152375-2709-4538-b651-e8535098af13-tmpfs\") pod \"packageserver-d55dfcdfc-b6x6r\" (UID: \"2b152375-2709-4538-b651-e8535098af13\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6x6r" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.742787 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b67c1f74-8845-4dbd-9e2b-df446569a88a-mountpoint-dir\") pod \"csi-hostpathplugin-5t9bm\" (UID: \"b67c1f74-8845-4dbd-9e2b-df446569a88a\") " pod="hostpath-provisioner/csi-hostpathplugin-5t9bm" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.742811 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b67c1f74-8845-4dbd-9e2b-df446569a88a-plugins-dir\") pod 
\"csi-hostpathplugin-5t9bm\" (UID: \"b67c1f74-8845-4dbd-9e2b-df446569a88a\") " pod="hostpath-provisioner/csi-hostpathplugin-5t9bm" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.742858 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b97e6ebb-d4e8-4bbc-ac4e-98ba0128aa1d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-pvnrm\" (UID: \"b97e6ebb-d4e8-4bbc-ac4e-98ba0128aa1d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pvnrm" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.742938 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfw8d\" (UniqueName: \"kubernetes.io/projected/1fbf2594-31f8-4172-85ba-4a63a6d18fa6-kube-api-access-rfw8d\") pod \"router-default-5444994796-jplg4\" (UID: \"1fbf2594-31f8-4172-85ba-4a63a6d18fa6\") " pod="openshift-ingress/router-default-5444994796-jplg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.742972 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2655fb3-6427-447d-8b61-4d998e133f50-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-sghjb\" (UID: \"d2655fb3-6427-447d-8b61-4d998e133f50\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sghjb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.742991 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzssd\" (UniqueName: \"kubernetes.io/projected/501d1ad0-71ea-4bef-8c89-8a68f523e6ec-kube-api-access-mzssd\") pod \"marketplace-operator-79b997595-gp9qj\" (UID: \"501d1ad0-71ea-4bef-8c89-8a68f523e6ec\") " pod="openshift-marketplace/marketplace-operator-79b997595-gp9qj" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.743153 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/2b152375-2709-4538-b651-e8535098af13-tmpfs\") pod \"packageserver-d55dfcdfc-b6x6r\" (UID: \"2b152375-2709-4538-b651-e8535098af13\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6x6r" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.743699 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.744125 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/18286802-e76b-4e5e-b68b-9ff34405b8ec-metrics-tls\") pod \"ingress-operator-5b745b69d9-kqgcq\" (UID: \"18286802-e76b-4e5e-b68b-9ff34405b8ec\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kqgcq" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.744584 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8955599f-bac3-4f0d-a9d2-0758c098b508-metrics-tls\") pod \"dns-operator-744455d44c-rmmt4\" (UID: \"8955599f-bac3-4f0d-a9d2-0758c098b508\") " pod="openshift-dns-operator/dns-operator-744455d44c-rmmt4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.746425 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/502c4d4e-b64b-4245-b4f2-22937a1e54ae-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-xpdwb\" (UID: \"502c4d4e-b64b-4245-b4f2-22937a1e54ae\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xpdwb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.746585 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/501d1ad0-71ea-4bef-8c89-8a68f523e6ec-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gp9qj\" (UID: \"501d1ad0-71ea-4bef-8c89-8a68f523e6ec\") " pod="openshift-marketplace/marketplace-operator-79b997595-gp9qj" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.756310 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dc6c0c56-d942-4a79-9f24-6e649e17c3f4-proxy-tls\") pod \"machine-config-operator-74547568cd-2crsw\" (UID: \"dc6c0c56-d942-4a79-9f24-6e649e17c3f4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2crsw" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.762175 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.781758 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.802260 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.806644 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ae6119e4-926e-4118-a675-e37898d995f6-signing-key\") pod \"service-ca-9c57cc56f-7j88g\" (UID: \"ae6119e4-926e-4118-a675-e37898d995f6\") " pod="openshift-service-ca/service-ca-9c57cc56f-7j88g" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.822256 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.828517 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ae6119e4-926e-4118-a675-e37898d995f6-signing-cabundle\") pod \"service-ca-9c57cc56f-7j88g\" (UID: \"ae6119e4-926e-4118-a675-e37898d995f6\") " pod="openshift-service-ca/service-ca-9c57cc56f-7j88g" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.841150 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.844199 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2655fb3-6427-447d-8b61-4d998e133f50-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-sghjb\" (UID: \"d2655fb3-6427-447d-8b61-4d998e133f50\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sghjb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.844262 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ffc75429-dba3-4b41-99d1-39c5b5334c0e-profile-collector-cert\") pod 
\"catalog-operator-68c6474976-klzdg\" (UID: \"ffc75429-dba3-4b41-99d1-39c5b5334c0e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klzdg" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.844296 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1fbf2594-31f8-4172-85ba-4a63a6d18fa6-service-ca-bundle\") pod \"router-default-5444994796-jplg4\" (UID: \"1fbf2594-31f8-4172-85ba-4a63a6d18fa6\") " pod="openshift-ingress/router-default-5444994796-jplg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.844322 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67rxf\" (UniqueName: \"kubernetes.io/projected/a4edde13-c891-4a79-8c04-ad329198bdaa-kube-api-access-67rxf\") pod \"migrator-59844c95c7-tgkf6\" (UID: \"a4edde13-c891-4a79-8c04-ad329198bdaa\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tgkf6" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.844373 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/aa061666-64af-4cf4-aeb5-73faa25d1c22-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-82nqz\" (UID: \"aa061666-64af-4cf4-aeb5-73faa25d1c22\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-82nqz" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.844426 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b67c1f74-8845-4dbd-9e2b-df446569a88a-socket-dir\") pod \"csi-hostpathplugin-5t9bm\" (UID: \"b67c1f74-8845-4dbd-9e2b-df446569a88a\") " pod="hostpath-provisioner/csi-hostpathplugin-5t9bm" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.844452 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b67c1f74-8845-4dbd-9e2b-df446569a88a-csi-data-dir\") pod \"csi-hostpathplugin-5t9bm\" (UID: \"b67c1f74-8845-4dbd-9e2b-df446569a88a\") " pod="hostpath-provisioner/csi-hostpathplugin-5t9bm" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.844477 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zk5vd\" (UniqueName: \"kubernetes.io/projected/d2655fb3-6427-447d-8b61-4d998e133f50-kube-api-access-zk5vd\") pod \"kube-storage-version-migrator-operator-b67b599dd-sghjb\" (UID: \"d2655fb3-6427-447d-8b61-4d998e133f50\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sghjb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.844503 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e9396757-c308-44b4-82a9-bd488f0841a9-etcd-ca\") pod \"etcd-operator-b45778765-dgvh6\" (UID: \"e9396757-c308-44b4-82a9-bd488f0841a9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgvh6" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.844535 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9396757-c308-44b4-82a9-bd488f0841a9-serving-cert\") pod \"etcd-operator-b45778765-dgvh6\" (UID: \"e9396757-c308-44b4-82a9-bd488f0841a9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgvh6" Jan 30 13:06:15 crc kubenswrapper[5039]: 
I0130 13:06:15.844559 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e9396757-c308-44b4-82a9-bd488f0841a9-etcd-client\") pod \"etcd-operator-b45778765-dgvh6\" (UID: \"e9396757-c308-44b4-82a9-bd488f0841a9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgvh6" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.844579 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ffc75429-dba3-4b41-99d1-39c5b5334c0e-srv-cert\") pod \"catalog-operator-68c6474976-klzdg\" (UID: \"ffc75429-dba3-4b41-99d1-39c5b5334c0e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klzdg" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.844605 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e099008-0b69-456c-a088-80d32053290b-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-nqtvv\" (UID: \"6e099008-0b69-456c-a088-80d32053290b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nqtvv" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.844629 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/792f7bfa-c3b1-4e02-b2a1-d15abbc4b3d4-node-bootstrap-token\") pod \"machine-config-server-m4hks\" (UID: \"792f7bfa-c3b1-4e02-b2a1-d15abbc4b3d4\") " pod="openshift-machine-config-operator/machine-config-server-m4hks" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.844661 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b67c1f74-8845-4dbd-9e2b-df446569a88a-csi-data-dir\") pod \"csi-hostpathplugin-5t9bm\" (UID: \"b67c1f74-8845-4dbd-9e2b-df446569a88a\") " pod="hostpath-provisioner/csi-hostpathplugin-5t9bm" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.844671 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7bdbdc1f-b957-4eef-a61d-692ed8717de1-serving-cert\") pod \"service-ca-operator-777779d784-tj2zc\" (UID: \"7bdbdc1f-b957-4eef-a61d-692ed8717de1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tj2zc" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.844726 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e9396757-c308-44b4-82a9-bd488f0841a9-etcd-service-ca\") pod \"etcd-operator-b45778765-dgvh6\" (UID: \"e9396757-c308-44b4-82a9-bd488f0841a9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgvh6" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.844753 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b67c1f74-8845-4dbd-9e2b-df446569a88a-socket-dir\") pod \"csi-hostpathplugin-5t9bm\" (UID: \"b67c1f74-8845-4dbd-9e2b-df446569a88a\") " pod="hostpath-provisioner/csi-hostpathplugin-5t9bm" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.844785 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b67c1f74-8845-4dbd-9e2b-df446569a88a-mountpoint-dir\") pod \"csi-hostpathplugin-5t9bm\" (UID: \"b67c1f74-8845-4dbd-9e2b-df446569a88a\") " 
pod="hostpath-provisioner/csi-hostpathplugin-5t9bm" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.844758 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b67c1f74-8845-4dbd-9e2b-df446569a88a-mountpoint-dir\") pod \"csi-hostpathplugin-5t9bm\" (UID: \"b67c1f74-8845-4dbd-9e2b-df446569a88a\") " pod="hostpath-provisioner/csi-hostpathplugin-5t9bm" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.844832 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b67c1f74-8845-4dbd-9e2b-df446569a88a-plugins-dir\") pod \"csi-hostpathplugin-5t9bm\" (UID: \"b67c1f74-8845-4dbd-9e2b-df446569a88a\") " pod="hostpath-provisioner/csi-hostpathplugin-5t9bm" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.844904 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfw8d\" (UniqueName: \"kubernetes.io/projected/1fbf2594-31f8-4172-85ba-4a63a6d18fa6-kube-api-access-rfw8d\") pod \"router-default-5444994796-jplg4\" (UID: \"1fbf2594-31f8-4172-85ba-4a63a6d18fa6\") " pod="openshift-ingress/router-default-5444994796-jplg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.844943 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b67c1f74-8845-4dbd-9e2b-df446569a88a-plugins-dir\") pod \"csi-hostpathplugin-5t9bm\" (UID: \"b67c1f74-8845-4dbd-9e2b-df446569a88a\") " pod="hostpath-provisioner/csi-hostpathplugin-5t9bm" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.844935 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2655fb3-6427-447d-8b61-4d998e133f50-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-sghjb\" (UID: \"d2655fb3-6427-447d-8b61-4d998e133f50\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sghjb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.845040 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/1fbf2594-31f8-4172-85ba-4a63a6d18fa6-stats-auth\") pod \"router-default-5444994796-jplg4\" (UID: \"1fbf2594-31f8-4172-85ba-4a63a6d18fa6\") " pod="openshift-ingress/router-default-5444994796-jplg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.845077 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9396757-c308-44b4-82a9-bd488f0841a9-config\") pod \"etcd-operator-b45778765-dgvh6\" (UID: \"e9396757-c308-44b4-82a9-bd488f0841a9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgvh6" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.845115 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69fb7c91-edd2-4a41-9f64-9c19d1fabd2f-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-4rnbl\" (UID: \"69fb7c91-edd2-4a41-9f64-9c19d1fabd2f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4rnbl" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.845166 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6js5x\" (UniqueName: 
\"kubernetes.io/projected/792f7bfa-c3b1-4e02-b2a1-d15abbc4b3d4-kube-api-access-6js5x\") pod \"machine-config-server-m4hks\" (UID: \"792f7bfa-c3b1-4e02-b2a1-d15abbc4b3d4\") " pod="openshift-machine-config-operator/machine-config-server-m4hks" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.845165 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/aa061666-64af-4cf4-aeb5-73faa25d1c22-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-82nqz\" (UID: \"aa061666-64af-4cf4-aeb5-73faa25d1c22\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-82nqz" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.845189 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fflsq\" (UniqueName: \"kubernetes.io/projected/a391a542-f6cf-4b97-b69b-aa27a4942896-kube-api-access-fflsq\") pod \"control-plane-machine-set-operator-78cbb6b69f-gxpwf\" (UID: \"a391a542-f6cf-4b97-b69b-aa27a4942896\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gxpwf" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.845214 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nddrs\" (UniqueName: \"kubernetes.io/projected/6e099008-0b69-456c-a088-80d32053290b-kube-api-access-nddrs\") pod \"openshift-controller-manager-operator-756b6f6bc6-nqtvv\" (UID: \"6e099008-0b69-456c-a088-80d32053290b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nqtvv" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.845260 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-596jc\" (UniqueName: \"kubernetes.io/projected/ffc75429-dba3-4b41-99d1-39c5b5334c0e-kube-api-access-596jc\") pod \"catalog-operator-68c6474976-klzdg\" (UID: \"ffc75429-dba3-4b41-99d1-39c5b5334c0e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klzdg" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.845281 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69fb7c91-edd2-4a41-9f64-9c19d1fabd2f-config\") pod \"kube-controller-manager-operator-78b949d7b-4rnbl\" (UID: \"69fb7c91-edd2-4a41-9f64-9c19d1fabd2f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4rnbl" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.845305 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xsmz\" (UniqueName: \"kubernetes.io/projected/e9396757-c308-44b4-82a9-bd488f0841a9-kube-api-access-9xsmz\") pod \"etcd-operator-b45778765-dgvh6\" (UID: \"e9396757-c308-44b4-82a9-bd488f0841a9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgvh6" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.845324 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1fbf2594-31f8-4172-85ba-4a63a6d18fa6-metrics-certs\") pod \"router-default-5444994796-jplg4\" (UID: \"1fbf2594-31f8-4172-85ba-4a63a6d18fa6\") " pod="openshift-ingress/router-default-5444994796-jplg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.845358 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/aa061666-64af-4cf4-aeb5-73faa25d1c22-proxy-tls\") pod \"machine-config-controller-84d6567774-82nqz\" (UID: \"aa061666-64af-4cf4-aeb5-73faa25d1c22\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-82nqz" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.845383 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8z6v\" (UniqueName: \"kubernetes.io/projected/aa061666-64af-4cf4-aeb5-73faa25d1c22-kube-api-access-q8z6v\") pod \"machine-config-controller-84d6567774-82nqz\" (UID: \"aa061666-64af-4cf4-aeb5-73faa25d1c22\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-82nqz" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.845425 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a391a542-f6cf-4b97-b69b-aa27a4942896-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-gxpwf\" (UID: \"a391a542-f6cf-4b97-b69b-aa27a4942896\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gxpwf" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.845468 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/df9477c3-e855-4878-bb03-ffecb6abdc2d-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-gj29c\" (UID: \"df9477c3-e855-4878-bb03-ffecb6abdc2d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-gj29c" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.845495 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b67c1f74-8845-4dbd-9e2b-df446569a88a-registration-dir\") pod \"csi-hostpathplugin-5t9bm\" (UID: \"b67c1f74-8845-4dbd-9e2b-df446569a88a\") " pod="hostpath-provisioner/csi-hostpathplugin-5t9bm" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.845518 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrfln\" (UniqueName: \"kubernetes.io/projected/b67c1f74-8845-4dbd-9e2b-df446569a88a-kube-api-access-rrfln\") pod \"csi-hostpathplugin-5t9bm\" (UID: \"b67c1f74-8845-4dbd-9e2b-df446569a88a\") " pod="hostpath-provisioner/csi-hostpathplugin-5t9bm" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.845540 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7bdbdc1f-b957-4eef-a61d-692ed8717de1-config\") pod \"service-ca-operator-777779d784-tj2zc\" (UID: \"7bdbdc1f-b957-4eef-a61d-692ed8717de1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tj2zc" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.845566 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69fb7c91-edd2-4a41-9f64-9c19d1fabd2f-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-4rnbl\" (UID: \"69fb7c91-edd2-4a41-9f64-9c19d1fabd2f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4rnbl" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.845610 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/1fbf2594-31f8-4172-85ba-4a63a6d18fa6-default-certificate\") pod \"router-default-5444994796-jplg4\" (UID: \"1fbf2594-31f8-4172-85ba-4a63a6d18fa6\") " pod="openshift-ingress/router-default-5444994796-jplg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.845637 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cf88f\" (UniqueName: \"kubernetes.io/projected/7bdbdc1f-b957-4eef-a61d-692ed8717de1-kube-api-access-cf88f\") pod \"service-ca-operator-777779d784-tj2zc\" (UID: \"7bdbdc1f-b957-4eef-a61d-692ed8717de1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tj2zc" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.845647 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b67c1f74-8845-4dbd-9e2b-df446569a88a-registration-dir\") pod \"csi-hostpathplugin-5t9bm\" (UID: \"b67c1f74-8845-4dbd-9e2b-df446569a88a\") " pod="hostpath-provisioner/csi-hostpathplugin-5t9bm" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.845676 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxj9n\" (UniqueName: \"kubernetes.io/projected/920b1dd0-97f0-4bc2-a9ca-b518c314c29b-kube-api-access-hxj9n\") pod \"olm-operator-6b444d44fb-sxg45\" (UID: \"920b1dd0-97f0-4bc2-a9ca-b518c314c29b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sxg45" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.845716 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sl8mr\" (UniqueName: \"kubernetes.io/projected/df9477c3-e855-4878-bb03-ffecb6abdc2d-kube-api-access-sl8mr\") pod \"multus-admission-controller-857f4d67dd-gj29c\" (UID: \"df9477c3-e855-4878-bb03-ffecb6abdc2d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-gj29c" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.845746 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/920b1dd0-97f0-4bc2-a9ca-b518c314c29b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-sxg45\" (UID: \"920b1dd0-97f0-4bc2-a9ca-b518c314c29b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sxg45" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.845790 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2858\" (UniqueName: \"kubernetes.io/projected/ded8dcf1-ff49-4b19-80b0-4702e95b94a3-kube-api-access-d2858\") pod \"ingress-canary-5s28q\" (UID: \"ded8dcf1-ff49-4b19-80b0-4702e95b94a3\") " pod="openshift-ingress-canary/ingress-canary-5s28q" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.845813 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/920b1dd0-97f0-4bc2-a9ca-b518c314c29b-srv-cert\") pod \"olm-operator-6b444d44fb-sxg45\" (UID: \"920b1dd0-97f0-4bc2-a9ca-b518c314c29b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sxg45" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.845834 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/792f7bfa-c3b1-4e02-b2a1-d15abbc4b3d4-certs\") pod \"machine-config-server-m4hks\" (UID: \"792f7bfa-c3b1-4e02-b2a1-d15abbc4b3d4\") " 
pod="openshift-machine-config-operator/machine-config-server-m4hks" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.845864 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e099008-0b69-456c-a088-80d32053290b-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-nqtvv\" (UID: \"6e099008-0b69-456c-a088-80d32053290b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nqtvv" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.845906 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ded8dcf1-ff49-4b19-80b0-4702e95b94a3-cert\") pod \"ingress-canary-5s28q\" (UID: \"ded8dcf1-ff49-4b19-80b0-4702e95b94a3\") " pod="openshift-ingress-canary/ingress-canary-5s28q" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.861676 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.873880 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2b152375-2709-4538-b651-e8535098af13-apiservice-cert\") pod \"packageserver-d55dfcdfc-b6x6r\" (UID: \"2b152375-2709-4538-b651-e8535098af13\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6x6r" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.873929 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2b152375-2709-4538-b651-e8535098af13-webhook-cert\") pod \"packageserver-d55dfcdfc-b6x6r\" (UID: \"2b152375-2709-4538-b651-e8535098af13\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6x6r" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.881292 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.888780 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ffc75429-dba3-4b41-99d1-39c5b5334c0e-profile-collector-cert\") pod \"catalog-operator-68c6474976-klzdg\" (UID: \"ffc75429-dba3-4b41-99d1-39c5b5334c0e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klzdg" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.889647 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/920b1dd0-97f0-4bc2-a9ca-b518c314c29b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-sxg45\" (UID: \"920b1dd0-97f0-4bc2-a9ca-b518c314c29b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sxg45" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.892195 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c-secret-volume\") pod \"collect-profiles-29496300-mkldc\" (UID: \"4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496300-mkldc" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.902154 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" 
Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.921480 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.929288 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c-config-volume\") pod \"collect-profiles-29496300-mkldc\" (UID: \"4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496300-mkldc" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.942183 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.962419 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.981102 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.988271 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/438eca87-c8a4-401b-8ea4-ff982404ea2d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-x76qf\" (UID: \"438eca87-c8a4-401b-8ea4-ff982404ea2d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x76qf" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.001903 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.002752 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/438eca87-c8a4-401b-8ea4-ff982404ea2d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-x76qf\" (UID: \"438eca87-c8a4-401b-8ea4-ff982404ea2d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x76qf" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.014792 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.022685 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.041697 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.061911 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.066680 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b97e6ebb-d4e8-4bbc-ac4e-98ba0128aa1d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-pvnrm\" (UID: \"b97e6ebb-d4e8-4bbc-ac4e-98ba0128aa1d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pvnrm" Jan 30 13:06:16 crc 
kubenswrapper[5039]: I0130 13:06:16.081704 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.084502 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b97e6ebb-d4e8-4bbc-ac4e-98ba0128aa1d-config\") pod \"kube-apiserver-operator-766d6c64bb-pvnrm\" (UID: \"b97e6ebb-d4e8-4bbc-ac4e-98ba0128aa1d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pvnrm" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.121745 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.141587 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.161607 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.170776 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/1fbf2594-31f8-4172-85ba-4a63a6d18fa6-default-certificate\") pod \"router-default-5444994796-jplg4\" (UID: \"1fbf2594-31f8-4172-85ba-4a63a6d18fa6\") " pod="openshift-ingress/router-default-5444994796-jplg4" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.181670 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.186584 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1fbf2594-31f8-4172-85ba-4a63a6d18fa6-service-ca-bundle\") pod \"router-default-5444994796-jplg4\" (UID: \"1fbf2594-31f8-4172-85ba-4a63a6d18fa6\") " pod="openshift-ingress/router-default-5444994796-jplg4" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.201660 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.220825 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.230548 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/1fbf2594-31f8-4172-85ba-4a63a6d18fa6-stats-auth\") pod \"router-default-5444994796-jplg4\" (UID: \"1fbf2594-31f8-4172-85ba-4a63a6d18fa6\") " pod="openshift-ingress/router-default-5444994796-jplg4" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.242219 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.251770 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1fbf2594-31f8-4172-85ba-4a63a6d18fa6-metrics-certs\") pod \"router-default-5444994796-jplg4\" (UID: \"1fbf2594-31f8-4172-85ba-4a63a6d18fa6\") " pod="openshift-ingress/router-default-5444994796-jplg4" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.262784 5039 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.282049 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.289811 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2655fb3-6427-447d-8b61-4d998e133f50-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-sghjb\" (UID: \"d2655fb3-6427-447d-8b61-4d998e133f50\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sghjb" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.302903 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.323105 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.343275 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.345950 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2655fb3-6427-447d-8b61-4d998e133f50-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-sghjb\" (UID: \"d2655fb3-6427-447d-8b61-4d998e133f50\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sghjb" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.361783 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.371215 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a391a542-f6cf-4b97-b69b-aa27a4942896-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-gxpwf\" (UID: \"a391a542-f6cf-4b97-b69b-aa27a4942896\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gxpwf" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.382548 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.401715 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.422183 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.441966 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.462948 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.470164 5039 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7bdbdc1f-b957-4eef-a61d-692ed8717de1-serving-cert\") pod \"service-ca-operator-777779d784-tj2zc\" (UID: \"7bdbdc1f-b957-4eef-a61d-692ed8717de1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tj2zc" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.482778 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.488808 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7bdbdc1f-b957-4eef-a61d-692ed8717de1-config\") pod \"service-ca-operator-777779d784-tj2zc\" (UID: \"7bdbdc1f-b957-4eef-a61d-692ed8717de1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tj2zc" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.502365 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.511598 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/aa061666-64af-4cf4-aeb5-73faa25d1c22-proxy-tls\") pod \"machine-config-controller-84d6567774-82nqz\" (UID: \"aa061666-64af-4cf4-aeb5-73faa25d1c22\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-82nqz" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.522077 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.542822 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.553222 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/920b1dd0-97f0-4bc2-a9ca-b518c314c29b-srv-cert\") pod \"olm-operator-6b444d44fb-sxg45\" (UID: \"920b1dd0-97f0-4bc2-a9ca-b518c314c29b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sxg45" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.560542 5039 request.go:700] Waited for 1.007620195s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-admission-controller-secret&limit=500&resourceVersion=0 Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.562585 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.570347 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/df9477c3-e855-4878-bb03-ffecb6abdc2d-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-gj29c\" (UID: \"df9477c3-e855-4878-bb03-ffecb6abdc2d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-gj29c" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.582925 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.601061 5039 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.621989 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.642695 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.663346 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.682925 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.702430 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.722408 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.730404 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ffc75429-dba3-4b41-99d1-39c5b5334c0e-srv-cert\") pod \"catalog-operator-68c6474976-klzdg\" (UID: \"ffc75429-dba3-4b41-99d1-39c5b5334c0e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klzdg" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.742312 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.747190 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69fb7c91-edd2-4a41-9f64-9c19d1fabd2f-config\") pod \"kube-controller-manager-operator-78b949d7b-4rnbl\" (UID: \"69fb7c91-edd2-4a41-9f64-9c19d1fabd2f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4rnbl" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.761936 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.781721 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.801231 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.813163 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69fb7c91-edd2-4a41-9f64-9c19d1fabd2f-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-4rnbl\" (UID: \"69fb7c91-edd2-4a41-9f64-9c19d1fabd2f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4rnbl" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.822325 5039 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.841916 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 30 13:06:16 crc kubenswrapper[5039]: E0130 13:06:16.845342 5039 secret.go:188] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition Jan 30 13:06:16 crc kubenswrapper[5039]: E0130 13:06:16.845385 5039 secret.go:188] Couldn't get secret openshift-etcd-operator/etcd-client: failed to sync secret cache: timed out waiting for the condition Jan 30 13:06:16 crc kubenswrapper[5039]: E0130 13:06:16.845412 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/792f7bfa-c3b1-4e02-b2a1-d15abbc4b3d4-node-bootstrap-token podName:792f7bfa-c3b1-4e02-b2a1-d15abbc4b3d4 nodeName:}" failed. No retries permitted until 2026-01-30 13:06:17.345389616 +0000 UTC m=+142.006070863 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/792f7bfa-c3b1-4e02-b2a1-d15abbc4b3d4-node-bootstrap-token") pod "machine-config-server-m4hks" (UID: "792f7bfa-c3b1-4e02-b2a1-d15abbc4b3d4") : failed to sync secret cache: timed out waiting for the condition Jan 30 13:06:16 crc kubenswrapper[5039]: E0130 13:06:16.845442 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9396757-c308-44b4-82a9-bd488f0841a9-etcd-client podName:e9396757-c308-44b4-82a9-bd488f0841a9 nodeName:}" failed. No retries permitted until 2026-01-30 13:06:17.345422966 +0000 UTC m=+142.006104213 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/e9396757-c308-44b4-82a9-bd488f0841a9-etcd-client") pod "etcd-operator-b45778765-dgvh6" (UID: "e9396757-c308-44b4-82a9-bd488f0841a9") : failed to sync secret cache: timed out waiting for the condition Jan 30 13:06:16 crc kubenswrapper[5039]: E0130 13:06:16.845452 5039 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jan 30 13:06:16 crc kubenswrapper[5039]: E0130 13:06:16.845478 5039 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: failed to sync configmap cache: timed out waiting for the condition Jan 30 13:06:16 crc kubenswrapper[5039]: E0130 13:06:16.845491 5039 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jan 30 13:06:16 crc kubenswrapper[5039]: E0130 13:06:16.845514 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9396757-c308-44b4-82a9-bd488f0841a9-etcd-ca podName:e9396757-c308-44b4-82a9-bd488f0841a9 nodeName:}" failed. No retries permitted until 2026-01-30 13:06:17.345495528 +0000 UTC m=+142.006176765 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/e9396757-c308-44b4-82a9-bd488f0841a9-etcd-ca") pod "etcd-operator-b45778765-dgvh6" (UID: "e9396757-c308-44b4-82a9-bd488f0841a9") : failed to sync configmap cache: timed out waiting for the condition Jan 30 13:06:16 crc kubenswrapper[5039]: E0130 13:06:16.845527 5039 configmap.go:193] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: failed to sync configmap cache: timed out waiting for the condition Jan 30 13:06:16 crc kubenswrapper[5039]: E0130 13:06:16.845538 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9396757-c308-44b4-82a9-bd488f0841a9-etcd-service-ca podName:e9396757-c308-44b4-82a9-bd488f0841a9 nodeName:}" failed. No retries permitted until 2026-01-30 13:06:17.345525659 +0000 UTC m=+142.006206906 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/e9396757-c308-44b4-82a9-bd488f0841a9-etcd-service-ca") pod "etcd-operator-b45778765-dgvh6" (UID: "e9396757-c308-44b4-82a9-bd488f0841a9") : failed to sync configmap cache: timed out waiting for the condition Jan 30 13:06:16 crc kubenswrapper[5039]: E0130 13:06:16.845556 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9396757-c308-44b4-82a9-bd488f0841a9-config podName:e9396757-c308-44b4-82a9-bd488f0841a9 nodeName:}" failed. No retries permitted until 2026-01-30 13:06:17.345547019 +0000 UTC m=+142.006228256 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9396757-c308-44b4-82a9-bd488f0841a9-config") pod "etcd-operator-b45778765-dgvh6" (UID: "e9396757-c308-44b4-82a9-bd488f0841a9") : failed to sync configmap cache: timed out waiting for the condition Jan 30 13:06:16 crc kubenswrapper[5039]: E0130 13:06:16.845572 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6e099008-0b69-456c-a088-80d32053290b-config podName:6e099008-0b69-456c-a088-80d32053290b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:17.34556455 +0000 UTC m=+142.006245797 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6e099008-0b69-456c-a088-80d32053290b-config") pod "openshift-controller-manager-operator-756b6f6bc6-nqtvv" (UID: "6e099008-0b69-456c-a088-80d32053290b") : failed to sync configmap cache: timed out waiting for the condition Jan 30 13:06:16 crc kubenswrapper[5039]: E0130 13:06:16.846103 5039 secret.go:188] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition Jan 30 13:06:16 crc kubenswrapper[5039]: E0130 13:06:16.846162 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/792f7bfa-c3b1-4e02-b2a1-d15abbc4b3d4-certs podName:792f7bfa-c3b1-4e02-b2a1-d15abbc4b3d4 nodeName:}" failed. No retries permitted until 2026-01-30 13:06:17.346149004 +0000 UTC m=+142.006830241 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/792f7bfa-c3b1-4e02-b2a1-d15abbc4b3d4-certs") pod "machine-config-server-m4hks" (UID: "792f7bfa-c3b1-4e02-b2a1-d15abbc4b3d4") : failed to sync secret cache: timed out waiting for the condition Jan 30 13:06:16 crc kubenswrapper[5039]: E0130 13:06:16.846195 5039 secret.go:188] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 30 13:06:16 crc kubenswrapper[5039]: E0130 13:06:16.846228 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ded8dcf1-ff49-4b19-80b0-4702e95b94a3-cert podName:ded8dcf1-ff49-4b19-80b0-4702e95b94a3 nodeName:}" failed. No retries permitted until 2026-01-30 13:06:17.346217846 +0000 UTC m=+142.006899083 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ded8dcf1-ff49-4b19-80b0-4702e95b94a3-cert") pod "ingress-canary-5s28q" (UID: "ded8dcf1-ff49-4b19-80b0-4702e95b94a3") : failed to sync secret cache: timed out waiting for the condition Jan 30 13:06:16 crc kubenswrapper[5039]: E0130 13:06:16.845355 5039 secret.go:188] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 30 13:06:16 crc kubenswrapper[5039]: E0130 13:06:16.846571 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9396757-c308-44b4-82a9-bd488f0841a9-serving-cert podName:e9396757-c308-44b4-82a9-bd488f0841a9 nodeName:}" failed. No retries permitted until 2026-01-30 13:06:17.346551084 +0000 UTC m=+142.007232321 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e9396757-c308-44b4-82a9-bd488f0841a9-serving-cert") pod "etcd-operator-b45778765-dgvh6" (UID: "e9396757-c308-44b4-82a9-bd488f0841a9") : failed to sync secret cache: timed out waiting for the condition Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.854707 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e099008-0b69-456c-a088-80d32053290b-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-nqtvv\" (UID: \"6e099008-0b69-456c-a088-80d32053290b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nqtvv" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.861154 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.880929 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.901739 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.922063 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.940660 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.962143 5039 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.982470 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.001997 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.022725 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.042814 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.062621 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.083360 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.102867 5039 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.122202 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.142919 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.162288 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.182977 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.202064 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.222475 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.242982 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.262442 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.298636 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjqgf\" (UniqueName: \"kubernetes.io/projected/c8a9040d-c9a7-48df-a786-0079713a7cdc-kube-api-access-mjqgf\") pod \"console-f9d7485db-2cmnb\" (UID: \"c8a9040d-c9a7-48df-a786-0079713a7cdc\") " pod="openshift-console/console-f9d7485db-2cmnb" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.331138 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zqvb\" (UniqueName: \"kubernetes.io/projected/0ace130b-bc4e-4654-8e0b-53722f8df757-kube-api-access-6zqvb\") pod 
\"console-operator-58897d9998-jt5jk\" (UID: \"0ace130b-bc4e-4654-8e0b-53722f8df757\") " pod="openshift-console-operator/console-operator-58897d9998-jt5jk" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.342903 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cc729\" (UniqueName: \"kubernetes.io/projected/a1998324-8e8c-49ae-8929-1ecb092efdaf-kube-api-access-cc729\") pod \"cluster-samples-operator-665b6dd947-xlngt\" (UID: \"a1998324-8e8c-49ae-8929-1ecb092efdaf\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xlngt" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.355379 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-2cmnb" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.372381 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwrxb\" (UniqueName: \"kubernetes.io/projected/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-kube-api-access-dwrxb\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.373054 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9396757-c308-44b4-82a9-bd488f0841a9-serving-cert\") pod \"etcd-operator-b45778765-dgvh6\" (UID: \"e9396757-c308-44b4-82a9-bd488f0841a9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgvh6" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.373141 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e9396757-c308-44b4-82a9-bd488f0841a9-etcd-client\") pod \"etcd-operator-b45778765-dgvh6\" (UID: \"e9396757-c308-44b4-82a9-bd488f0841a9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgvh6" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.373197 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xlngt" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.373203 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e099008-0b69-456c-a088-80d32053290b-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-nqtvv\" (UID: \"6e099008-0b69-456c-a088-80d32053290b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nqtvv" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.373591 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/792f7bfa-c3b1-4e02-b2a1-d15abbc4b3d4-node-bootstrap-token\") pod \"machine-config-server-m4hks\" (UID: \"792f7bfa-c3b1-4e02-b2a1-d15abbc4b3d4\") " pod="openshift-machine-config-operator/machine-config-server-m4hks" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.373662 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e9396757-c308-44b4-82a9-bd488f0841a9-etcd-service-ca\") pod \"etcd-operator-b45778765-dgvh6\" (UID: \"e9396757-c308-44b4-82a9-bd488f0841a9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgvh6" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.374618 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e099008-0b69-456c-a088-80d32053290b-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-nqtvv\" (UID: \"6e099008-0b69-456c-a088-80d32053290b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nqtvv" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.376262 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e9396757-c308-44b4-82a9-bd488f0841a9-etcd-service-ca\") pod \"etcd-operator-b45778765-dgvh6\" (UID: \"e9396757-c308-44b4-82a9-bd488f0841a9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgvh6" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.378889 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9396757-c308-44b4-82a9-bd488f0841a9-serving-cert\") pod \"etcd-operator-b45778765-dgvh6\" (UID: \"e9396757-c308-44b4-82a9-bd488f0841a9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgvh6" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.380139 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9396757-c308-44b4-82a9-bd488f0841a9-config\") pod \"etcd-operator-b45778765-dgvh6\" (UID: \"e9396757-c308-44b4-82a9-bd488f0841a9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgvh6" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.380772 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/792f7bfa-c3b1-4e02-b2a1-d15abbc4b3d4-certs\") pod \"machine-config-server-m4hks\" (UID: \"792f7bfa-c3b1-4e02-b2a1-d15abbc4b3d4\") " pod="openshift-machine-config-operator/machine-config-server-m4hks" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.380858 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" 
(UniqueName: \"kubernetes.io/secret/ded8dcf1-ff49-4b19-80b0-4702e95b94a3-cert\") pod \"ingress-canary-5s28q\" (UID: \"ded8dcf1-ff49-4b19-80b0-4702e95b94a3\") " pod="openshift-ingress-canary/ingress-canary-5s28q" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.380943 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/792f7bfa-c3b1-4e02-b2a1-d15abbc4b3d4-node-bootstrap-token\") pod \"machine-config-server-m4hks\" (UID: \"792f7bfa-c3b1-4e02-b2a1-d15abbc4b3d4\") " pod="openshift-machine-config-operator/machine-config-server-m4hks" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.380956 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9396757-c308-44b4-82a9-bd488f0841a9-config\") pod \"etcd-operator-b45778765-dgvh6\" (UID: \"e9396757-c308-44b4-82a9-bd488f0841a9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgvh6" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.382476 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e9396757-c308-44b4-82a9-bd488f0841a9-etcd-client\") pod \"etcd-operator-b45778765-dgvh6\" (UID: \"e9396757-c308-44b4-82a9-bd488f0841a9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgvh6" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.383188 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e9396757-c308-44b4-82a9-bd488f0841a9-etcd-ca\") pod \"etcd-operator-b45778765-dgvh6\" (UID: \"e9396757-c308-44b4-82a9-bd488f0841a9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgvh6" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.384519 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e9396757-c308-44b4-82a9-bd488f0841a9-etcd-ca\") pod \"etcd-operator-b45778765-dgvh6\" (UID: \"e9396757-c308-44b4-82a9-bd488f0841a9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgvh6" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.385753 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/792f7bfa-c3b1-4e02-b2a1-d15abbc4b3d4-certs\") pod \"machine-config-server-m4hks\" (UID: \"792f7bfa-c3b1-4e02-b2a1-d15abbc4b3d4\") " pod="openshift-machine-config-operator/machine-config-server-m4hks" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.386677 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ded8dcf1-ff49-4b19-80b0-4702e95b94a3-cert\") pod \"ingress-canary-5s28q\" (UID: \"ded8dcf1-ff49-4b19-80b0-4702e95b94a3\") " pod="openshift-ingress-canary/ingress-canary-5s28q" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.393087 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpvcp\" (UniqueName: \"kubernetes.io/projected/e99acbdd-15f8-43ef-a7fa-70a8f4f8674c-kube-api-access-jpvcp\") pod \"apiserver-7bbb656c7d-nqrm5\" (UID: \"e99acbdd-15f8-43ef-a7fa-70a8f4f8674c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.401996 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9zjc\" (UniqueName: 
\"kubernetes.io/projected/b400290b-0dae-4e47-a15f-f3ae97648175-kube-api-access-f9zjc\") pod \"authentication-operator-69f744f599-9pppp\" (UID: \"b400290b-0dae-4e47-a15f-f3ae97648175\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9pppp" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.417820 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.422514 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24zth\" (UniqueName: \"kubernetes.io/projected/47c88fe5-db06-47c0-bc1f-d072071cb750-kube-api-access-24zth\") pod \"cluster-image-registry-operator-dc59b4c8b-l8bgw\" (UID: \"47c88fe5-db06-47c0-bc1f-d072071cb750\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l8bgw" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.465431 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxjkt\" (UniqueName: \"kubernetes.io/projected/56c21f31-0db8-4876-9198-ecf1453378eb-kube-api-access-lxjkt\") pod \"apiserver-76f77b778f-8cgg4\" (UID: \"56c21f31-0db8-4876-9198-ecf1453378eb\") " pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.481123 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tm97\" (UniqueName: \"kubernetes.io/projected/1b1ea998-03e2-480d-9f41-4b3bfd50360b-kube-api-access-9tm97\") pod \"machine-approver-56656f9798-jqdxh\" (UID: \"1b1ea998-03e2-480d-9f41-4b3bfd50360b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jqdxh" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.500730 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfqcd\" (UniqueName: \"kubernetes.io/projected/f117b241-1e37-4603-bb50-aad0ee886758-kube-api-access-hfqcd\") pod \"openshift-config-operator-7777fb866f-lbtxl\" (UID: \"f117b241-1e37-4603-bb50-aad0ee886758\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbtxl" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.521677 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5g7q8\" (UniqueName: \"kubernetes.io/projected/bd5d4606-2412-4538-8745-dbab7d52cde9-kube-api-access-5g7q8\") pod \"route-controller-manager-6576b87f9c-kmjcv\" (UID: \"bd5d4606-2412-4538-8745-dbab7d52cde9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kmjcv" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.528879 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-9pppp" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.534202 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-jt5jk" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.537727 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp4b5\" (UniqueName: \"kubernetes.io/projected/af4a4ae0-0967-4331-971c-d7e44b45a031-kube-api-access-vp4b5\") pod \"downloads-7954f5f757-ddw7q\" (UID: \"af4a4ae0-0967-4331-971c-d7e44b45a031\") " pod="openshift-console/downloads-7954f5f757-ddw7q" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.549861 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jqdxh" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.563598 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/47c88fe5-db06-47c0-bc1f-d072071cb750-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-l8bgw\" (UID: \"47c88fe5-db06-47c0-bc1f-d072071cb750\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l8bgw" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.577034 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxsvw\" (UniqueName: \"kubernetes.io/projected/2834d334-6df4-46d7-afc6-390cfdcfb22f-kube-api-access-xxsvw\") pod \"controller-manager-879f6c89f-cj57h\" (UID: \"2834d334-6df4-46d7-afc6-390cfdcfb22f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cj57h" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.579808 5039 request.go:700] Waited for 1.944078096s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/serviceaccounts/machine-api-operator/token Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.596622 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l8bgw" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.598325 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxzcv\" (UniqueName: \"kubernetes.io/projected/42cf1d0f-3c54-41ad-a9a7-1b9bc1829c21-kube-api-access-lxzcv\") pod \"machine-api-operator-5694c8668f-sdf86\" (UID: \"42cf1d0f-3c54-41ad-a9a7-1b9bc1829c21\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sdf86" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.621201 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.622599 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6d55\" (UniqueName: \"kubernetes.io/projected/e1d2b6d3-73a5-4764-bc4c-5688662d85da-kube-api-access-z6d55\") pod \"openshift-apiserver-operator-796bbdcf4f-kpjp8\" (UID: \"e1d2b6d3-73a5-4764-bc4c-5688662d85da\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kpjp8" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.642146 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.652254 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xlngt"] Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.659651 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.663590 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.684569 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-ddw7q" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.685746 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-2cmnb"] Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.689553 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5"] Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.694403 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbtxl" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.705980 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-px6j7\" (UniqueName: \"kubernetes.io/projected/2b152375-2709-4538-b651-e8535098af13-kube-api-access-px6j7\") pod \"packageserver-d55dfcdfc-b6x6r\" (UID: \"2b152375-2709-4538-b651-e8535098af13\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6x6r" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.718184 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clbrb\" (UniqueName: \"kubernetes.io/projected/dc6c0c56-d942-4a79-9f24-6e649e17c3f4-kube-api-access-clbrb\") pod \"machine-config-operator-74547568cd-2crsw\" (UID: \"dc6c0c56-d942-4a79-9f24-6e649e17c3f4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2crsw" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.735601 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7j9k9\" (UniqueName: \"kubernetes.io/projected/8955599f-bac3-4f0d-a9d2-0758c098b508-kube-api-access-7j9k9\") pod \"dns-operator-744455d44c-rmmt4\" (UID: \"8955599f-bac3-4f0d-a9d2-0758c098b508\") " pod="openshift-dns-operator/dns-operator-744455d44c-rmmt4" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.744523 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.763630 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kw2r\" (UniqueName: \"kubernetes.io/projected/502c4d4e-b64b-4245-b4f2-22937a1e54ae-kube-api-access-5kw2r\") pod \"package-server-manager-789f6589d5-xpdwb\" (UID: \"502c4d4e-b64b-4245-b4f2-22937a1e54ae\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xpdwb" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.775258 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-cj57h" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.775273 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2crsw" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.781332 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/18286802-e76b-4e5e-b68b-9ff34405b8ec-bound-sa-token\") pod \"ingress-operator-5b745b69d9-kqgcq\" (UID: \"18286802-e76b-4e5e-b68b-9ff34405b8ec\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kqgcq" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.782498 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kmjcv" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.791958 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6x6r" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.792652 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kpjp8" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.802712 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv5vh\" (UniqueName: \"kubernetes.io/projected/ae6119e4-926e-4118-a675-e37898d995f6-kube-api-access-fv5vh\") pod \"service-ca-9c57cc56f-7j88g\" (UID: \"ae6119e4-926e-4118-a675-e37898d995f6\") " pod="openshift-service-ca/service-ca-9c57cc56f-7j88g" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.817178 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-sdf86" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.818464 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/438eca87-c8a4-401b-8ea4-ff982404ea2d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-x76qf\" (UID: \"438eca87-c8a4-401b-8ea4-ff982404ea2d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x76qf" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.831923 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-2cmnb" event={"ID":"c8a9040d-c9a7-48df-a786-0079713a7cdc","Type":"ContainerStarted","Data":"3e681b456647afe2d34de10f3608b1ac9a943d78d3dadd258eb17cf318629b2a"} Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.835495 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" event={"ID":"e99acbdd-15f8-43ef-a7fa-70a8f4f8674c","Type":"ContainerStarted","Data":"827c576a2f58dfcb589af97c2f3149ce155eb564dd8f788d034e560eb56cf9d0"} Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.836232 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kc64\" (UniqueName: \"kubernetes.io/projected/18286802-e76b-4e5e-b68b-9ff34405b8ec-kube-api-access-6kc64\") pod \"ingress-operator-5b745b69d9-kqgcq\" (UID: \"18286802-e76b-4e5e-b68b-9ff34405b8ec\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kqgcq" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.838687 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jqdxh" event={"ID":"1b1ea998-03e2-480d-9f41-4b3bfd50360b","Type":"ContainerStarted","Data":"75c9df04a3cedffa8e596c84388ed90b3fd6665c0d997fef55d4f52a81dbb6b9"} Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.856964 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b97e6ebb-d4e8-4bbc-ac4e-98ba0128aa1d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-pvnrm\" (UID: \"b97e6ebb-d4e8-4bbc-ac4e-98ba0128aa1d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pvnrm" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.876221 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvstf\" (UniqueName: \"kubernetes.io/projected/4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c-kube-api-access-pvstf\") pod \"collect-profiles-29496300-mkldc\" (UID: \"4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496300-mkldc" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.896301 5039 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-mzssd\" (UniqueName: \"kubernetes.io/projected/501d1ad0-71ea-4bef-8c89-8a68f523e6ec-kube-api-access-mzssd\") pod \"marketplace-operator-79b997595-gp9qj\" (UID: \"501d1ad0-71ea-4bef-8c89-8a68f523e6ec\") " pod="openshift-marketplace/marketplace-operator-79b997595-gp9qj" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.919234 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67rxf\" (UniqueName: \"kubernetes.io/projected/a4edde13-c891-4a79-8c04-ad329198bdaa-kube-api-access-67rxf\") pod \"migrator-59844c95c7-tgkf6\" (UID: \"a4edde13-c891-4a79-8c04-ad329198bdaa\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tgkf6" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.920198 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fmcqb"] Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.936430 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zk5vd\" (UniqueName: \"kubernetes.io/projected/d2655fb3-6427-447d-8b61-4d998e133f50-kube-api-access-zk5vd\") pod \"kube-storage-version-migrator-operator-b67b599dd-sghjb\" (UID: \"d2655fb3-6427-447d-8b61-4d998e133f50\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sghjb" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.944330 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-ddw7q"] Jan 30 13:06:17 crc kubenswrapper[5039]: W0130 13:06:17.953894 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9716b1fb_f7e1_4fcc_87f5_3e75cb02804c.slice/crio-e2afa0a2122744e43a1ab27f9f99ea5bdc1264cbcce5d645fcf461f726c8d4ff WatchSource:0}: Error finding container e2afa0a2122744e43a1ab27f9f99ea5bdc1264cbcce5d645fcf461f726c8d4ff: Status 404 returned error can't find the container with id e2afa0a2122744e43a1ab27f9f99ea5bdc1264cbcce5d645fcf461f726c8d4ff Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.954405 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfw8d\" (UniqueName: \"kubernetes.io/projected/1fbf2594-31f8-4172-85ba-4a63a6d18fa6-kube-api-access-rfw8d\") pod \"router-default-5444994796-jplg4\" (UID: \"1fbf2594-31f8-4172-85ba-4a63a6d18fa6\") " pod="openshift-ingress/router-default-5444994796-jplg4" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.974490 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69fb7c91-edd2-4a41-9f64-9c19d1fabd2f-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-4rnbl\" (UID: \"69fb7c91-edd2-4a41-9f64-9c19d1fabd2f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4rnbl" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.975528 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-jt5jk"] Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.000916 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-rmmt4" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.003618 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fflsq\" (UniqueName: \"kubernetes.io/projected/a391a542-f6cf-4b97-b69b-aa27a4942896-kube-api-access-fflsq\") pod \"control-plane-machine-set-operator-78cbb6b69f-gxpwf\" (UID: \"a391a542-f6cf-4b97-b69b-aa27a4942896\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gxpwf" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.007451 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gp9qj" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.016962 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6js5x\" (UniqueName: \"kubernetes.io/projected/792f7bfa-c3b1-4e02-b2a1-d15abbc4b3d4-kube-api-access-6js5x\") pod \"machine-config-server-m4hks\" (UID: \"792f7bfa-c3b1-4e02-b2a1-d15abbc4b3d4\") " pod="openshift-machine-config-operator/machine-config-server-m4hks" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.017159 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xpdwb" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.028329 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kqgcq" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.038943 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-596jc\" (UniqueName: \"kubernetes.io/projected/ffc75429-dba3-4b41-99d1-39c5b5334c0e-kube-api-access-596jc\") pod \"catalog-operator-68c6474976-klzdg\" (UID: \"ffc75429-dba3-4b41-99d1-39c5b5334c0e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klzdg" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.049116 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l8bgw"] Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.059236 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nddrs\" (UniqueName: \"kubernetes.io/projected/6e099008-0b69-456c-a088-80d32053290b-kube-api-access-nddrs\") pod \"openshift-controller-manager-operator-756b6f6bc6-nqtvv\" (UID: \"6e099008-0b69-456c-a088-80d32053290b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nqtvv" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.079208 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-9pppp"] Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.079603 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xsmz\" (UniqueName: \"kubernetes.io/projected/e9396757-c308-44b4-82a9-bd488f0841a9-kube-api-access-9xsmz\") pod \"etcd-operator-b45778765-dgvh6\" (UID: \"e9396757-c308-44b4-82a9-bd488f0841a9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgvh6" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.086142 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-7j88g" Jan 30 13:06:18 crc kubenswrapper[5039]: W0130 13:06:18.086310 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47c88fe5_db06_47c0_bc1f_d072071cb750.slice/crio-4648ad6b8f1974a4ee5bbf9b2109b7265d126de9805c50d5c96e25483b9b97ad WatchSource:0}: Error finding container 4648ad6b8f1974a4ee5bbf9b2109b7265d126de9805c50d5c96e25483b9b97ad: Status 404 returned error can't find the container with id 4648ad6b8f1974a4ee5bbf9b2109b7265d126de9805c50d5c96e25483b9b97ad Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.096125 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8z6v\" (UniqueName: \"kubernetes.io/projected/aa061666-64af-4cf4-aeb5-73faa25d1c22-kube-api-access-q8z6v\") pod \"machine-config-controller-84d6567774-82nqz\" (UID: \"aa061666-64af-4cf4-aeb5-73faa25d1c22\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-82nqz" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.100262 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496300-mkldc" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.108481 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x76qf" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.116090 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pvnrm" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.120642 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrfln\" (UniqueName: \"kubernetes.io/projected/b67c1f74-8845-4dbd-9e2b-df446569a88a-kube-api-access-rrfln\") pod \"csi-hostpathplugin-5t9bm\" (UID: \"b67c1f74-8845-4dbd-9e2b-df446569a88a\") " pod="hostpath-provisioner/csi-hostpathplugin-5t9bm" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.124881 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-jplg4" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.134252 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sghjb" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.139622 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cf88f\" (UniqueName: \"kubernetes.io/projected/7bdbdc1f-b957-4eef-a61d-692ed8717de1-kube-api-access-cf88f\") pod \"service-ca-operator-777779d784-tj2zc\" (UID: \"7bdbdc1f-b957-4eef-a61d-692ed8717de1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tj2zc" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.144310 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gxpwf" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.157097 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-lbtxl"] Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.162625 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sl8mr\" (UniqueName: \"kubernetes.io/projected/df9477c3-e855-4878-bb03-ffecb6abdc2d-kube-api-access-sl8mr\") pod \"multus-admission-controller-857f4d67dd-gj29c\" (UID: \"df9477c3-e855-4878-bb03-ffecb6abdc2d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-gj29c" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.177710 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tj2zc" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.185193 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-82nqz" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.201922 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-gj29c" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.216000 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2858\" (UniqueName: \"kubernetes.io/projected/ded8dcf1-ff49-4b19-80b0-4702e95b94a3-kube-api-access-d2858\") pod \"ingress-canary-5s28q\" (UID: \"ded8dcf1-ff49-4b19-80b0-4702e95b94a3\") " pod="openshift-ingress-canary/ingress-canary-5s28q" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.216424 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tgkf6" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.217398 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxj9n\" (UniqueName: \"kubernetes.io/projected/920b1dd0-97f0-4bc2-a9ca-b518c314c29b-kube-api-access-hxj9n\") pod \"olm-operator-6b444d44fb-sxg45\" (UID: \"920b1dd0-97f0-4bc2-a9ca-b518c314c29b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sxg45" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.244310 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nqtvv" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.244414 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klzdg" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.244495 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4rnbl" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.249649 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-dgvh6" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.268208 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-5t9bm" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.279848 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-5s28q" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.286383 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-m4hks" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.309677 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0185664b-147e-4a84-9dc0-31ea880e9db4-installation-pull-secrets\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.309756 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0185664b-147e-4a84-9dc0-31ea880e9db4-registry-certificates\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.309835 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0185664b-147e-4a84-9dc0-31ea880e9db4-ca-trust-extracted\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.309959 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:18 crc kubenswrapper[5039]: E0130 13:06:18.314724 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:18.814694673 +0000 UTC m=+143.475375900 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.315829 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0185664b-147e-4a84-9dc0-31ea880e9db4-trusted-ca\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.315922 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0185664b-147e-4a84-9dc0-31ea880e9db4-bound-sa-token\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.315957 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8lmj\" (UniqueName: \"kubernetes.io/projected/0185664b-147e-4a84-9dc0-31ea880e9db4-kube-api-access-r8lmj\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.316076 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0185664b-147e-4a84-9dc0-31ea880e9db4-registry-tls\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.318688 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-8cgg4"] Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.360002 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kpjp8"] Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.392212 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-sdf86"] Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.399365 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6x6r"] Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.417133 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:18 crc kubenswrapper[5039]: E0130 13:06:18.417290 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:18.917262122 +0000 UTC m=+143.577943349 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.417369 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0185664b-147e-4a84-9dc0-31ea880e9db4-registry-tls\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5"
Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.417569 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0185664b-147e-4a84-9dc0-31ea880e9db4-installation-pull-secrets\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5"
Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.417590 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0185664b-147e-4a84-9dc0-31ea880e9db4-registry-certificates\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5"
Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.417632 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0185664b-147e-4a84-9dc0-31ea880e9db4-ca-trust-extracted\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5"
Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.420907 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1b2c52b1-952b-4c00-b9f3-29cc5957a53d-metrics-tls\") pod \"dns-default-lgzmc\" (UID: \"1b2c52b1-952b-4c00-b9f3-29cc5957a53d\") " pod="openshift-dns/dns-default-lgzmc"
Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.421159 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0185664b-147e-4a84-9dc0-31ea880e9db4-ca-trust-extracted\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5"
Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.421228 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5"
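Every mount and unmount attempt for pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 in this stretch fails the same way: the volume manager cannot build a CSI client because kubevirt.io.hostpath-provisioner has not yet registered with the kubelet (its node plugin pod, csi-hostpathplugin-5t9bm, is itself still waiting for a sandbox earlier in the log), and nestedpendingoperations then blocks retries for a fixed 500ms window. A minimal Go sketch of that lookup-then-back-off pattern, using illustrative stand-ins rather than the kubelet's real types:

// Sketch of the failure pattern in the surrounding entries: look the CSI
// driver up in the registry of node-registered plugins; on a miss, report the
// error and refuse further retries for a fixed back-off window.
package main

import (
	"fmt"
	"time"
)

// registeredDrivers plays the role of the kubelet's registered-CSI-driver
// list; "kubevirt.io.hostpath-provisioner" stays absent until its node plugin
// comes up and registers itself over the kubelet's plugin socket.
var registeredDrivers = map[string]bool{}

func mountDevice(driver string) error {
	if !registeredDrivers[driver] {
		return fmt.Errorf("driver name %s not found in the list of registered CSI drivers", driver)
	}
	return nil
}

func main() {
	const backoff = 500 * time.Millisecond // the durationBeforeRetry seen in the log
	for attempt := 1; attempt <= 3; attempt++ {
		if err := mountDevice("kubevirt.io.hostpath-provisioner"); err != nil {
			fmt.Printf("attempt %d: %v; no retries permitted for %s\n", attempt, err, backoff)
			time.Sleep(backoff)
			continue
		}
		break
	}
}

The repeated identical errors at 500ms intervals below are therefore expected steady-state behavior until the driver registers, at which point the same operations succeed without any other change.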
Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.421476 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1b2c52b1-952b-4c00-b9f3-29cc5957a53d-config-volume\") pod \"dns-default-lgzmc\" (UID: \"1b2c52b1-952b-4c00-b9f3-29cc5957a53d\") " pod="openshift-dns/dns-default-lgzmc"
Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.421669 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0185664b-147e-4a84-9dc0-31ea880e9db4-trusted-ca\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5"
Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.421745 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5b8m\" (UniqueName: \"kubernetes.io/projected/1b2c52b1-952b-4c00-b9f3-29cc5957a53d-kube-api-access-v5b8m\") pod \"dns-default-lgzmc\" (UID: \"1b2c52b1-952b-4c00-b9f3-29cc5957a53d\") " pod="openshift-dns/dns-default-lgzmc"
Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.421818 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0185664b-147e-4a84-9dc0-31ea880e9db4-bound-sa-token\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5"
Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.421844 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8lmj\" (UniqueName: \"kubernetes.io/projected/0185664b-147e-4a84-9dc0-31ea880e9db4-kube-api-access-r8lmj\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5"
Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.423392 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0185664b-147e-4a84-9dc0-31ea880e9db4-installation-pull-secrets\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5"
Jan 30 13:06:18 crc kubenswrapper[5039]: E0130 13:06:18.423472 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:18.923452221 +0000 UTC m=+143.584133558 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.424544 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0185664b-147e-4a84-9dc0-31ea880e9db4-trusted-ca\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.425316 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0185664b-147e-4a84-9dc0-31ea880e9db4-registry-certificates\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.432116 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0185664b-147e-4a84-9dc0-31ea880e9db4-registry-tls\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.447081 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-2crsw"] Jan 30 13:06:18 crc kubenswrapper[5039]: W0130 13:06:18.452457 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42cf1d0f_3c54_41ad_a9a7_1b9bc1829c21.slice/crio-244ea75db5000f73fc65e2586d76e9a0fccb1f6d2d433e4caf377da4886635ce WatchSource:0}: Error finding container 244ea75db5000f73fc65e2586d76e9a0fccb1f6d2d433e4caf377da4886635ce: Status 404 returned error can't find the container with id 244ea75db5000f73fc65e2586d76e9a0fccb1f6d2d433e4caf377da4886635ce Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.462277 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8lmj\" (UniqueName: \"kubernetes.io/projected/0185664b-147e-4a84-9dc0-31ea880e9db4-kube-api-access-r8lmj\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.476040 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cj57h"] Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.479426 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0185664b-147e-4a84-9dc0-31ea880e9db4-bound-sa-token\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.494678 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sxg45" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.495065 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-kqgcq"] Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.523212 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.523310 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gp9qj"] Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.523387 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5b8m\" (UniqueName: \"kubernetes.io/projected/1b2c52b1-952b-4c00-b9f3-29cc5957a53d-kube-api-access-v5b8m\") pod \"dns-default-lgzmc\" (UID: \"1b2c52b1-952b-4c00-b9f3-29cc5957a53d\") " pod="openshift-dns/dns-default-lgzmc" Jan 30 13:06:18 crc kubenswrapper[5039]: E0130 13:06:18.523502 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:19.023485379 +0000 UTC m=+143.684166606 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.523536 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1b2c52b1-952b-4c00-b9f3-29cc5957a53d-metrics-tls\") pod \"dns-default-lgzmc\" (UID: \"1b2c52b1-952b-4c00-b9f3-29cc5957a53d\") " pod="openshift-dns/dns-default-lgzmc" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.523558 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.523601 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1b2c52b1-952b-4c00-b9f3-29cc5957a53d-config-volume\") pod \"dns-default-lgzmc\" (UID: \"1b2c52b1-952b-4c00-b9f3-29cc5957a53d\") " pod="openshift-dns/dns-default-lgzmc" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.524296 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1b2c52b1-952b-4c00-b9f3-29cc5957a53d-config-volume\") pod \"dns-default-lgzmc\" 
(UID: \"1b2c52b1-952b-4c00-b9f3-29cc5957a53d\") " pod="openshift-dns/dns-default-lgzmc" Jan 30 13:06:18 crc kubenswrapper[5039]: E0130 13:06:18.524435 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:19.024420391 +0000 UTC m=+143.685101618 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.528744 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1b2c52b1-952b-4c00-b9f3-29cc5957a53d-metrics-tls\") pod \"dns-default-lgzmc\" (UID: \"1b2c52b1-952b-4c00-b9f3-29cc5957a53d\") " pod="openshift-dns/dns-default-lgzmc" Jan 30 13:06:18 crc kubenswrapper[5039]: W0130 13:06:18.529384 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc6c0c56_d942_4a79_9f24_6e649e17c3f4.slice/crio-fa1f3a420c58a4075da27b54cea10b90b60b7242c0cd2d8d896f3b740836b443 WatchSource:0}: Error finding container fa1f3a420c58a4075da27b54cea10b90b60b7242c0cd2d8d896f3b740836b443: Status 404 returned error can't find the container with id fa1f3a420c58a4075da27b54cea10b90b60b7242c0cd2d8d896f3b740836b443 Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.580462 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5b8m\" (UniqueName: \"kubernetes.io/projected/1b2c52b1-952b-4c00-b9f3-29cc5957a53d-kube-api-access-v5b8m\") pod \"dns-default-lgzmc\" (UID: \"1b2c52b1-952b-4c00-b9f3-29cc5957a53d\") " pod="openshift-dns/dns-default-lgzmc" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.614671 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-lgzmc" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.626299 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:18 crc kubenswrapper[5039]: E0130 13:06:18.627155 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:19.127130534 +0000 UTC m=+143.787811771 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:18 crc kubenswrapper[5039]: W0130 13:06:18.647619 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod501d1ad0_71ea_4bef_8c89_8a68f523e6ec.slice/crio-0ea6819fb024f8850823104053709018d552f675cdc6fae43eae6c1c67a603b8 WatchSource:0}: Error finding container 0ea6819fb024f8850823104053709018d552f675cdc6fae43eae6c1c67a603b8: Status 404 returned error can't find the container with id 0ea6819fb024f8850823104053709018d552f675cdc6fae43eae6c1c67a603b8 Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.664022 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-rmmt4"] Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.670892 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kmjcv"] Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.725588 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xpdwb"] Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.730337 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:18 crc kubenswrapper[5039]: E0130 13:06:18.730804 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:19.230789989 +0000 UTC m=+143.891471216 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:18 crc kubenswrapper[5039]: W0130 13:06:18.790168 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1fbf2594_31f8_4172_85ba_4a63a6d18fa6.slice/crio-537bac9c38a325469dd75e06aea794dd7b114056e92a62e916a9beb06821c980 WatchSource:0}: Error finding container 537bac9c38a325469dd75e06aea794dd7b114056e92a62e916a9beb06821c980: Status 404 returned error can't find the container with id 537bac9c38a325469dd75e06aea794dd7b114056e92a62e916a9beb06821c980 Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.832267 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:18 crc kubenswrapper[5039]: E0130 13:06:18.832385 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:19.332367334 +0000 UTC m=+143.993048561 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.832602 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:18 crc kubenswrapper[5039]: E0130 13:06:18.832871 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:19.332840656 +0000 UTC m=+143.993521883 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.850063 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-2cmnb" event={"ID":"c8a9040d-c9a7-48df-a786-0079713a7cdc","Type":"ContainerStarted","Data":"d46cc435c83b023667cf88466639f9b10a2751c9a570724918ae8424a5c7e52d"} Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.853602 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" event={"ID":"56c21f31-0db8-4876-9198-ecf1453378eb","Type":"ContainerStarted","Data":"1cf2132a7a4a72c7b2218a7dd4ae9b53c51b9b43c91f8d9c0854278a8e9d0172"} Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.854557 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6x6r" event={"ID":"2b152375-2709-4538-b651-e8535098af13","Type":"ContainerStarted","Data":"c3c36a9b396afb63750aba582890799b9dc6e0e313d537a42b5fc3a0576c5970"} Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.855790 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" event={"ID":"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c","Type":"ContainerStarted","Data":"e2afa0a2122744e43a1ab27f9f99ea5bdc1264cbcce5d645fcf461f726c8d4ff"} Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.858265 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kqgcq" event={"ID":"18286802-e76b-4e5e-b68b-9ff34405b8ec","Type":"ContainerStarted","Data":"dd0fa0448f12b88bfbb0bf81abf51e6250f7e852ddd8218cfc00883c23da86eb"} Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.858793 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-9pppp" event={"ID":"b400290b-0dae-4e47-a15f-f3ae97648175","Type":"ContainerStarted","Data":"da9d9230ea5c6083ad726bce95755ee628e65e0261bb29ce104e2d98d74c6cdd"} Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.860316 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l8bgw" event={"ID":"47c88fe5-db06-47c0-bc1f-d072071cb750","Type":"ContainerStarted","Data":"4648ad6b8f1974a4ee5bbf9b2109b7265d126de9805c50d5c96e25483b9b97ad"} Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.861424 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gp9qj" event={"ID":"501d1ad0-71ea-4bef-8c89-8a68f523e6ec","Type":"ContainerStarted","Data":"0ea6819fb024f8850823104053709018d552f675cdc6fae43eae6c1c67a603b8"} Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.866028 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2crsw" event={"ID":"dc6c0c56-d942-4a79-9f24-6e649e17c3f4","Type":"ContainerStarted","Data":"fa1f3a420c58a4075da27b54cea10b90b60b7242c0cd2d8d896f3b740836b443"} Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 
13:06:18.867297 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-sdf86" event={"ID":"42cf1d0f-3c54-41ad-a9a7-1b9bc1829c21","Type":"ContainerStarted","Data":"244ea75db5000f73fc65e2586d76e9a0fccb1f6d2d433e4caf377da4886635ce"} Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.868291 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" event={"ID":"e99acbdd-15f8-43ef-a7fa-70a8f4f8674c","Type":"ContainerStarted","Data":"00c21a37172d894e74cd093254d30a527fd1e2f800ee8cebc726a87f84baf268"} Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.868856 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xlngt" event={"ID":"a1998324-8e8c-49ae-8929-1ecb092efdaf","Type":"ContainerStarted","Data":"3b320b35acacb21f210677c955a5ad28b78142a7b7bb4f4a3cb7752daedecb96"} Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.869465 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbtxl" event={"ID":"f117b241-1e37-4603-bb50-aad0ee886758","Type":"ContainerStarted","Data":"5ae3a3f992a5031038936971e01c62479bfa03c1757ad1f31db87b69ba304bdb"} Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.870209 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-jt5jk" event={"ID":"0ace130b-bc4e-4654-8e0b-53722f8df757","Type":"ContainerStarted","Data":"48d5bfc4bb5d9f0fc7d4c95f1376a08783ff873633c672b5905cfd710336449a"} Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.871970 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-jplg4" event={"ID":"1fbf2594-31f8-4172-85ba-4a63a6d18fa6","Type":"ContainerStarted","Data":"537bac9c38a325469dd75e06aea794dd7b114056e92a62e916a9beb06821c980"} Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.872929 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jqdxh" event={"ID":"1b1ea998-03e2-480d-9f41-4b3bfd50360b","Type":"ContainerStarted","Data":"702ca2de8bb0e3a52f42197daf6110f56b4c0eccf1046bfca51fa69463e91831"} Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.878228 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-cj57h" event={"ID":"2834d334-6df4-46d7-afc6-390cfdcfb22f","Type":"ContainerStarted","Data":"c1989ba7ea2f4b8b7a01d3ddedfb906d00ef966d8777591dbcf3cc6d99cf44c4"} Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.878834 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-ddw7q" event={"ID":"af4a4ae0-0967-4331-971c-d7e44b45a031","Type":"ContainerStarted","Data":"24715762605c8c9db57cb512e3bef05c31a883200a4c710cc1abfe726afadbbe"} Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.879347 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kpjp8" event={"ID":"e1d2b6d3-73a5-4764-bc4c-5688662d85da","Type":"ContainerStarted","Data":"29c5087b72595bf50178f78001d4277939a2fba1dc0e609edac41d76a8695eab"} Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.927856 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x76qf"] Jan 30 
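The run of "SyncLoop (PLEG): event for pod" entries above is the pod lifecycle event generator observing the runtime and emitting one ContainerStarted event per new sandbox or container ID, each of which queues a sync for its pod. A simplified Go sketch of the event shape and dispatch loop; the types are stand-ins for the kubelet's internal ones (the real event type lives in pkg/kubelet/pleg):

// PodLifecycleEvent mirrors the {ID, Type, Data} tuples printed in the log.
package main

import "fmt"

type PodLifecycleEvent struct {
	ID   string // pod UID
	Type string // e.g. "ContainerStarted"
	Data string // container or sandbox ID
}

// syncLoop consumes PLEG events and triggers a per-pod sync, which is what
// each "SyncLoop (PLEG): event for pod" line records.
func syncLoop(events <-chan PodLifecycleEvent) {
	for e := range events {
		fmt.Printf("SyncLoop (PLEG): event for pod %s: %s %s\n", e.ID, e.Type, e.Data)
	}
}

func main() {
	ch := make(chan PodLifecycleEvent, 1)
	// IDs copied from the surrounding log entries.
	ch <- PodLifecycleEvent{
		ID:   "c8a9040d-c9a7-48df-a786-0079713a7cdc",
		Type: "ContainerStarted",
		Data: "d46cc435c83b023667cf88466639f9b10a2751c9a570724918ae8424a5c7e52d",
	}
	close(ch)
	syncLoop(ch)
}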
Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.933930 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:06:18 crc kubenswrapper[5039]: E0130 13:06:18.934153 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:19.434123574 +0000 UTC m=+144.094804801 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.934336 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5"
Jan 30 13:06:18 crc kubenswrapper[5039]: E0130 13:06:18.934690 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:19.434682967 +0000 UTC m=+144.095364194 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.960602 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pvnrm"]
Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.036652 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:06:19 crc kubenswrapper[5039]: E0130 13:06:19.037033 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:19.53700324 +0000 UTC m=+144.197684467 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.045571 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-7j88g"] Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.047856 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-dgvh6"] Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.049963 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496300-mkldc"] Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.057595 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-tj2zc"] Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.138579 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:19 crc kubenswrapper[5039]: E0130 13:06:19.139106 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:19.639085637 +0000 UTC m=+144.299766944 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:19 crc kubenswrapper[5039]: W0130 13:06:19.143596 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb97e6ebb_d4e8_4bbc_ac4e_98ba0128aa1d.slice/crio-709570ca57380f207d4b4972431ec11cd3423dcc36fc9c80084b07ee7aa1680c WatchSource:0}: Error finding container 709570ca57380f207d4b4972431ec11cd3423dcc36fc9c80084b07ee7aa1680c: Status 404 returned error can't find the container with id 709570ca57380f207d4b4972431ec11cd3423dcc36fc9c80084b07ee7aa1680c Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.230083 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sghjb"] Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.240310 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:19 crc kubenswrapper[5039]: E0130 13:06:19.240572 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:19.74054805 +0000 UTC m=+144.401229297 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.240738 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:19 crc kubenswrapper[5039]: E0130 13:06:19.241145 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:19.741134384 +0000 UTC m=+144.401815621 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.243570 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-5s28q"] Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.344610 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:19 crc kubenswrapper[5039]: E0130 13:06:19.344848 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:19.84482139 +0000 UTC m=+144.505502617 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.344928 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:19 crc kubenswrapper[5039]: E0130 13:06:19.345273 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:19.84526451 +0000 UTC m=+144.505945737 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.374380 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-tgkf6"] Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.446155 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:19 crc kubenswrapper[5039]: E0130 13:06:19.447053 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:19.94703424 +0000 UTC m=+144.607715467 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:19 crc kubenswrapper[5039]: W0130 13:06:19.468579 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2655fb3_6427_447d_8b61_4d998e133f50.slice/crio-5081072f5aeca56f4aad2fe78cc60b62b91f0719f82fbbcbbebef7b6d9bc7f0c WatchSource:0}: Error finding container 5081072f5aeca56f4aad2fe78cc60b62b91f0719f82fbbcbbebef7b6d9bc7f0c: Status 404 returned error can't find the container with id 5081072f5aeca56f4aad2fe78cc60b62b91f0719f82fbbcbbebef7b6d9bc7f0c Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.469387 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-5t9bm"] Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.549958 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:19 crc kubenswrapper[5039]: E0130 13:06:19.550299 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:20.050284095 +0000 UTC m=+144.710965322 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:19 crc kubenswrapper[5039]: W0130 13:06:19.579845 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb67c1f74_8845_4dbd_9e2b_df446569a88a.slice/crio-a750365db3d659246c60fcf61819eeba69cc4dfda04b624a0c9dd6c36d8e6bef WatchSource:0}: Error finding container a750365db3d659246c60fcf61819eeba69cc4dfda04b624a0c9dd6c36d8e6bef: Status 404 returned error can't find the container with id a750365db3d659246c60fcf61819eeba69cc4dfda04b624a0c9dd6c36d8e6bef Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.651323 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:19 crc kubenswrapper[5039]: E0130 13:06:19.651497 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:20.151473401 +0000 UTC m=+144.812154628 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.651751 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:19 crc kubenswrapper[5039]: E0130 13:06:19.652260 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:20.15224896 +0000 UTC m=+144.812930187 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.687843 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klzdg"] Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.749863 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gxpwf"] Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.753877 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:19 crc kubenswrapper[5039]: E0130 13:06:19.754018 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:20.253968848 +0000 UTC m=+144.914650075 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.754314 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:19 crc kubenswrapper[5039]: E0130 13:06:19.754619 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:20.254610114 +0000 UTC m=+144.915291341 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.756921 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-gj29c"] Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.823175 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-82nqz"] Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.855221 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:19 crc kubenswrapper[5039]: E0130 13:06:19.855619 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:20.355590375 +0000 UTC m=+145.016271602 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.855797 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:19 crc kubenswrapper[5039]: E0130 13:06:19.856100 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:20.356092507 +0000 UTC m=+145.016773734 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.886976 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kmjcv" event={"ID":"bd5d4606-2412-4538-8745-dbab7d52cde9","Type":"ContainerStarted","Data":"d60fc3b8d8ed24515335919a12303771c5bf7a63a5e1dd33ab85006cd1be0e0c"} Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.888096 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-m4hks" event={"ID":"792f7bfa-c3b1-4e02-b2a1-d15abbc4b3d4","Type":"ContainerStarted","Data":"feffe55d9d93d47e69a17495eac7d084bc44a0039d8f73777ac6465396086136"} Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.889547 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496300-mkldc" event={"ID":"4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c","Type":"ContainerStarted","Data":"e066897b0d1d8b0a82a2e030d89bcace2cb609cf3bd02499aac4837fe1b6e7b4"} Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.890639 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x76qf" event={"ID":"438eca87-c8a4-401b-8ea4-ff982404ea2d","Type":"ContainerStarted","Data":"e55c46a1048c8ecee7fa3e55b2dd6bac4687b7cdde13027f317bd16f38ebbf35"} Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.891851 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l8bgw" event={"ID":"47c88fe5-db06-47c0-bc1f-d072071cb750","Type":"ContainerStarted","Data":"407e6d9f441f53411068cda938bfc0a2636d3a3e96a01e80efdc61267b19c060"} Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.893403 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xlngt" event={"ID":"a1998324-8e8c-49ae-8929-1ecb092efdaf","Type":"ContainerStarted","Data":"63e7c835849d558759aef92008693949d9a0b39b1238833bef4862381dd30e67"} Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.894927 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" event={"ID":"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c","Type":"ContainerStarted","Data":"c2cbd999b24ced511ffce32f502fc20383596cd8e550167b572fbdd97010f6ee"} Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.897900 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-9pppp" event={"ID":"b400290b-0dae-4e47-a15f-f3ae97648175","Type":"ContainerStarted","Data":"c658273dee9543e154a6aa5fb0afb633dbe19c4bd9e2a97ee95bdee63f91ae21"} Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.899169 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kpjp8" 
event={"ID":"e1d2b6d3-73a5-4764-bc4c-5688662d85da","Type":"ContainerStarted","Data":"d8d9397c48266f7ef0adf50ee20d0a2666b46637a0dc23714e5536293d910fc7"} Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.900254 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-rmmt4" event={"ID":"8955599f-bac3-4f0d-a9d2-0758c098b508","Type":"ContainerStarted","Data":"25415cd1c75eec4a291354662b459478119507508c5d58106ca3197f3e6602d3"} Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.901333 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tj2zc" event={"ID":"7bdbdc1f-b957-4eef-a61d-692ed8717de1","Type":"ContainerStarted","Data":"1cea3a4b12fbbfa9c5f422c1f4587b859f141bccb5970994cf1b4711b027bc98"} Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.905265 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-ddw7q" event={"ID":"af4a4ae0-0967-4331-971c-d7e44b45a031","Type":"ContainerStarted","Data":"19b823e2d11cb262e7d94571a2b46c8aa31ef34aac2c4ec74a3e805f1ad4107e"} Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.906044 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-ddw7q" Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.909436 5039 patch_prober.go:28] interesting pod/downloads-7954f5f757-ddw7q container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.909484 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ddw7q" podUID="af4a4ae0-0967-4331-971c-d7e44b45a031" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.910827 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pvnrm" event={"ID":"b97e6ebb-d4e8-4bbc-ac4e-98ba0128aa1d","Type":"ContainerStarted","Data":"709570ca57380f207d4b4972431ec11cd3423dcc36fc9c80084b07ee7aa1680c"} Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.913523 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klzdg" event={"ID":"ffc75429-dba3-4b41-99d1-39c5b5334c0e","Type":"ContainerStarted","Data":"8ef2044db720538d41fed2d9a32eb838ced2ca58a22180bdd266c38a78c013e7"} Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.916547 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-5s28q" event={"ID":"ded8dcf1-ff49-4b19-80b0-4702e95b94a3","Type":"ContainerStarted","Data":"1bc72428d2ea6399acfc56e096a6d073a08490b87c1b82dd169d9fde612b627c"} Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.918189 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xpdwb" event={"ID":"502c4d4e-b64b-4245-b4f2-22937a1e54ae","Type":"ContainerStarted","Data":"2f4389f132a9653cfcf93661ee801ccd692e6b789d777c3e65cb74899e0071bf"} Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.918921 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tgkf6" event={"ID":"a4edde13-c891-4a79-8c04-ad329198bdaa","Type":"ContainerStarted","Data":"57f4cb6180510c6415376359e31099e26c95df736977c68e8c39cf116ee462e3"} Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.919472 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5t9bm" event={"ID":"b67c1f74-8845-4dbd-9e2b-df446569a88a","Type":"ContainerStarted","Data":"a750365db3d659246c60fcf61819eeba69cc4dfda04b624a0c9dd6c36d8e6bef"} Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.920136 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-7j88g" event={"ID":"ae6119e4-926e-4118-a675-e37898d995f6","Type":"ContainerStarted","Data":"432147953aeb5dc878ef562fc18aadaf03d21d0ac444c5faa887295843d48a36"} Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.920925 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sghjb" event={"ID":"d2655fb3-6427-447d-8b61-4d998e133f50","Type":"ContainerStarted","Data":"5081072f5aeca56f4aad2fe78cc60b62b91f0719f82fbbcbbebef7b6d9bc7f0c"} Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.921618 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-dgvh6" event={"ID":"e9396757-c308-44b4-82a9-bd488f0841a9","Type":"ContainerStarted","Data":"12e22447c4af77e14f19d4ac377db05813c93aed260d6571820e47a8c9d60bcb"} Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.922601 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-jt5jk" event={"ID":"0ace130b-bc4e-4654-8e0b-53722f8df757","Type":"ContainerStarted","Data":"36db64f5a90f89acb31c350a4199598100ae666865aeb5bc401781f0315e6a96"} Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.923611 5039 generic.go:334] "Generic (PLEG): container finished" podID="e99acbdd-15f8-43ef-a7fa-70a8f4f8674c" containerID="00c21a37172d894e74cd093254d30a527fd1e2f800ee8cebc726a87f84baf268" exitCode=0 Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.923726 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" event={"ID":"e99acbdd-15f8-43ef-a7fa-70a8f4f8674c","Type":"ContainerDied","Data":"00c21a37172d894e74cd093254d30a527fd1e2f800ee8cebc726a87f84baf268"} Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.956917 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:19 crc kubenswrapper[5039]: E0130 13:06:19.957004 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:20.456987985 +0000 UTC m=+145.117669212 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.957110 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:19 crc kubenswrapper[5039]: E0130 13:06:19.957411 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:20.457402795 +0000 UTC m=+145.118084022 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.985990 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nqtvv"] Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.989792 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4rnbl"] Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.991257 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sxg45"] Jan 30 13:06:20 crc kubenswrapper[5039]: I0130 13:06:20.059906 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-ddw7q" podStartSLOduration=124.059874022 podStartE2EDuration="2m4.059874022s" podCreationTimestamp="2026-01-30 13:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:20.055137908 +0000 UTC m=+144.715819145" watchObservedRunningTime="2026-01-30 13:06:20.059874022 +0000 UTC m=+144.720555319" Jan 30 13:06:20 crc kubenswrapper[5039]: I0130 13:06:20.061081 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:20 crc kubenswrapper[5039]: E0130 13:06:20.062689 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:20.562664019 +0000 UTC m=+145.223345276 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:20 crc kubenswrapper[5039]: W0130 13:06:20.137961 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69fb7c91_edd2_4a41_9f64_9c19d1fabd2f.slice/crio-6e96f74e4fd48207ecdbc7b342506b50f38649743e8c1e96fc75414b49d7ed02 WatchSource:0}: Error finding container 6e96f74e4fd48207ecdbc7b342506b50f38649743e8c1e96fc75414b49d7ed02: Status 404 returned error can't find the container with id 6e96f74e4fd48207ecdbc7b342506b50f38649743e8c1e96fc75414b49d7ed02 Jan 30 13:06:20 crc kubenswrapper[5039]: W0130 13:06:20.139997 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod920b1dd0_97f0_4bc2_a9ca_b518c314c29b.slice/crio-47c07a11f22d7e2e9b4737c10a10aa3e0e481662102560f12fa6046bb803dc46 WatchSource:0}: Error finding container 47c07a11f22d7e2e9b4737c10a10aa3e0e481662102560f12fa6046bb803dc46: Status 404 returned error can't find the container with id 47c07a11f22d7e2e9b4737c10a10aa3e0e481662102560f12fa6046bb803dc46 Jan 30 13:06:20 crc kubenswrapper[5039]: W0130 13:06:20.140279 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e099008_0b69_456c_a088_80d32053290b.slice/crio-70805e9450a73336f909b945febf157e64216ee5ea13dcf4160ea9acfb5fb73d WatchSource:0}: Error finding container 70805e9450a73336f909b945febf157e64216ee5ea13dcf4160ea9acfb5fb73d: Status 404 returned error can't find the container with id 70805e9450a73336f909b945febf157e64216ee5ea13dcf4160ea9acfb5fb73d Jan 30 13:06:20 crc kubenswrapper[5039]: W0130 13:06:20.142537 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa061666_64af_4cf4_aeb5_73faa25d1c22.slice/crio-3dfbc7ef6e21d8fb3b02585f1453c4ab471b287276ee4497d3b3f5986402f744 WatchSource:0}: Error finding container 3dfbc7ef6e21d8fb3b02585f1453c4ab471b287276ee4497d3b3f5986402f744: Status 404 returned error can't find the container with id 3dfbc7ef6e21d8fb3b02585f1453c4ab471b287276ee4497d3b3f5986402f744 Jan 30 13:06:20 crc kubenswrapper[5039]: I0130 13:06:20.160576 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-lgzmc"] Jan 30 13:06:20 crc kubenswrapper[5039]: I0130 13:06:20.162907 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:20 crc kubenswrapper[5039]: E0130 13:06:20.164286 5039 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:20.664255135 +0000 UTC m=+145.324936362 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:20 crc kubenswrapper[5039]: I0130 13:06:20.265429 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:20 crc kubenswrapper[5039]: E0130 13:06:20.265882 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:20.765863581 +0000 UTC m=+145.426544808 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:20 crc kubenswrapper[5039]: I0130 13:06:20.331290 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-2cmnb" podStartSLOduration=124.331272115 podStartE2EDuration="2m4.331272115s" podCreationTimestamp="2026-01-30 13:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:20.296441617 +0000 UTC m=+144.957122874" watchObservedRunningTime="2026-01-30 13:06:20.331272115 +0000 UTC m=+144.991953342" Jan 30 13:06:20 crc kubenswrapper[5039]: I0130 13:06:20.367239 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:20 crc kubenswrapper[5039]: E0130 13:06:20.367654 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:20.86763321 +0000 UTC m=+145.528314457 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:20 crc kubenswrapper[5039]: I0130 13:06:20.468381 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:20 crc kubenswrapper[5039]: E0130 13:06:20.469061 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:20.969042901 +0000 UTC m=+145.629724138 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:20 crc kubenswrapper[5039]: I0130 13:06:20.570323 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:20 crc kubenswrapper[5039]: E0130 13:06:20.570631 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:21.070616026 +0000 UTC m=+145.731297253 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:20 crc kubenswrapper[5039]: I0130 13:06:20.671768 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:20 crc kubenswrapper[5039]: E0130 13:06:20.672344 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:21.172319625 +0000 UTC m=+145.833000862 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:20 crc kubenswrapper[5039]: W0130 13:06:20.739288 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b2c52b1_952b_4c00_b9f3_29cc5957a53d.slice/crio-9751c4ff30007f8b4dbb9f54f4ae013a82a9fe4550a743041b10729a4b2ff91a WatchSource:0}: Error finding container 9751c4ff30007f8b4dbb9f54f4ae013a82a9fe4550a743041b10729a4b2ff91a: Status 404 returned error can't find the container with id 9751c4ff30007f8b4dbb9f54f4ae013a82a9fe4550a743041b10729a4b2ff91a Jan 30 13:06:20 crc kubenswrapper[5039]: I0130 13:06:20.774834 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:20 crc kubenswrapper[5039]: E0130 13:06:20.775572 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:21.27555537 +0000 UTC m=+145.936236597 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:20 crc kubenswrapper[5039]: I0130 13:06:20.876814 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:20 crc kubenswrapper[5039]: E0130 13:06:20.876829 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:21.376807127 +0000 UTC m=+146.037488354 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:20 crc kubenswrapper[5039]: I0130 13:06:20.877130 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:20 crc kubenswrapper[5039]: E0130 13:06:20.877477 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:21.377469823 +0000 UTC m=+146.038151050 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:20 crc kubenswrapper[5039]: I0130 13:06:20.928831 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-82nqz" event={"ID":"aa061666-64af-4cf4-aeb5-73faa25d1c22","Type":"ContainerStarted","Data":"3dfbc7ef6e21d8fb3b02585f1453c4ab471b287276ee4497d3b3f5986402f744"} Jan 30 13:06:20 crc kubenswrapper[5039]: I0130 13:06:20.934565 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sxg45" event={"ID":"920b1dd0-97f0-4bc2-a9ca-b518c314c29b","Type":"ContainerStarted","Data":"47c07a11f22d7e2e9b4737c10a10aa3e0e481662102560f12fa6046bb803dc46"} Jan 30 13:06:20 crc kubenswrapper[5039]: I0130 13:06:20.938019 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4rnbl" event={"ID":"69fb7c91-edd2-4a41-9f64-9c19d1fabd2f","Type":"ContainerStarted","Data":"6e96f74e4fd48207ecdbc7b342506b50f38649743e8c1e96fc75414b49d7ed02"} Jan 30 13:06:20 crc kubenswrapper[5039]: I0130 13:06:20.941191 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-gj29c" event={"ID":"df9477c3-e855-4878-bb03-ffecb6abdc2d","Type":"ContainerStarted","Data":"3c872739d36d535361ebf5b21741102e8102dedf3c30d182bb258f57425d1967"} Jan 30 13:06:20 crc kubenswrapper[5039]: I0130 13:06:20.942677 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gxpwf" event={"ID":"a391a542-f6cf-4b97-b69b-aa27a4942896","Type":"ContainerStarted","Data":"7f011b5c991c8a16dd4e282407fa98dfcab1c27683a8c8b14d19c021ecfb276f"} Jan 30 13:06:20 crc kubenswrapper[5039]: I0130 13:06:20.957582 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-lgzmc" event={"ID":"1b2c52b1-952b-4c00-b9f3-29cc5957a53d","Type":"ContainerStarted","Data":"9751c4ff30007f8b4dbb9f54f4ae013a82a9fe4550a743041b10729a4b2ff91a"} Jan 30 13:06:20 crc kubenswrapper[5039]: I0130 13:06:20.962757 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nqtvv" event={"ID":"6e099008-0b69-456c-a088-80d32053290b","Type":"ContainerStarted","Data":"70805e9450a73336f909b945febf157e64216ee5ea13dcf4160ea9acfb5fb73d"} Jan 30 13:06:20 crc kubenswrapper[5039]: I0130 13:06:20.969139 5039 generic.go:334] "Generic (PLEG): container finished" podID="f117b241-1e37-4603-bb50-aad0ee886758" containerID="0c2f34b879c86052fff25f66aa67a7b37ceb98b412a37a3f5cf7f9fb868c1083" exitCode=0 Jan 30 13:06:20 crc kubenswrapper[5039]: I0130 13:06:20.969567 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbtxl" event={"ID":"f117b241-1e37-4603-bb50-aad0ee886758","Type":"ContainerDied","Data":"0c2f34b879c86052fff25f66aa67a7b37ceb98b412a37a3f5cf7f9fb868c1083"} Jan 30 13:06:20 crc kubenswrapper[5039]: I0130 13:06:20.969688 5039 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:20 crc kubenswrapper[5039]: I0130 13:06:20.969931 5039 patch_prober.go:28] interesting pod/downloads-7954f5f757-ddw7q container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 30 13:06:20 crc kubenswrapper[5039]: I0130 13:06:20.969972 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ddw7q" podUID="af4a4ae0-0967-4331-971c-d7e44b45a031" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 30 13:06:20 crc kubenswrapper[5039]: I0130 13:06:20.976231 5039 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-fmcqb container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused" start-of-body= Jan 30 13:06:20 crc kubenswrapper[5039]: I0130 13:06:20.976279 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" podUID="9716b1fb-f7e1-4fcc-87f5-3e75cb02804c" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused" Jan 30 13:06:20 crc kubenswrapper[5039]: I0130 13:06:20.978865 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:20 crc kubenswrapper[5039]: E0130 13:06:20.979886 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:21.479869218 +0000 UTC m=+146.140550445 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:20 crc kubenswrapper[5039]: I0130 13:06:20.986433 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l8bgw" podStartSLOduration=123.986416745 podStartE2EDuration="2m3.986416745s" podCreationTimestamp="2026-01-30 13:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:20.984186972 +0000 UTC m=+145.644868209" watchObservedRunningTime="2026-01-30 13:06:20.986416745 +0000 UTC m=+145.647097972" Jan 30 13:06:21 crc kubenswrapper[5039]: I0130 13:06:21.000410 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" podStartSLOduration=125.000389082 podStartE2EDuration="2m5.000389082s" podCreationTimestamp="2026-01-30 13:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:20.999982622 +0000 UTC m=+145.660663859" watchObservedRunningTime="2026-01-30 13:06:21.000389082 +0000 UTC m=+145.661070329" Jan 30 13:06:21 crc kubenswrapper[5039]: I0130 13:06:21.081076 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:21 crc kubenswrapper[5039]: E0130 13:06:21.081961 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:21.581942455 +0000 UTC m=+146.242623822 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:21 crc kubenswrapper[5039]: I0130 13:06:21.182484 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:21 crc kubenswrapper[5039]: E0130 13:06:21.182916 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:21.682896205 +0000 UTC m=+146.343577432 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:21 crc kubenswrapper[5039]: I0130 13:06:21.284203 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:21 crc kubenswrapper[5039]: E0130 13:06:21.284686 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:21.784671695 +0000 UTC m=+146.445352922 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:21 crc kubenswrapper[5039]: I0130 13:06:21.385474 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:21 crc kubenswrapper[5039]: E0130 13:06:21.385654 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:21.885628115 +0000 UTC m=+146.546309342 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:21 crc kubenswrapper[5039]: I0130 13:06:21.385939 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:21 crc kubenswrapper[5039]: E0130 13:06:21.386266 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:21.88625824 +0000 UTC m=+146.546939467 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:21 crc kubenswrapper[5039]: I0130 13:06:21.486806 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:21 crc kubenswrapper[5039]: E0130 13:06:21.487066 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:21.986994614 +0000 UTC m=+146.647675851 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:21 crc kubenswrapper[5039]: I0130 13:06:21.487282 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:21 crc kubenswrapper[5039]: E0130 13:06:21.487638 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:21.987623239 +0000 UTC m=+146.648304466 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:21 crc kubenswrapper[5039]: I0130 13:06:21.588287 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:21 crc kubenswrapper[5039]: E0130 13:06:21.588459 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:22.088431456 +0000 UTC m=+146.749112693 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:21 crc kubenswrapper[5039]: I0130 13:06:21.588723 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:21 crc kubenswrapper[5039]: E0130 13:06:21.589048 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:22.08903423 +0000 UTC m=+146.749715467 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:21 crc kubenswrapper[5039]: I0130 13:06:21.689350 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:21 crc kubenswrapper[5039]: E0130 13:06:21.690104 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:22.18998504 +0000 UTC m=+146.850666267 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:21 crc kubenswrapper[5039]: I0130 13:06:21.792042 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:21 crc kubenswrapper[5039]: E0130 13:06:21.792481 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:22.292460057 +0000 UTC m=+146.953141354 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:21 crc kubenswrapper[5039]: I0130 13:06:21.893080 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:21 crc kubenswrapper[5039]: E0130 13:06:21.893247 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:22.393219532 +0000 UTC m=+147.053900759 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:21 crc kubenswrapper[5039]: I0130 13:06:21.893506 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:21 crc kubenswrapper[5039]: E0130 13:06:21.893893 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:22.393878708 +0000 UTC m=+147.054559935 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:21 crc kubenswrapper[5039]: I0130 13:06:21.980777 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jqdxh" event={"ID":"1b1ea998-03e2-480d-9f41-4b3bfd50360b","Type":"ContainerStarted","Data":"e674a304625b0e3c084f2e14a8a606b2b5cd3297e89bf3863f93d9214f4c11ef"} Jan 30 13:06:21 crc kubenswrapper[5039]: I0130 13:06:21.983381 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xpdwb" event={"ID":"502c4d4e-b64b-4245-b4f2-22937a1e54ae","Type":"ContainerStarted","Data":"4bc61d03889fd8f1e4c67ac3d99b9b4017c7d606bb513c27aa74eebb144f7705"} Jan 30 13:06:21 crc kubenswrapper[5039]: I0130 13:06:21.985089 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-sdf86" event={"ID":"42cf1d0f-3c54-41ad-a9a7-1b9bc1829c21","Type":"ContainerStarted","Data":"9f19c05f78e9f4792f69ee1067515bd10b6483d91d1109f2d1330daadc3fbd51"} Jan 30 13:06:21 crc kubenswrapper[5039]: I0130 13:06:21.986864 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kqgcq" event={"ID":"18286802-e76b-4e5e-b68b-9ff34405b8ec","Type":"ContainerStarted","Data":"dfe1fff177825164a66db5c4d7c26319474250342e6f2085b00664eb20fa7ee1"} Jan 30 13:06:21 crc kubenswrapper[5039]: I0130 13:06:21.988219 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pvnrm" event={"ID":"b97e6ebb-d4e8-4bbc-ac4e-98ba0128aa1d","Type":"ContainerStarted","Data":"577f2e6aada35e5c5d2500169b399510bb0263c756d4ff3b80732e0bc5a87f8f"} Jan 30 13:06:21 crc kubenswrapper[5039]: I0130 13:06:21.989951 5039 generic.go:334] "Generic (PLEG): container finished" podID="56c21f31-0db8-4876-9198-ecf1453378eb" containerID="754a693e5e2ab4068f046d3105ddf30f94a5a84a3e51217f7d69a2810c3dae6b" exitCode=0 Jan 30 13:06:21 crc kubenswrapper[5039]: I0130 13:06:21.990184 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" event={"ID":"56c21f31-0db8-4876-9198-ecf1453378eb","Type":"ContainerDied","Data":"754a693e5e2ab4068f046d3105ddf30f94a5a84a3e51217f7d69a2810c3dae6b"} Jan 30 13:06:21 crc kubenswrapper[5039]: I0130 13:06:21.991842 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2crsw" event={"ID":"dc6c0c56-d942-4a79-9f24-6e649e17c3f4","Type":"ContainerStarted","Data":"644f24a47acdc6c5eacd730737c705db9b877d0e11f58c3923ef234045fe58c8"} Jan 30 13:06:21 crc kubenswrapper[5039]: I0130 13:06:21.993551 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-rmmt4" event={"ID":"8955599f-bac3-4f0d-a9d2-0758c098b508","Type":"ContainerStarted","Data":"9aafdec3b01727727b0baa9b229932937d4c183801c63ea97f1dbf70347d2e2f"} Jan 30 13:06:21 crc kubenswrapper[5039]: I0130 13:06:21.994533 5039 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:21 crc kubenswrapper[5039]: E0130 13:06:21.994734 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:22.494704625 +0000 UTC m=+147.155385892 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:21 crc kubenswrapper[5039]: I0130 13:06:21.994817 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:21 crc kubenswrapper[5039]: E0130 13:06:21.995374 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:22.495353801 +0000 UTC m=+147.156035128 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:21 crc kubenswrapper[5039]: I0130 13:06:21.995727 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6x6r" event={"ID":"2b152375-2709-4538-b651-e8535098af13","Type":"ContainerStarted","Data":"c2459580cf6b24198f6091957efb2a7e043744d07d97ed7940b251d46bb3de33"} Jan 30 13:06:21 crc kubenswrapper[5039]: I0130 13:06:21.997086 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gp9qj" event={"ID":"501d1ad0-71ea-4bef-8c89-8a68f523e6ec","Type":"ContainerStarted","Data":"c5f8ce8c6ccde8cd3dd1fc817d67a48786ad0a9b3385ae6a7b6fef0349ef5d8c"} Jan 30 13:06:21 crc kubenswrapper[5039]: I0130 13:06:21.997674 5039 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-fmcqb container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused" start-of-body= Jan 30 13:06:21 crc kubenswrapper[5039]: I0130 13:06:21.997727 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" podUID="9716b1fb-f7e1-4fcc-87f5-3e75cb02804c" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused" Jan 30 13:06:21 crc kubenswrapper[5039]: I0130 13:06:21.998045 5039 patch_prober.go:28] interesting pod/downloads-7954f5f757-ddw7q container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 30 13:06:21 crc kubenswrapper[5039]: I0130 13:06:21.998124 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ddw7q" podUID="af4a4ae0-0967-4331-971c-d7e44b45a031" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 30 13:06:21 crc kubenswrapper[5039]: I0130 13:06:21.998323 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-jt5jk" Jan 30 13:06:22 crc kubenswrapper[5039]: I0130 13:06:22.000895 5039 patch_prober.go:28] interesting pod/console-operator-58897d9998-jt5jk container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 30 13:06:22 crc kubenswrapper[5039]: I0130 13:06:22.000966 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-jt5jk" podUID="0ace130b-bc4e-4654-8e0b-53722f8df757" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 30 13:06:22 crc 
kubenswrapper[5039]: I0130 13:06:22.030886 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-jt5jk" podStartSLOduration=126.030864466 podStartE2EDuration="2m6.030864466s" podCreationTimestamp="2026-01-30 13:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:22.029324909 +0000 UTC m=+146.690006146" watchObservedRunningTime="2026-01-30 13:06:22.030864466 +0000 UTC m=+146.691545693" Jan 30 13:06:22 crc kubenswrapper[5039]: I0130 13:06:22.044681 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kpjp8" podStartSLOduration=126.044661018 podStartE2EDuration="2m6.044661018s" podCreationTimestamp="2026-01-30 13:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:22.042787583 +0000 UTC m=+146.703468840" watchObservedRunningTime="2026-01-30 13:06:22.044661018 +0000 UTC m=+146.705342245" Jan 30 13:06:22 crc kubenswrapper[5039]: I0130 13:06:22.066847 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-9pppp" podStartSLOduration=126.066826461 podStartE2EDuration="2m6.066826461s" podCreationTimestamp="2026-01-30 13:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:22.063344998 +0000 UTC m=+146.724026245" watchObservedRunningTime="2026-01-30 13:06:22.066826461 +0000 UTC m=+146.727507718" Jan 30 13:06:22 crc kubenswrapper[5039]: I0130 13:06:22.096495 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:22 crc kubenswrapper[5039]: E0130 13:06:22.096623 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:22.596603018 +0000 UTC m=+147.257284245 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:22 crc kubenswrapper[5039]: I0130 13:06:22.096838 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:22 crc kubenswrapper[5039]: E0130 13:06:22.097256 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:22.597243154 +0000 UTC m=+147.257924381 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:22 crc kubenswrapper[5039]: I0130 13:06:22.198078 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:22 crc kubenswrapper[5039]: E0130 13:06:22.198173 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:22.698149603 +0000 UTC m=+147.358830830 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:22 crc kubenswrapper[5039]: I0130 13:06:22.198766 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:22 crc kubenswrapper[5039]: E0130 13:06:22.199197 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:22.699189128 +0000 UTC m=+147.359870355 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:22 crc kubenswrapper[5039]: I0130 13:06:22.300439 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:22 crc kubenswrapper[5039]: E0130 13:06:22.300650 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:22.800623579 +0000 UTC m=+147.461304806 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:22 crc kubenswrapper[5039]: I0130 13:06:22.300780 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:22 crc kubenswrapper[5039]: E0130 13:06:22.301223 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:22.801211963 +0000 UTC m=+147.461893180 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:22 crc kubenswrapper[5039]: I0130 13:06:22.402161 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:22 crc kubenswrapper[5039]: E0130 13:06:22.402857 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:22.90284056 +0000 UTC m=+147.563521787 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:22 crc kubenswrapper[5039]: I0130 13:06:22.504103 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:22 crc kubenswrapper[5039]: E0130 13:06:22.504460 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:23.004445935 +0000 UTC m=+147.665127162 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:22 crc kubenswrapper[5039]: I0130 13:06:22.604886 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:22 crc kubenswrapper[5039]: E0130 13:06:22.605173 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:23.105144049 +0000 UTC m=+147.765825287 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:22 crc kubenswrapper[5039]: I0130 13:06:22.605434 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:22 crc kubenswrapper[5039]: E0130 13:06:22.605768 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:23.105755294 +0000 UTC m=+147.766436521 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:22 crc kubenswrapper[5039]: I0130 13:06:22.706871 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:22 crc kubenswrapper[5039]: E0130 13:06:22.707090 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:23.207067483 +0000 UTC m=+147.867748720 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:22 crc kubenswrapper[5039]: I0130 13:06:22.707314 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:22 crc kubenswrapper[5039]: E0130 13:06:22.707712 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:23.207697888 +0000 UTC m=+147.868379115 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:22 crc kubenswrapper[5039]: I0130 13:06:22.808815 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:22 crc kubenswrapper[5039]: E0130 13:06:22.809038 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:23.308995126 +0000 UTC m=+147.969676353 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:22 crc kubenswrapper[5039]: I0130 13:06:22.809207 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:22 crc kubenswrapper[5039]: E0130 13:06:22.809598 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:23.309583831 +0000 UTC m=+147.970265058 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:22 crc kubenswrapper[5039]: I0130 13:06:22.910508 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:22 crc kubenswrapper[5039]: E0130 13:06:22.910622 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:23.410605082 +0000 UTC m=+148.071286299 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:22 crc kubenswrapper[5039]: I0130 13:06:22.910819 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:22 crc kubenswrapper[5039]: E0130 13:06:22.911082 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:23.411073464 +0000 UTC m=+148.071754691 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.002955 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-gj29c" event={"ID":"df9477c3-e855-4878-bb03-ffecb6abdc2d","Type":"ContainerStarted","Data":"b6a448e2eef08e22cc54ae018b3875beb041b1b26952c83b783d6a157a3c306a"} Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.004328 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gxpwf" event={"ID":"a391a542-f6cf-4b97-b69b-aa27a4942896","Type":"ContainerStarted","Data":"87c8592ae156170285681f78d5b8cdd4f4ec18dd375b95cf2175618f0f463c5b"} Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.005459 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tgkf6" event={"ID":"a4edde13-c891-4a79-8c04-ad329198bdaa","Type":"ContainerStarted","Data":"bec2701f96ae7c7ee124e29c3d64c0aee8d828b7d61abd35a7e34aec23682e33"} Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.006276 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-7j88g" event={"ID":"ae6119e4-926e-4118-a675-e37898d995f6","Type":"ContainerStarted","Data":"17b003181cbf820c44c0d0f9cb69950b7096902660472d0e97175d8a465588fa"} Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.007480 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kmjcv" event={"ID":"bd5d4606-2412-4538-8745-dbab7d52cde9","Type":"ContainerStarted","Data":"dc76f588451d4c44bb67a6ac894b0e8f836caed353d4c0c33eafa14a4dfa1328"} Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.007590 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kmjcv" Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.009303 5039 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-kmjcv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.009343 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kmjcv" podUID="bd5d4606-2412-4538-8745-dbab7d52cde9" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.009824 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-cj57h" event={"ID":"2834d334-6df4-46d7-afc6-390cfdcfb22f","Type":"ContainerStarted","Data":"b564b8319425726b3799b26323853d2599c914d06f498bf9879ef2cf07e8324a"} Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.010954 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sxg45" event={"ID":"920b1dd0-97f0-4bc2-a9ca-b518c314c29b","Type":"ContainerStarted","Data":"952beb5971438f8fc27ac633bce466ba8294be57284d04d41e43e2dbc720307b"} Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.011283 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:23 crc kubenswrapper[5039]: E0130 13:06:23.011688 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:23.511670735 +0000 UTC m=+148.172351962 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.012166 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496300-mkldc" event={"ID":"4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c","Type":"ContainerStarted","Data":"a0372bdd30a9cc27ce96abedcc6e75ce111a96cb789003ceaae72fc7d0a7c6f0"}
Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.013187 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klzdg" event={"ID":"ffc75429-dba3-4b41-99d1-39c5b5334c0e","Type":"ContainerStarted","Data":"281267bb38856215e5cf7d910d307eeeb3868303e89a3beaa87ee2864af63495"}
Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.013917 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tj2zc" event={"ID":"7bdbdc1f-b957-4eef-a61d-692ed8717de1","Type":"ContainerStarted","Data":"717f1d93bcde06d99afdbe830f0b0a6e169a09152aa16fa25f6f452db7502e7c"}
Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.015071 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x76qf" event={"ID":"438eca87-c8a4-401b-8ea4-ff982404ea2d","Type":"ContainerStarted","Data":"99ae93924f33560376e4a9814b6369108cbbcddb25a8196189a633fd3a24c498"}
Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.016124 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-dgvh6" event={"ID":"e9396757-c308-44b4-82a9-bd488f0841a9","Type":"ContainerStarted","Data":"29819df6e8c89cd19b3e3b5a58cf44739a4355b5c14073ddba28bf68f4d51fb3"}
Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.017495 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xlngt" event={"ID":"a1998324-8e8c-49ae-8929-1ecb092efdaf","Type":"ContainerStarted","Data":"4784b41314da87727cde7980187cf52f09ef8edbe9cf58c418e7185442154bee"}
Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.018610 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-5s28q" event={"ID":"ded8dcf1-ff49-4b19-80b0-4702e95b94a3","Type":"ContainerStarted","Data":"d4eef7f318e9a959b9a584176a885c4d40b900746581a426fcc02293f7f2cdca"}
Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.019658 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4rnbl" event={"ID":"69fb7c91-edd2-4a41-9f64-9c19d1fabd2f","Type":"ContainerStarted","Data":"d1f3f6fcf312abb7fe5419fe96d5f58c05d3b454d1850380c16764f499460485"}
Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.022425 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-jplg4" event={"ID":"1fbf2594-31f8-4172-85ba-4a63a6d18fa6","Type":"ContainerStarted","Data":"51e34de8aa94e0ab8427d7d786fb9df827536da5f8d920da48673141bfadb161"}
Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.024851 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sghjb" event={"ID":"d2655fb3-6427-447d-8b61-4d998e133f50","Type":"ContainerStarted","Data":"8fb7e3a41a764d32ccf176131ffc2b3da734a0ddee6ddabc1643ae7931cce0b5"}
Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.026985 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-7j88g" podStartSLOduration=126.026973353 podStartE2EDuration="2m6.026973353s" podCreationTimestamp="2026-01-30 13:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:23.021584434 +0000 UTC m=+147.682265661" watchObservedRunningTime="2026-01-30 13:06:23.026973353 +0000 UTC m=+147.687654580"
Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.032042 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" event={"ID":"e99acbdd-15f8-43ef-a7fa-70a8f4f8674c","Type":"ContainerStarted","Data":"3c5d2ec96f198cb2b00a6118fe9c579bffe6220dba85f2222271d955f9fa835d"}
Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.054419 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-lgzmc" event={"ID":"1b2c52b1-952b-4c00-b9f3-29cc5957a53d","Type":"ContainerStarted","Data":"9a80ba461fb651c9fb3266eb02517bb78ebda4cf6b2325d3450b7c76dda1e29d"}
Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.055892 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nqtvv" event={"ID":"6e099008-0b69-456c-a088-80d32053290b","Type":"ContainerStarted","Data":"8d53e4601abc560877ce21e53fa41a17193a24040059079a80e13341122b4de6"}
Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.057164 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-82nqz" event={"ID":"aa061666-64af-4cf4-aeb5-73faa25d1c22","Type":"ContainerStarted","Data":"8e404b8bce7cca39a8fd402842aac1488795d82f7569611ddcfe624fbc392a11"}
Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.059512 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-m4hks" event={"ID":"792f7bfa-c3b1-4e02-b2a1-d15abbc4b3d4","Type":"ContainerStarted","Data":"e57a6cc83a221c07d57d418c102763587a9850151fa5a77491fa1dc14f0a6f24"}
Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.062445 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6x6r"
Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.062478 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-gp9qj"
Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.062615 5039 patch_prober.go:28] interesting pod/console-operator-58897d9998-jt5jk container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body=
Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.062644 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-jt5jk" podUID="0ace130b-bc4e-4654-8e0b-53722f8df757" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused"
Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.068149 5039 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-gp9qj container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/healthz\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.068194 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-gp9qj" podUID="501d1ad0-71ea-4bef-8c89-8a68f523e6ec" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.15:8080/healthz\": dial tcp 10.217.0.15:8080: connect: connection refused"
Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.068276 5039 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-b6x6r container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:5443/healthz\": dial tcp 10.217.0.28:5443: connect: connection refused" start-of-body=
Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.068289 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6x6r" podUID="2b152375-2709-4538-b651-e8535098af13" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.28:5443/healthz\": dial tcp 10.217.0.28:5443: connect: connection refused"
Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.075031 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kmjcv" podStartSLOduration=126.0750041 podStartE2EDuration="2m6.0750041s" podCreationTimestamp="2026-01-30 13:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:23.073997005 +0000 UTC m=+147.734678242" watchObservedRunningTime="2026-01-30 13:06:23.0750041 +0000 UTC m=+147.735685337"
Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.075530 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-dgvh6" podStartSLOduration=126.075526062 podStartE2EDuration="2m6.075526062s" podCreationTimestamp="2026-01-30 13:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:23.039583697 +0000 UTC m=+147.700264954" watchObservedRunningTime="2026-01-30 13:06:23.075526062 +0000 UTC m=+147.736207289"
Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.097638 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x76qf" podStartSLOduration=126.097619244 podStartE2EDuration="2m6.097619244s" podCreationTimestamp="2026-01-30 13:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:23.093865824 +0000 UTC m=+147.754547071" watchObservedRunningTime="2026-01-30 13:06:23.097619244 +0000 UTC m=+147.758300491"
Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.114429 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5"
Jan 30 13:06:23 crc kubenswrapper[5039]: E0130 13:06:23.122281 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:23.622266397 +0000 UTC m=+148.282947624 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.131948 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jqdxh" podStartSLOduration=127.13192869 podStartE2EDuration="2m7.13192869s" podCreationTimestamp="2026-01-30 13:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:23.124146003 +0000 UTC m=+147.784827250" watchObservedRunningTime="2026-01-30 13:06:23.13192869 +0000 UTC m=+147.792609927"
Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.176218 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-gp9qj" podStartSLOduration=126.176197466 podStartE2EDuration="2m6.176197466s" podCreationTimestamp="2026-01-30 13:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:23.173968242 +0000 UTC m=+147.834649469" watchObservedRunningTime="2026-01-30 13:06:23.176197466 +0000 UTC m=+147.836878693"
Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.222257 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:06:23 crc kubenswrapper[5039]: E0130 13:06:23.222504 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:23.72247975 +0000 UTC m=+148.383160977 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.222805 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5"
Jan 30 13:06:23 crc kubenswrapper[5039]: E0130 13:06:23.224578 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:23.72456645 +0000 UTC m=+148.385247677 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.230202 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pvnrm" podStartSLOduration=126.230189645 podStartE2EDuration="2m6.230189645s" podCreationTimestamp="2026-01-30 13:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:23.230140704 +0000 UTC m=+147.890821931" watchObservedRunningTime="2026-01-30 13:06:23.230189645 +0000 UTC m=+147.890870872"
Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.230519 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6x6r" podStartSLOduration=126.230515293 podStartE2EDuration="2m6.230515293s" podCreationTimestamp="2026-01-30 13:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:23.207572151 +0000 UTC m=+147.868253378" watchObservedRunningTime="2026-01-30 13:06:23.230515293 +0000 UTC m=+147.891196520"
Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.324808 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:06:23 crc kubenswrapper[5039]: E0130 13:06:23.325329 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:23.825313515 +0000 UTC m=+148.485994742 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.426197 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5"
Jan 30 13:06:23 crc kubenswrapper[5039]: E0130 13:06:23.426626 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:23.926606993 +0000 UTC m=+148.587288300 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.527585 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:06:23 crc kubenswrapper[5039]: E0130 13:06:23.528072 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:24.028053575 +0000 UTC m=+148.688734812 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.629544 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5"
Jan 30 13:06:23 crc kubenswrapper[5039]: E0130 13:06:23.629937 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:24.129921347 +0000 UTC m=+148.790602574 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.731174 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:06:23 crc kubenswrapper[5039]: E0130 13:06:23.731315 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:24.231287217 +0000 UTC m=+148.891968454 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.731704 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5"
Jan 30 13:06:23 crc kubenswrapper[5039]: E0130 13:06:23.732108 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:24.232096487 +0000 UTC m=+148.892777724 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.833437 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:06:23 crc kubenswrapper[5039]: E0130 13:06:23.833636 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:24.333614541 +0000 UTC m=+148.994295778 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.834083 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5"
Jan 30 13:06:23 crc kubenswrapper[5039]: E0130 13:06:23.834644 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:24.334633475 +0000 UTC m=+148.995314702 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.935758 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.935975 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.936043 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.936077 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.936115 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 13:06:23 crc kubenswrapper[5039]: E0130 13:06:23.936327 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:24.436307163 +0000 UTC m=+149.096988390 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.942160 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.942307 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.948590 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.949405 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.037708 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5"
Jan 30 13:06:24 crc kubenswrapper[5039]: E0130 13:06:24.038050 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:24.538037741 +0000 UTC m=+149.198718968 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.074458 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xpdwb" event={"ID":"502c4d4e-b64b-4245-b4f2-22937a1e54ae","Type":"ContainerStarted","Data":"8ae2158c1a037637af5894b32fcd831ed0f974b5b9961d790851c9f7ccad980a"}
Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.075681 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xpdwb"
Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.078059 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-sdf86" event={"ID":"42cf1d0f-3c54-41ad-a9a7-1b9bc1829c21","Type":"ContainerStarted","Data":"2aed1563993fc998476641dcb96c8917eb10e0cbb4612409b5a46ddbb977a62c"}
Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.088624 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2crsw" event={"ID":"dc6c0c56-d942-4a79-9f24-6e649e17c3f4","Type":"ContainerStarted","Data":"5634fb69ec9f2a030353ebc6c2542cc86d869ae0d11097d037ac9230cbe75691"}
Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.092126 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbtxl" event={"ID":"f117b241-1e37-4603-bb50-aad0ee886758","Type":"ContainerStarted","Data":"48fe0eeb742d0fd4ba6d9addee373ecb1f8daeb5904c6ee6724302abc931d8d4"}
Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.092730 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbtxl"
Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.105465 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kqgcq" event={"ID":"18286802-e76b-4e5e-b68b-9ff34405b8ec","Type":"ContainerStarted","Data":"d7b15bc87be9d439cbdd8c4a46ea83572c334df7aa6f0f138097b29b04ae30ca"}
Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.105511 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-rmmt4" event={"ID":"8955599f-bac3-4f0d-a9d2-0758c098b508","Type":"ContainerStarted","Data":"654f7d2336b9bce5b84c281eeeccb8b4b416a75d7c9fb7bfb656bd67ca085f22"}
Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.105976 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tgkf6" event={"ID":"a4edde13-c891-4a79-8c04-ad329198bdaa","Type":"ContainerStarted","Data":"601ec8a1b76aaa97b1a1bbdb945fdeaba88ce859d102411da3e6b4e196edeac1"}
Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.111184 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-cj57h"
Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.111288 5039 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-cj57h container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body=
Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.111502 5039 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-kmjcv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body=
Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.111768 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kmjcv" podUID="bd5d4606-2412-4538-8745-dbab7d52cde9" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused"
Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.111565 5039 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-gp9qj container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/healthz\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.111804 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-gp9qj" podUID="501d1ad0-71ea-4bef-8c89-8a68f523e6ec" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.15:8080/healthz\": dial tcp 10.217.0.15:8080: connect: connection refused"
Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.111617 5039 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-b6x6r container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:5443/healthz\": dial tcp 10.217.0.28:5443: connect: connection refused" start-of-body=
Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.111832 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6x6r" podUID="2b152375-2709-4538-b651-e8535098af13" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.28:5443/healthz\": dial tcp 10.217.0.28:5443: connect: connection refused"
Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.111726 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-cj57h" podUID="2834d334-6df4-46d7-afc6-390cfdcfb22f" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused"
Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.119206 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.126052 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-jplg4"
Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.127524 5039 patch_prober.go:28] interesting pod/router-default-5444994796-jplg4 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.127590 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jplg4" podUID="1fbf2594-31f8-4172-85ba-4a63a6d18fa6" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.136961 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xpdwb" podStartSLOduration=127.136938042 podStartE2EDuration="2m7.136938042s" podCreationTimestamp="2026-01-30 13:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:24.102202656 +0000 UTC m=+148.762883893" watchObservedRunningTime="2026-01-30 13:06:24.136938042 +0000 UTC m=+148.797619269"
Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.138670 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.139057 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 13:06:24 crc kubenswrapper[5039]: E0130 13:06:24.139428 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:24.639412492 +0000 UTC m=+149.300093719 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.172354 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kqgcq" podStartSLOduration=127.172334604 podStartE2EDuration="2m7.172334604s" podCreationTimestamp="2026-01-30 13:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:24.169183988 +0000 UTC m=+148.829865215" watchObservedRunningTime="2026-01-30 13:06:24.172334604 +0000 UTC m=+148.833015831"
Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.173947 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbtxl" podStartSLOduration=128.173939673 podStartE2EDuration="2m8.173939673s" podCreationTimestamp="2026-01-30 13:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:24.139545065 +0000 UTC m=+148.800226312" watchObservedRunningTime="2026-01-30 13:06:24.173939673 +0000 UTC m=+148.834620900"
Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.220144 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.256837 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5"
Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.272645 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-sdf86" podStartSLOduration=127.272630508 podStartE2EDuration="2m7.272630508s" podCreationTimestamp="2026-01-30 13:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:24.270375564 +0000 UTC m=+148.931056791" watchObservedRunningTime="2026-01-30 13:06:24.272630508 +0000 UTC m=+148.933311735"
Jan 30 13:06:24 crc kubenswrapper[5039]: E0130 13:06:24.285084 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:24.784995426 +0000 UTC m=+149.445676653 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.286930 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2crsw" podStartSLOduration=127.286890902 podStartE2EDuration="2m7.286890902s" podCreationTimestamp="2026-01-30 13:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:24.231029337 +0000 UTC m=+148.891710554" watchObservedRunningTime="2026-01-30 13:06:24.286890902 +0000 UTC m=+148.947572139"
Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.324380 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-rmmt4" podStartSLOduration=127.324354723 podStartE2EDuration="2m7.324354723s" podCreationTimestamp="2026-01-30 13:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:24.301242787 +0000 UTC m=+148.961924034" watchObservedRunningTime="2026-01-30 13:06:24.324354723 +0000 UTC m=+148.985035960"
Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.334930 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4rnbl" podStartSLOduration=127.334912738 podStartE2EDuration="2m7.334912738s" podCreationTimestamp="2026-01-30 13:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:24.320343427 +0000 UTC m=+148.981024654" watchObservedRunningTime="2026-01-30 13:06:24.334912738 +0000 UTC m=+148.995593975"
Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.358983 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:06:24 crc kubenswrapper[5039]: E0130 13:06:24.359600 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:24.859578961 +0000 UTC m=+149.520260188 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.362582 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-m4hks" podStartSLOduration=9.362560943 podStartE2EDuration="9.362560943s" podCreationTimestamp="2026-01-30 13:06:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:24.334639271 +0000 UTC m=+148.995320518" watchObservedRunningTime="2026-01-30 13:06:24.362560943 +0000 UTC m=+149.023242170"
Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.375066 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xlngt" podStartSLOduration=128.375048794 podStartE2EDuration="2m8.375048794s" podCreationTimestamp="2026-01-30 13:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:24.367695387 +0000 UTC m=+149.028376634" watchObservedRunningTime="2026-01-30 13:06:24.375048794 +0000 UTC m=+149.035730021"
Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.403025 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-jplg4" podStartSLOduration=127.402993166 podStartE2EDuration="2m7.402993166s" podCreationTimestamp="2026-01-30 13:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:24.394526073 +0000 UTC m=+149.055207300" watchObservedRunningTime="2026-01-30 13:06:24.402993166 +0000 UTC m=+149.063674393"
Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.431469 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sghjb" podStartSLOduration=127.431451051 podStartE2EDuration="2m7.431451051s" podCreationTimestamp="2026-01-30 13:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:24.429153156 +0000 UTC m=+149.089834383" watchObservedRunningTime="2026-01-30 13:06:24.431451051 +0000 UTC m=+149.092132278"
Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.461664 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5"
Jan 30 13:06:24 crc kubenswrapper[5039]: E0130 13:06:24.462050 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:24.962035348 +0000 UTC m=+149.622716575 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.463940 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sxg45" podStartSLOduration=127.463927273 podStartE2EDuration="2m7.463927273s" podCreationTimestamp="2026-01-30 13:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:24.462355585 +0000 UTC m=+149.123036822" watchObservedRunningTime="2026-01-30 13:06:24.463927273 +0000 UTC m=+149.124608510"
Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.481871 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tgkf6" podStartSLOduration=127.481851245 podStartE2EDuration="2m7.481851245s" podCreationTimestamp="2026-01-30 13:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:24.473964025 +0000 UTC m=+149.134645262" watchObservedRunningTime="2026-01-30 13:06:24.481851245 +0000 UTC m=+149.142532472"
Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.493881 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496300-mkldc" podStartSLOduration=128.493864844 podStartE2EDuration="2m8.493864844s" podCreationTimestamp="2026-01-30 13:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:24.489820876 +0000 UTC m=+149.150502113" watchObservedRunningTime="2026-01-30 13:06:24.493864844 +0000 UTC m=+149.154546071"
Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.522846 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nqtvv" podStartSLOduration=127.522826971 podStartE2EDuration="2m7.522826971s" podCreationTimestamp="2026-01-30 13:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:24.5211329 +0000 UTC m=+149.181814137" watchObservedRunningTime="2026-01-30 13:06:24.522826971 +0000 UTC m=+149.183508198"
Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.563189 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:06:24 crc kubenswrapper[5039]: E0130 13:06:24.563523 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:25.0635101 +0000 UTC m=+149.724191327 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.569391 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-5s28q" podStartSLOduration=9.569380382 podStartE2EDuration="9.569380382s" podCreationTimestamp="2026-01-30 13:06:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:24.546421479 +0000 UTC m=+149.207102716" watchObservedRunningTime="2026-01-30 13:06:24.569380382 +0000 UTC m=+149.230061609"
Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.571513 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-cj57h" podStartSLOduration=127.571507313 podStartE2EDuration="2m7.571507313s" podCreationTimestamp="2026-01-30 13:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:24.56890622 +0000 UTC m=+149.229587457" watchObservedRunningTime="2026-01-30 13:06:24.571507313 +0000 UTC m=+149.232188550"
Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.643569 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" podStartSLOduration=127.643553527 podStartE2EDuration="2m7.643553527s" podCreationTimestamp="2026-01-30 13:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:24.641853436 +0000 UTC m=+149.302534663" watchObservedRunningTime="2026-01-30 13:06:24.643553527 +0000 UTC m=+149.304234754"
Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.650402 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klzdg" podStartSLOduration=127.650381041 podStartE2EDuration="2m7.650381041s" podCreationTimestamp="2026-01-30 13:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:24.591777621 +0000 UTC m=+149.252458848" watchObservedRunningTime="2026-01-30 13:06:24.650381041 +0000 UTC m=+149.311062268"
Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.670486 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gxpwf" podStartSLOduration=127.670470365 podStartE2EDuration="2m7.670470365s" podCreationTimestamp="2026-01-30 13:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:24.669361838 +0000 UTC m=+149.330043065" watchObservedRunningTime="2026-01-30 13:06:24.670470365 +0000 UTC m=+149.331151592"
Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.673750 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5"
Jan 30 13:06:24 crc kubenswrapper[5039]: E0130 13:06:24.674070 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:25.174058641 +0000 UTC m=+149.834739868 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.698581 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tj2zc" podStartSLOduration=127.698564711 podStartE2EDuration="2m7.698564711s" podCreationTimestamp="2026-01-30 13:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:24.697356282 +0000 UTC m=+149.358037509" watchObservedRunningTime="2026-01-30 13:06:24.698564711 +0000 UTC m=+149.359245938"
Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.777573 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:06:24 crc kubenswrapper[5039]: E0130 13:06:24.777922 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:25.277907031 +0000 UTC m=+149.938588258 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.881805 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5"
Jan 30 13:06:24 crc kubenswrapper[5039]: E0130 13:06:24.882452 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:25.382441097 +0000 UTC m=+150.043122324 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.982662 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:06:24 crc kubenswrapper[5039]: E0130 13:06:24.982824 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:25.482790143 +0000 UTC m=+150.143471370 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.982879 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5"
Jan 30 13:06:24 crc kubenswrapper[5039]: E0130 13:06:24.983187 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:25.483176832 +0000 UTC m=+150.143858059 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:06:25 crc kubenswrapper[5039]: I0130 13:06:25.083580 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:06:25 crc kubenswrapper[5039]: E0130 13:06:25.083686 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:25.58366369 +0000 UTC m=+150.244344917 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:06:25 crc kubenswrapper[5039]: I0130 13:06:25.083793 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5"
Jan 30 13:06:25 crc kubenswrapper[5039]: E0130 13:06:25.084100 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:25.58408812 +0000 UTC m=+150.244769347 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:06:25 crc kubenswrapper[5039]: I0130 13:06:25.114180 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-lgzmc" event={"ID":"1b2c52b1-952b-4c00-b9f3-29cc5957a53d","Type":"ContainerStarted","Data":"2443d377d6710ac6a88187186e321e5c9599b7f85f0f956767d8aeb48772b2d5"}
Jan 30 13:06:25 crc kubenswrapper[5039]: I0130 13:06:25.114939 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-lgzmc"
Jan 30 13:06:25 crc kubenswrapper[5039]: I0130 13:06:25.115688 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"166a20645e45d844714490f71f3cc3430cce2667ac7850dca2bdd5e2fe1a05cc"}
Jan 30 13:06:25 crc kubenswrapper[5039]: I0130 13:06:25.116487 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"37e1f387f67d28dcc8902d1e252c631a4fd654c1627a6023c5b08965315dcf59"}
Jan 30 13:06:25 crc kubenswrapper[5039]: I0130 13:06:25.126992 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" event={"ID":"56c21f31-0db8-4876-9198-ecf1453378eb","Type":"ContainerStarted","Data":"92dfeb86a2d8678324e58004016fa321b5462537570db90bab0002e4d7f9f9f6"}
Jan 30 13:06:25 crc kubenswrapper[5039]: I0130 13:06:25.129726 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-lgzmc" podStartSLOduration=10.129717769 podStartE2EDuration="10.129717769s" podCreationTimestamp="2026-01-30 13:06:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:25.126789188 +0000 UTC m=+149.787470425" watchObservedRunningTime="2026-01-30 13:06:25.129717769 +0000 UTC m=+149.790398996"
Jan 30 13:06:25 crc kubenswrapper[5039]: I0130 13:06:25.131322 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-gj29c" event={"ID":"df9477c3-e855-4878-bb03-ffecb6abdc2d","Type":"ContainerStarted","Data":"d52f28e8560715d4c30268c1d5843cc27ffca15e8cf35bba5bc7939636bd2d4b"}
Jan 30 13:06:25 crc kubenswrapper[5039]: I0130 13:06:25.133111 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"bd6de094193abcc1bc09d0720b5e84dd4d24f65e9bd91e470b5f6ddb059e1f06"}
Jan 30 13:06:25 crc kubenswrapper[5039]: I0130 13:06:25.135633 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-82nqz" event={"ID":"aa061666-64af-4cf4-aeb5-73faa25d1c22","Type":"ContainerStarted","Data":"2a18435c4d7d70aac440d2c5187215ca00e253baeddaf459892ce1ad8d5b16ee"}
Jan 30 13:06:25 crc kubenswrapper[5039]: I0130 13:06:25.137309 5039 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-cj57h container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body=
Jan 30 13:06:25 crc kubenswrapper[5039]: I0130 13:06:25.137338 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-cj57h" podUID="2834d334-6df4-46d7-afc6-390cfdcfb22f" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused"
Jan 30 13:06:25 crc kubenswrapper[5039]: I0130 13:06:25.147630 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-gj29c" podStartSLOduration=128.1476206 podStartE2EDuration="2m8.1476206s" podCreationTimestamp="2026-01-30 13:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:25.144954645 +0000 UTC m=+149.805635882" watchObservedRunningTime="2026-01-30 13:06:25.1476206 +0000 UTC m=+149.808301827"
Jan 30 13:06:25 crc kubenswrapper[5039]: I0130 13:06:25.185078 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:06:25 crc kubenswrapper[5039]: E0130 13:06:25.185367 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:25.685353738 +0000 UTC m=+150.346034965 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:25 crc kubenswrapper[5039]: I0130 13:06:25.286559 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:25 crc kubenswrapper[5039]: E0130 13:06:25.287678 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:25.787662731 +0000 UTC m=+150.448343958 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:25 crc kubenswrapper[5039]: I0130 13:06:25.300798 5039 patch_prober.go:28] interesting pod/router-default-5444994796-jplg4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 13:06:25 crc kubenswrapper[5039]: [-]has-synced failed: reason withheld Jan 30 13:06:25 crc kubenswrapper[5039]: [+]process-running ok Jan 30 13:06:25 crc kubenswrapper[5039]: healthz check failed Jan 30 13:06:25 crc kubenswrapper[5039]: I0130 13:06:25.300862 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jplg4" podUID="1fbf2594-31f8-4172-85ba-4a63a6d18fa6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 13:06:25 crc kubenswrapper[5039]: I0130 13:06:25.375499 5039 csr.go:261] certificate signing request csr-lqvdn is approved, waiting to be issued Jan 30 13:06:25 crc kubenswrapper[5039]: I0130 13:06:25.388176 5039 csr.go:257] certificate signing request csr-lqvdn is issued Jan 30 13:06:25 crc kubenswrapper[5039]: I0130 13:06:25.388300 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:25 crc kubenswrapper[5039]: E0130 13:06:25.388612 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-30 13:06:25.88859681 +0000 UTC m=+150.549278027 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:25 crc kubenswrapper[5039]: I0130 13:06:25.388747 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:25 crc kubenswrapper[5039]: E0130 13:06:25.389005 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:25.88899806 +0000 UTC m=+150.549679287 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:25 crc kubenswrapper[5039]: I0130 13:06:25.490347 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:25 crc kubenswrapper[5039]: E0130 13:06:25.490452 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:25.990434752 +0000 UTC m=+150.651115979 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:25 crc kubenswrapper[5039]: I0130 13:06:25.490676 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:25 crc kubenswrapper[5039]: E0130 13:06:25.490916 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:25.990909433 +0000 UTC m=+150.651590660 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:25 crc kubenswrapper[5039]: I0130 13:06:25.592175 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:25 crc kubenswrapper[5039]: E0130 13:06:25.592329 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:26.092313434 +0000 UTC m=+150.752994661 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:25 crc kubenswrapper[5039]: I0130 13:06:25.592735 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:25 crc kubenswrapper[5039]: E0130 13:06:25.593061 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:26.093053232 +0000 UTC m=+150.753734459 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:25 crc kubenswrapper[5039]: I0130 13:06:25.694251 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:25 crc kubenswrapper[5039]: E0130 13:06:25.694439 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:26.194403561 +0000 UTC m=+150.855084788 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:25 crc kubenswrapper[5039]: I0130 13:06:25.694572 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:25 crc kubenswrapper[5039]: E0130 13:06:25.694858 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:26.194845342 +0000 UTC m=+150.855526569 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:25 crc kubenswrapper[5039]: I0130 13:06:25.795241 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:25 crc kubenswrapper[5039]: E0130 13:06:25.795614 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:26.295598957 +0000 UTC m=+150.956280174 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:25 crc kubenswrapper[5039]: I0130 13:06:25.896321 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:25 crc kubenswrapper[5039]: E0130 13:06:25.896710 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:26.396694321 +0000 UTC m=+151.057375548 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:25 crc kubenswrapper[5039]: I0130 13:06:25.996992 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:25 crc kubenswrapper[5039]: E0130 13:06:25.997114 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:26.497090277 +0000 UTC m=+151.157771504 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:25 crc kubenswrapper[5039]: I0130 13:06:25.997211 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:25 crc kubenswrapper[5039]: E0130 13:06:25.997531 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:26.497520548 +0000 UTC m=+151.158201775 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:26 crc kubenswrapper[5039]: I0130 13:06:26.097659 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:26 crc kubenswrapper[5039]: E0130 13:06:26.097747 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:26.59773619 +0000 UTC m=+151.258417407 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:26 crc kubenswrapper[5039]: I0130 13:06:26.098094 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:26 crc kubenswrapper[5039]: E0130 13:06:26.098387 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:26.598379696 +0000 UTC m=+151.259060923 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:26 crc kubenswrapper[5039]: I0130 13:06:26.128808 5039 patch_prober.go:28] interesting pod/router-default-5444994796-jplg4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 13:06:26 crc kubenswrapper[5039]: [-]has-synced failed: reason withheld Jan 30 13:06:26 crc kubenswrapper[5039]: [+]process-running ok Jan 30 13:06:26 crc kubenswrapper[5039]: healthz check failed Jan 30 13:06:26 crc kubenswrapper[5039]: I0130 13:06:26.128857 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jplg4" podUID="1fbf2594-31f8-4172-85ba-4a63a6d18fa6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 13:06:26 crc kubenswrapper[5039]: I0130 13:06:26.141224 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"8b7a329a07649899b551ae87a6f0addb87d59e658c29e01d856a541b41d12234"} Jan 30 13:06:26 crc kubenswrapper[5039]: I0130 13:06:26.142700 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5t9bm" event={"ID":"b67c1f74-8845-4dbd-9e2b-df446569a88a","Type":"ContainerStarted","Data":"7cc464b4681da390e54d1b132667c2272a7ebfaf973359b4612e6333b7f74d86"} Jan 30 13:06:26 crc kubenswrapper[5039]: I0130 13:06:26.143532 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"dae3b04123988aedb1666e8da0a06a41582ee208c9706060f15cc192d09055df"} Jan 30 13:06:26 crc kubenswrapper[5039]: I0130 13:06:26.144680 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"3499ffe53d97489b3f0dd4307384cbf35bd7fdf24c95a595adab4d859b82dc1b"} Jan 30 13:06:26 crc kubenswrapper[5039]: I0130 13:06:26.144995 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:06:26 crc kubenswrapper[5039]: I0130 13:06:26.147080 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" event={"ID":"56c21f31-0db8-4876-9198-ecf1453378eb","Type":"ContainerStarted","Data":"d9f7685fc5a55102d23825a3470c75e329ec2571df8091966a21bf2cca61fb08"} Jan 30 13:06:26 crc kubenswrapper[5039]: I0130 13:06:26.164379 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-82nqz" podStartSLOduration=129.164355234 podStartE2EDuration="2m9.164355234s" podCreationTimestamp="2026-01-30 13:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:25.17256495 +0000 UTC m=+149.833246187" watchObservedRunningTime="2026-01-30 13:06:26.164355234 +0000 UTC m=+150.825036471" Jan 30 13:06:26 crc kubenswrapper[5039]: I0130 13:06:26.199178 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:26 crc kubenswrapper[5039]: E0130 13:06:26.199333 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:26.699306485 +0000 UTC m=+151.359987712 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:26 crc kubenswrapper[5039]: I0130 13:06:26.199430 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:26 crc kubenswrapper[5039]: E0130 13:06:26.199716 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-30 13:06:26.699704665 +0000 UTC m=+151.360385892 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:26 crc kubenswrapper[5039]: I0130 13:06:26.299951 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:26 crc kubenswrapper[5039]: E0130 13:06:26.301638 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:26.801619688 +0000 UTC m=+151.462300915 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:26 crc kubenswrapper[5039]: I0130 13:06:26.389929 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-30 13:01:25 +0000 UTC, rotation deadline is 2026-11-05 16:55:11.650576414 +0000 UTC Jan 30 13:06:26 crc kubenswrapper[5039]: I0130 13:06:26.389992 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6699h48m45.260587259s for next certificate rotation Jan 30 13:06:26 crc kubenswrapper[5039]: I0130 13:06:26.401914 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:26 crc kubenswrapper[5039]: E0130 13:06:26.402270 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:26.90225891 +0000 UTC m=+151.562940137 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:26 crc kubenswrapper[5039]: I0130 13:06:26.432181 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" podStartSLOduration=130.43216497 podStartE2EDuration="2m10.43216497s" podCreationTimestamp="2026-01-30 13:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:26.409907344 +0000 UTC m=+151.070588571" watchObservedRunningTime="2026-01-30 13:06:26.43216497 +0000 UTC m=+151.092846197" Jan 30 13:06:26 crc kubenswrapper[5039]: I0130 13:06:26.503446 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:26 crc kubenswrapper[5039]: E0130 13:06:26.503767 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:27.003737813 +0000 UTC m=+151.664419040 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:26 crc kubenswrapper[5039]: I0130 13:06:26.503998 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:26 crc kubenswrapper[5039]: E0130 13:06:26.504300 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:27.004291726 +0000 UTC m=+151.664972953 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:26 crc kubenswrapper[5039]: I0130 13:06:26.604891 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:26 crc kubenswrapper[5039]: E0130 13:06:26.605257 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:27.105237776 +0000 UTC m=+151.765919003 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:26 crc kubenswrapper[5039]: I0130 13:06:26.697187 5039 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-lbtxl container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Jan 30 13:06:26 crc kubenswrapper[5039]: I0130 13:06:26.697240 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbtxl" podUID="f117b241-1e37-4603-bb50-aad0ee886758" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" Jan 30 13:06:26 crc kubenswrapper[5039]: I0130 13:06:26.697460 5039 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-lbtxl container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Jan 30 13:06:26 crc kubenswrapper[5039]: I0130 13:06:26.697475 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbtxl" podUID="f117b241-1e37-4603-bb50-aad0ee886758" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" Jan 30 13:06:26 crc kubenswrapper[5039]: I0130 13:06:26.706183 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:26 crc kubenswrapper[5039]: E0130 13:06:26.706502 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:27.206488804 +0000 UTC m=+151.867170031 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:26 crc kubenswrapper[5039]: I0130 13:06:26.807711 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:26 crc kubenswrapper[5039]: E0130 13:06:26.807987 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:27.307973567 +0000 UTC m=+151.968654784 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:26 crc kubenswrapper[5039]: I0130 13:06:26.908830 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:26 crc kubenswrapper[5039]: E0130 13:06:26.909235 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:27.409221224 +0000 UTC m=+152.069902451 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.009710 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:27 crc kubenswrapper[5039]: E0130 13:06:27.009892 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:27.509868216 +0000 UTC m=+152.170549433 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.009953 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:27 crc kubenswrapper[5039]: E0130 13:06:27.010375 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:27.510358208 +0000 UTC m=+152.171039435 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.110760 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:27 crc kubenswrapper[5039]: E0130 13:06:27.110856 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:27.610841067 +0000 UTC m=+152.271522294 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.111202 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:27 crc kubenswrapper[5039]: E0130 13:06:27.111483 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:27.611474022 +0000 UTC m=+152.272155249 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.129454 5039 patch_prober.go:28] interesting pod/router-default-5444994796-jplg4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 13:06:27 crc kubenswrapper[5039]: [-]has-synced failed: reason withheld Jan 30 13:06:27 crc kubenswrapper[5039]: [+]process-running ok Jan 30 13:06:27 crc kubenswrapper[5039]: healthz check failed Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.129516 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jplg4" podUID="1fbf2594-31f8-4172-85ba-4a63a6d18fa6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.213060 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:27 crc kubenswrapper[5039]: E0130 13:06:27.213414 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:27.713399136 +0000 UTC m=+152.374080363 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.314727 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:27 crc kubenswrapper[5039]: E0130 13:06:27.315686 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:27.815674618 +0000 UTC m=+152.476355845 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.355789 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-2cmnb" Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.356039 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-2cmnb" Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.357814 5039 patch_prober.go:28] interesting pod/console-f9d7485db-2cmnb container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.23:8443/health\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.357861 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-2cmnb" podUID="c8a9040d-c9a7-48df-a786-0079713a7cdc" containerName="console" probeResult="failure" output="Get \"https://10.217.0.23:8443/health\": dial tcp 10.217.0.23:8443: connect: connection refused" Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.416216 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:27 crc kubenswrapper[5039]: E0130 13:06:27.416520 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:27.916504775 +0000 UTC m=+152.577186002 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.418955 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.418980 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.433638 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.517410 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:27 crc kubenswrapper[5039]: E0130 13:06:27.519698 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:28.019684558 +0000 UTC m=+152.680365785 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.542237 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-jt5jk" Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.612073 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.612670 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.614860 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.617250 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.620179 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.620484 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 30 13:06:27 crc kubenswrapper[5039]: E0130 13:06:27.620751 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:28.120727401 +0000 UTC m=+152.781408628 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.668275 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.687127 5039 patch_prober.go:28] interesting pod/downloads-7954f5f757-ddw7q container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.687184 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ddw7q" podUID="af4a4ae0-0967-4331-971c-d7e44b45a031" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.687548 5039 patch_prober.go:28] interesting pod/downloads-7954f5f757-ddw7q container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.687597 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-ddw7q" podUID="af4a4ae0-0967-4331-971c-d7e44b45a031" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.721478 
5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ff95d9f7-8598-4335-9969-2de81a196a92-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ff95d9f7-8598-4335-9969-2de81a196a92\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.721609 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.721653 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff95d9f7-8598-4335-9969-2de81a196a92-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ff95d9f7-8598-4335-9969-2de81a196a92\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 13:06:27 crc kubenswrapper[5039]: E0130 13:06:27.722056 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:28.222035909 +0000 UTC m=+152.882717186 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.745381 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.745429 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.748073 5039 patch_prober.go:28] interesting pod/apiserver-76f77b778f-8cgg4 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.6:8443/livez\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.748126 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" podUID="56c21f31-0db8-4876-9198-ecf1453378eb" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.6:8443/livez\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.792297 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kmjcv" Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.801902 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6x6r" Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.806842 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-cj57h" Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.823254 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:27 crc kubenswrapper[5039]: E0130 13:06:27.823435 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:28.323401179 +0000 UTC m=+152.984082416 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.823497 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.823639 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff95d9f7-8598-4335-9969-2de81a196a92-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ff95d9f7-8598-4335-9969-2de81a196a92\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 13:06:27 crc kubenswrapper[5039]: E0130 13:06:27.823839 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:28.32382323 +0000 UTC m=+152.984504457 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.823931 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ff95d9f7-8598-4335-9969-2de81a196a92-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ff95d9f7-8598-4335-9969-2de81a196a92\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.824635 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ff95d9f7-8598-4335-9969-2de81a196a92-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ff95d9f7-8598-4335-9969-2de81a196a92\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.869990 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff95d9f7-8598-4335-9969-2de81a196a92-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ff95d9f7-8598-4335-9969-2de81a196a92\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.929320 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:27 crc kubenswrapper[5039]: E0130 13:06:27.930938 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:28.430917897 +0000 UTC m=+153.091599134 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.933052 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.013184 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-gp9qj" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.031146 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:28 crc kubenswrapper[5039]: E0130 13:06:28.031625 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:28.53157856 +0000 UTC m=+153.192259787 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.126125 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-jplg4" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.131306 5039 patch_prober.go:28] interesting pod/router-default-5444994796-jplg4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 13:06:28 crc kubenswrapper[5039]: [-]has-synced failed: reason withheld Jan 30 13:06:28 crc kubenswrapper[5039]: [+]process-running ok Jan 30 13:06:28 crc kubenswrapper[5039]: healthz check failed Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.131356 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jplg4" podUID="1fbf2594-31f8-4172-85ba-4a63a6d18fa6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.133360 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:28 crc kubenswrapper[5039]: E0130 13:06:28.134392 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:28.634377215 +0000 UTC m=+153.295058442 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.189643 5039 generic.go:334] "Generic (PLEG): container finished" podID="4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c" containerID="a0372bdd30a9cc27ce96abedcc6e75ce111a96cb789003ceaae72fc7d0a7c6f0" exitCode=0 Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.192613 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496300-mkldc" event={"ID":"4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c","Type":"ContainerDied","Data":"a0372bdd30a9cc27ce96abedcc6e75ce111a96cb789003ceaae72fc7d0a7c6f0"} Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.210469 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.235771 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:28 crc kubenswrapper[5039]: E0130 13:06:28.236961 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:28.736942274 +0000 UTC m=+153.397623501 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.251418 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klzdg" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.276837 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-s5lrd"] Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.278772 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s5lrd" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.293238 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.314581 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klzdg" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.327883 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s5lrd"] Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.338143 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:28 crc kubenswrapper[5039]: E0130 13:06:28.339907 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:28.839885422 +0000 UTC m=+153.500566659 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.382682 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.448680 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wksws"] Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.451246 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wksws" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.457415 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.457781 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.457832 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p26g\" (UniqueName: \"kubernetes.io/projected/5613a050-2fc6-4554-bebe-a8afa71c3815-kube-api-access-7p26g\") pod \"certified-operators-s5lrd\" (UID: \"5613a050-2fc6-4554-bebe-a8afa71c3815\") " pod="openshift-marketplace/certified-operators-s5lrd" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.457874 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5613a050-2fc6-4554-bebe-a8afa71c3815-catalog-content\") pod \"certified-operators-s5lrd\" (UID: \"5613a050-2fc6-4554-bebe-a8afa71c3815\") " pod="openshift-marketplace/certified-operators-s5lrd" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.457913 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5613a050-2fc6-4554-bebe-a8afa71c3815-utilities\") pod \"certified-operators-s5lrd\" (UID: \"5613a050-2fc6-4554-bebe-a8afa71c3815\") " pod="openshift-marketplace/certified-operators-s5lrd" Jan 30 13:06:28 crc kubenswrapper[5039]: E0130 13:06:28.458130 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:28.958114278 +0000 UTC m=+153.618795505 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.467413 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wksws"] Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.495255 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sxg45" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.521359 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sxg45" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.558816 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:28 crc kubenswrapper[5039]: E0130 13:06:28.559132 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:29.059106909 +0000 UTC m=+153.719788136 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.559238 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f64e1921-5488-46f8-bf3a-af141cd0c277-catalog-content\") pod \"community-operators-wksws\" (UID: \"f64e1921-5488-46f8-bf3a-af141cd0c277\") " pod="openshift-marketplace/community-operators-wksws" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.559270 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f64e1921-5488-46f8-bf3a-af141cd0c277-utilities\") pod \"community-operators-wksws\" (UID: \"f64e1921-5488-46f8-bf3a-af141cd0c277\") " pod="openshift-marketplace/community-operators-wksws" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.559323 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.559344 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p26g\" (UniqueName: \"kubernetes.io/projected/5613a050-2fc6-4554-bebe-a8afa71c3815-kube-api-access-7p26g\") pod \"certified-operators-s5lrd\" (UID: \"5613a050-2fc6-4554-bebe-a8afa71c3815\") " pod="openshift-marketplace/certified-operators-s5lrd" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.559368 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5613a050-2fc6-4554-bebe-a8afa71c3815-catalog-content\") pod \"certified-operators-s5lrd\" (UID: \"5613a050-2fc6-4554-bebe-a8afa71c3815\") " pod="openshift-marketplace/certified-operators-s5lrd" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.559394 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5613a050-2fc6-4554-bebe-a8afa71c3815-utilities\") pod \"certified-operators-s5lrd\" (UID: \"5613a050-2fc6-4554-bebe-a8afa71c3815\") " pod="openshift-marketplace/certified-operators-s5lrd" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.559434 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svlb7\" (UniqueName: \"kubernetes.io/projected/f64e1921-5488-46f8-bf3a-af141cd0c277-kube-api-access-svlb7\") pod \"community-operators-wksws\" (UID: \"f64e1921-5488-46f8-bf3a-af141cd0c277\") " pod="openshift-marketplace/community-operators-wksws" Jan 30 13:06:28 crc kubenswrapper[5039]: E0130 13:06:28.559736 5039 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:29.059721554 +0000 UTC m=+153.720402781 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.560551 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5613a050-2fc6-4554-bebe-a8afa71c3815-catalog-content\") pod \"certified-operators-s5lrd\" (UID: \"5613a050-2fc6-4554-bebe-a8afa71c3815\") " pod="openshift-marketplace/certified-operators-s5lrd" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.560826 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5613a050-2fc6-4554-bebe-a8afa71c3815-utilities\") pod \"certified-operators-s5lrd\" (UID: \"5613a050-2fc6-4554-bebe-a8afa71c3815\") " pod="openshift-marketplace/certified-operators-s5lrd" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.594910 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7p26g\" (UniqueName: \"kubernetes.io/projected/5613a050-2fc6-4554-bebe-a8afa71c3815-kube-api-access-7p26g\") pod \"certified-operators-s5lrd\" (UID: \"5613a050-2fc6-4554-bebe-a8afa71c3815\") " pod="openshift-marketplace/certified-operators-s5lrd" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.628465 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s5lrd" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.660940 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.661129 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svlb7\" (UniqueName: \"kubernetes.io/projected/f64e1921-5488-46f8-bf3a-af141cd0c277-kube-api-access-svlb7\") pod \"community-operators-wksws\" (UID: \"f64e1921-5488-46f8-bf3a-af141cd0c277\") " pod="openshift-marketplace/community-operators-wksws" Jan 30 13:06:28 crc kubenswrapper[5039]: E0130 13:06:28.661171 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:29.161144254 +0000 UTC m=+153.821825481 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.661208 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f64e1921-5488-46f8-bf3a-af141cd0c277-catalog-content\") pod \"community-operators-wksws\" (UID: \"f64e1921-5488-46f8-bf3a-af141cd0c277\") " pod="openshift-marketplace/community-operators-wksws" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.661256 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f64e1921-5488-46f8-bf3a-af141cd0c277-utilities\") pod \"community-operators-wksws\" (UID: \"f64e1921-5488-46f8-bf3a-af141cd0c277\") " pod="openshift-marketplace/community-operators-wksws" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.661346 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:28 crc kubenswrapper[5039]: E0130 13:06:28.661700 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:29.161686687 +0000 UTC m=+153.822367914 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.661891 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f64e1921-5488-46f8-bf3a-af141cd0c277-utilities\") pod \"community-operators-wksws\" (UID: \"f64e1921-5488-46f8-bf3a-af141cd0c277\") " pod="openshift-marketplace/community-operators-wksws" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.662531 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f64e1921-5488-46f8-bf3a-af141cd0c277-catalog-content\") pod \"community-operators-wksws\" (UID: \"f64e1921-5488-46f8-bf3a-af141cd0c277\") " pod="openshift-marketplace/community-operators-wksws" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.665275 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-prfhj"] Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.666501 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-prfhj" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.685953 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-prfhj"] Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.698901 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svlb7\" (UniqueName: \"kubernetes.io/projected/f64e1921-5488-46f8-bf3a-af141cd0c277-kube-api-access-svlb7\") pod \"community-operators-wksws\" (UID: \"f64e1921-5488-46f8-bf3a-af141cd0c277\") " pod="openshift-marketplace/community-operators-wksws" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.762487 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:28 crc kubenswrapper[5039]: E0130 13:06:28.762641 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:29.262623297 +0000 UTC m=+153.923304524 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.763250 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52b110b9-c1bb-4f99-b0a1-56327188c912-utilities\") pod \"certified-operators-prfhj\" (UID: \"52b110b9-c1bb-4f99-b0a1-56327188c912\") " pod="openshift-marketplace/certified-operators-prfhj" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.763335 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8txw\" (UniqueName: \"kubernetes.io/projected/52b110b9-c1bb-4f99-b0a1-56327188c912-kube-api-access-r8txw\") pod \"certified-operators-prfhj\" (UID: \"52b110b9-c1bb-4f99-b0a1-56327188c912\") " pod="openshift-marketplace/certified-operators-prfhj" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.763369 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52b110b9-c1bb-4f99-b0a1-56327188c912-catalog-content\") pod \"certified-operators-prfhj\" (UID: \"52b110b9-c1bb-4f99-b0a1-56327188c912\") " pod="openshift-marketplace/certified-operators-prfhj" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.763413 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:28 crc kubenswrapper[5039]: E0130 13:06:28.763693 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:29.263680812 +0000 UTC m=+153.924362039 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.778115 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wksws" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.865381 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gqxts"] Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.873320 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gqxts" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.879817 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gqxts"] Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.885950 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:28 crc kubenswrapper[5039]: E0130 13:06:28.886050 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:29.386033347 +0000 UTC m=+154.046714574 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.886477 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.886543 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52b110b9-c1bb-4f99-b0a1-56327188c912-utilities\") pod \"certified-operators-prfhj\" (UID: \"52b110b9-c1bb-4f99-b0a1-56327188c912\") " pod="openshift-marketplace/certified-operators-prfhj" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.886567 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63af1747-5ca2-4c06-89fa-dc040184452d-utilities\") pod \"community-operators-gqxts\" (UID: \"63af1747-5ca2-4c06-89fa-dc040184452d\") " pod="openshift-marketplace/community-operators-gqxts" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.886599 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlntp\" (UniqueName: \"kubernetes.io/projected/63af1747-5ca2-4c06-89fa-dc040184452d-kube-api-access-nlntp\") pod \"community-operators-gqxts\" (UID: \"63af1747-5ca2-4c06-89fa-dc040184452d\") " pod="openshift-marketplace/community-operators-gqxts" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.886624 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63af1747-5ca2-4c06-89fa-dc040184452d-catalog-content\") pod \"community-operators-gqxts\" (UID: \"63af1747-5ca2-4c06-89fa-dc040184452d\") " pod="openshift-marketplace/community-operators-gqxts" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.886677 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8txw\" (UniqueName: \"kubernetes.io/projected/52b110b9-c1bb-4f99-b0a1-56327188c912-kube-api-access-r8txw\") pod \"certified-operators-prfhj\" (UID: \"52b110b9-c1bb-4f99-b0a1-56327188c912\") " pod="openshift-marketplace/certified-operators-prfhj" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.886706 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52b110b9-c1bb-4f99-b0a1-56327188c912-catalog-content\") pod \"certified-operators-prfhj\" (UID: \"52b110b9-c1bb-4f99-b0a1-56327188c912\") " pod="openshift-marketplace/certified-operators-prfhj" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.887084 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/52b110b9-c1bb-4f99-b0a1-56327188c912-catalog-content\") pod \"certified-operators-prfhj\" (UID: \"52b110b9-c1bb-4f99-b0a1-56327188c912\") " pod="openshift-marketplace/certified-operators-prfhj" Jan 30 13:06:28 crc kubenswrapper[5039]: E0130 13:06:28.887322 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:29.387311088 +0000 UTC m=+154.047992315 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.887657 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52b110b9-c1bb-4f99-b0a1-56327188c912-utilities\") pod \"certified-operators-prfhj\" (UID: \"52b110b9-c1bb-4f99-b0a1-56327188c912\") " pod="openshift-marketplace/certified-operators-prfhj" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.960143 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8txw\" (UniqueName: \"kubernetes.io/projected/52b110b9-c1bb-4f99-b0a1-56327188c912-kube-api-access-r8txw\") pod \"certified-operators-prfhj\" (UID: \"52b110b9-c1bb-4f99-b0a1-56327188c912\") " pod="openshift-marketplace/certified-operators-prfhj" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.993574 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.994154 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63af1747-5ca2-4c06-89fa-dc040184452d-utilities\") pod \"community-operators-gqxts\" (UID: \"63af1747-5ca2-4c06-89fa-dc040184452d\") " pod="openshift-marketplace/community-operators-gqxts" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.994188 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlntp\" (UniqueName: \"kubernetes.io/projected/63af1747-5ca2-4c06-89fa-dc040184452d-kube-api-access-nlntp\") pod \"community-operators-gqxts\" (UID: \"63af1747-5ca2-4c06-89fa-dc040184452d\") " pod="openshift-marketplace/community-operators-gqxts" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.994256 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63af1747-5ca2-4c06-89fa-dc040184452d-catalog-content\") pod \"community-operators-gqxts\" (UID: \"63af1747-5ca2-4c06-89fa-dc040184452d\") " pod="openshift-marketplace/community-operators-gqxts" Jan 30 13:06:28 crc kubenswrapper[5039]: E0130 13:06:28.994688 5039 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:29.494659452 +0000 UTC m=+154.155340679 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.994937 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-prfhj" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.995171 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63af1747-5ca2-4c06-89fa-dc040184452d-utilities\") pod \"community-operators-gqxts\" (UID: \"63af1747-5ca2-4c06-89fa-dc040184452d\") " pod="openshift-marketplace/community-operators-gqxts" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.997513 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63af1747-5ca2-4c06-89fa-dc040184452d-catalog-content\") pod \"community-operators-gqxts\" (UID: \"63af1747-5ca2-4c06-89fa-dc040184452d\") " pod="openshift-marketplace/community-operators-gqxts" Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.029091 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s5lrd"] Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.046883 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlntp\" (UniqueName: \"kubernetes.io/projected/63af1747-5ca2-4c06-89fa-dc040184452d-kube-api-access-nlntp\") pod \"community-operators-gqxts\" (UID: \"63af1747-5ca2-4c06-89fa-dc040184452d\") " pod="openshift-marketplace/community-operators-gqxts" Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.101697 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:29 crc kubenswrapper[5039]: E0130 13:06:29.101978 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:29.601966855 +0000 UTC m=+154.262648072 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.134884 5039 patch_prober.go:28] interesting pod/router-default-5444994796-jplg4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 13:06:29 crc kubenswrapper[5039]: [-]has-synced failed: reason withheld
Jan 30 13:06:29 crc kubenswrapper[5039]: [+]process-running ok
Jan 30 13:06:29 crc kubenswrapper[5039]: healthz check failed
Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.134937 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jplg4" podUID="1fbf2594-31f8-4172-85ba-4a63a6d18fa6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.177403 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.178001 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 30 13:06:29 crc kubenswrapper[5039]: W0130 13:06:29.180124 5039 reflector.go:561] object-"openshift-kube-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-kube-apiserver": no relationship found between node 'crc' and this object
Jan 30 13:06:29 crc kubenswrapper[5039]: E0130 13:06:29.180150 5039 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 30 13:06:29 crc kubenswrapper[5039]: W0130 13:06:29.181200 5039 reflector.go:561] object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n": failed to list *v1.Secret: secrets "installer-sa-dockercfg-5pr6n" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-kube-apiserver": no relationship found between node 'crc' and this object
Jan 30 13:06:29 crc kubenswrapper[5039]: E0130 13:06:29.181239 5039 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-5pr6n\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"installer-sa-dockercfg-5pr6n\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.201313 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.202135 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.202231 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/312988e0-14fa-43e6-9d03-7c693e868f09-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"312988e0-14fa-43e6-9d03-7c693e868f09\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.202255 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/312988e0-14fa-43e6-9d03-7c693e868f09-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"312988e0-14fa-43e6-9d03-7c693e868f09\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 30 13:06:29 crc kubenswrapper[5039]: E0130 13:06:29.202396 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:29.702381512 +0000 UTC m=+154.363062729 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.225939 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"ff95d9f7-8598-4335-9969-2de81a196a92","Type":"ContainerStarted","Data":"c9099c17e5a04083ee5f7c32961d3d31ad50816e8d6e83078b1ee3d4f9113151"}
Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.225991 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"ff95d9f7-8598-4335-9969-2de81a196a92","Type":"ContainerStarted","Data":"31cd39856e7265e9a83b1f9518b7f0010e9c9cca5734b4e995c775b9bd6e9894"}
Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.229608 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s5lrd" event={"ID":"5613a050-2fc6-4554-bebe-a8afa71c3815","Type":"ContainerStarted","Data":"cbd7e75d20e256e4f099405468b97eec039052c798b34b5c78d34219ddaab285"}
Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.246262 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5t9bm" event={"ID":"b67c1f74-8845-4dbd-9e2b-df446569a88a","Type":"ContainerStarted","Data":"2b72dbbc8f49d8f8a3f27474f02cd706eb00601afb020fe1d798a07d90b72e78"}
Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.260729 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.260710186 podStartE2EDuration="2.260710186s" podCreationTimestamp="2026-01-30 13:06:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:29.2583591 +0000 UTC m=+153.919040327" watchObservedRunningTime="2026-01-30 13:06:29.260710186 +0000 UTC m=+153.921391413"
Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.305765 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5"
Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.305852 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/312988e0-14fa-43e6-9d03-7c693e868f09-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"312988e0-14fa-43e6-9d03-7c693e868f09\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.305887 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/312988e0-14fa-43e6-9d03-7c693e868f09-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"312988e0-14fa-43e6-9d03-7c693e868f09\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.305994 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/312988e0-14fa-43e6-9d03-7c693e868f09-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"312988e0-14fa-43e6-9d03-7c693e868f09\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 30 13:06:29 crc kubenswrapper[5039]: E0130 13:06:29.307368 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:29.807350779 +0000 UTC m=+154.468032006 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.307959 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gqxts"
Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.318606 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wksws"]
Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.408190 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:06:29 crc kubenswrapper[5039]: E0130 13:06:29.408913 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:29.908898633 +0000 UTC m=+154.569579860 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.479422 5039 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.509310 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5"
Jan 30 13:06:29 crc kubenswrapper[5039]: E0130 13:06:29.509708 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:30.00969479 +0000 UTC m=+154.670376017 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.539566 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-prfhj"]
Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.612101 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:06:29 crc kubenswrapper[5039]: E0130 13:06:29.612646 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:30.112631917 +0000 UTC m=+154.773313144 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.704164 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbtxl"
Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.714240 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5"
Jan 30 13:06:29 crc kubenswrapper[5039]: E0130 13:06:29.717832 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:30.217814169 +0000 UTC m=+154.878495476 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.740707 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496300-mkldc"
Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.820528 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.820608 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c-config-volume\") pod \"4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c\" (UID: \"4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c\") "
Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.820706 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c-secret-volume\") pod \"4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c\" (UID: \"4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c\") "
Jan 30 13:06:29 crc kubenswrapper[5039]: E0130 13:06:29.820864 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:30.320839039 +0000 UTC m=+154.981520266 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.821063 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvstf\" (UniqueName: \"kubernetes.io/projected/4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c-kube-api-access-pvstf\") pod \"4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c\" (UID: \"4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c\") "
Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.821529 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c-config-volume" (OuterVolumeSpecName: "config-volume") pod "4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c" (UID: "4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.822684 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5"
Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.822971 5039 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c-config-volume\") on node \"crc\" DevicePath \"\""
Jan 30 13:06:29 crc kubenswrapper[5039]: E0130 13:06:29.822997 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:30.322987641 +0000 UTC m=+154.983668868 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.829494 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c" (UID: "4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.845924 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c-kube-api-access-pvstf" (OuterVolumeSpecName: "kube-api-access-pvstf") pod "4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c" (UID: "4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c"). InnerVolumeSpecName "kube-api-access-pvstf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.923958 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.924415 5039 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.924441 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvstf\" (UniqueName: \"kubernetes.io/projected/4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c-kube-api-access-pvstf\") on node \"crc\" DevicePath \"\""
Jan 30 13:06:29 crc kubenswrapper[5039]: E0130 13:06:29.924527 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:30.424507315 +0000 UTC m=+155.085188542 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.926720 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gqxts"]
Jan 30 13:06:29 crc kubenswrapper[5039]: W0130 13:06:29.974887 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63af1747_5ca2_4c06_89fa_dc040184452d.slice/crio-be08fa685d76497eb315f3a8d2c5668e3a0f71216650a0d40499e797ce0c0201 WatchSource:0}: Error finding container be08fa685d76497eb315f3a8d2c5668e3a0f71216650a0d40499e797ce0c0201: Status 404 returned error can't find the container with id be08fa685d76497eb315f3a8d2c5668e3a0f71216650a0d40499e797ce0c0201
Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.025876 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5"
Jan 30 13:06:30 crc kubenswrapper[5039]: E0130 13:06:30.026168 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:30.526156342 +0000 UTC m=+155.186837569 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.040240 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.055732 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/312988e0-14fa-43e6-9d03-7c693e868f09-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"312988e0-14fa-43e6-9d03-7c693e868f09\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.126983 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:06:30 crc kubenswrapper[5039]: E0130 13:06:30.127420 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:30.627393299 +0000 UTC m=+155.288074526 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.128193 5039 patch_prober.go:28] interesting pod/router-default-5444994796-jplg4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 13:06:30 crc kubenswrapper[5039]: [-]has-synced failed: reason withheld
Jan 30 13:06:30 crc kubenswrapper[5039]: [+]process-running ok
Jan 30 13:06:30 crc kubenswrapper[5039]: healthz check failed
Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.128246 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jplg4" podUID="1fbf2594-31f8-4172-85ba-4a63a6d18fa6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.228859 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5"
Jan 30 13:06:30 crc kubenswrapper[5039]: E0130 13:06:30.229222 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:30.729204589 +0000 UTC m=+155.389885816 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.260241 5039 generic.go:334] "Generic (PLEG): container finished" podID="52b110b9-c1bb-4f99-b0a1-56327188c912" containerID="6deb1868933725c903e241c094f22977dd24c36c2ae7469289e056277a404396" exitCode=0
Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.260362 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prfhj" event={"ID":"52b110b9-c1bb-4f99-b0a1-56327188c912","Type":"ContainerDied","Data":"6deb1868933725c903e241c094f22977dd24c36c2ae7469289e056277a404396"}
Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.260398 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prfhj" event={"ID":"52b110b9-c1bb-4f99-b0a1-56327188c912","Type":"ContainerStarted","Data":"a99dc0fa20017d582143029df54b4ce3a2a13e3646da5203bf1ec4b40fd21d8f"}
Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.262202 5039 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.264598 5039 generic.go:334] "Generic (PLEG): container finished" podID="f64e1921-5488-46f8-bf3a-af141cd0c277" containerID="00ac131a1a3467a5c551dafc671bb8dfbb993552f3d698af8e919774691425cc" exitCode=0
Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.264657 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wksws" event={"ID":"f64e1921-5488-46f8-bf3a-af141cd0c277","Type":"ContainerDied","Data":"00ac131a1a3467a5c551dafc671bb8dfbb993552f3d698af8e919774691425cc"}
Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.264993 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wksws" event={"ID":"f64e1921-5488-46f8-bf3a-af141cd0c277","Type":"ContainerStarted","Data":"75a8306c8bded401082c533b20ec90dbf13e7d641b9e64c4b70d8bcf9fbfedc1"}
Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.268436 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5t9bm" event={"ID":"b67c1f74-8845-4dbd-9e2b-df446569a88a","Type":"ContainerStarted","Data":"f468123a2f48cd9cd183c8b47e90692b51c99da8ad5621ba0edbba24002de26f"}
Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.271812 5039 generic.go:334] "Generic (PLEG): container finished" podID="ff95d9f7-8598-4335-9969-2de81a196a92" containerID="c9099c17e5a04083ee5f7c32961d3d31ad50816e8d6e83078b1ee3d4f9113151" exitCode=0
Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.271887 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"ff95d9f7-8598-4335-9969-2de81a196a92","Type":"ContainerDied","Data":"c9099c17e5a04083ee5f7c32961d3d31ad50816e8d6e83078b1ee3d4f9113151"}
Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.273811 5039 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-30T13:06:29.479447851Z","Handler":null,"Name":""}
Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.273934 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496300-mkldc" event={"ID":"4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c","Type":"ContainerDied","Data":"e066897b0d1d8b0a82a2e030d89bcace2cb609cf3bd02499aac4837fe1b6e7b4"}
Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.273966 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496300-mkldc"
Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.273977 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e066897b0d1d8b0a82a2e030d89bcace2cb609cf3bd02499aac4837fe1b6e7b4"
Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.275391 5039 generic.go:334] "Generic (PLEG): container finished" podID="5613a050-2fc6-4554-bebe-a8afa71c3815" containerID="8f35b8be69d6447e1162cf03b95a0a01066a7670bd9c95b668d6013b3a2a52cb" exitCode=0
Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.275504 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s5lrd" event={"ID":"5613a050-2fc6-4554-bebe-a8afa71c3815","Type":"ContainerDied","Data":"8f35b8be69d6447e1162cf03b95a0a01066a7670bd9c95b668d6013b3a2a52cb"}
Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.276981 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gqxts" event={"ID":"63af1747-5ca2-4c06-89fa-dc040184452d","Type":"ContainerStarted","Data":"be08fa685d76497eb315f3a8d2c5668e3a0f71216650a0d40499e797ce0c0201"}
Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.308365 5039 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.308406 5039 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.329983 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.352804 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.431331 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5"
Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.435254 5039 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.435302 5039 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5"
Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.443659 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ccjvb"]
Jan 30 13:06:30 crc kubenswrapper[5039]: E0130 13:06:30.444072 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c" containerName="collect-profiles"
Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.444094 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c" containerName="collect-profiles"
Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.444355 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c" containerName="collect-profiles"
Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.445897 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ccjvb"
Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.451546 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.463118 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ccjvb"]
Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.474485 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5"
Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.510778 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5"
Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.532860 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66476d2f-ef08-4051-97a8-c2edb46b7004-utilities\") pod \"redhat-marketplace-ccjvb\" (UID: \"66476d2f-ef08-4051-97a8-c2edb46b7004\") " pod="openshift-marketplace/redhat-marketplace-ccjvb"
Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.532933 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5vr6\" (UniqueName: \"kubernetes.io/projected/66476d2f-ef08-4051-97a8-c2edb46b7004-kube-api-access-f5vr6\") pod \"redhat-marketplace-ccjvb\" (UID: \"66476d2f-ef08-4051-97a8-c2edb46b7004\") " pod="openshift-marketplace/redhat-marketplace-ccjvb"
Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.532962 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66476d2f-ef08-4051-97a8-c2edb46b7004-catalog-content\") pod \"redhat-marketplace-ccjvb\" (UID: \"66476d2f-ef08-4051-97a8-c2edb46b7004\") " pod="openshift-marketplace/redhat-marketplace-ccjvb"
Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.559137 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.566535 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.634048 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66476d2f-ef08-4051-97a8-c2edb46b7004-utilities\") pod \"redhat-marketplace-ccjvb\" (UID: \"66476d2f-ef08-4051-97a8-c2edb46b7004\") " pod="openshift-marketplace/redhat-marketplace-ccjvb"
Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.634095 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5vr6\" (UniqueName: \"kubernetes.io/projected/66476d2f-ef08-4051-97a8-c2edb46b7004-kube-api-access-f5vr6\") pod \"redhat-marketplace-ccjvb\" (UID: \"66476d2f-ef08-4051-97a8-c2edb46b7004\") " pod="openshift-marketplace/redhat-marketplace-ccjvb"
Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.634117 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66476d2f-ef08-4051-97a8-c2edb46b7004-catalog-content\") pod \"redhat-marketplace-ccjvb\" (UID: \"66476d2f-ef08-4051-97a8-c2edb46b7004\") " pod="openshift-marketplace/redhat-marketplace-ccjvb"
Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.634610 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66476d2f-ef08-4051-97a8-c2edb46b7004-catalog-content\") pod \"redhat-marketplace-ccjvb\" (UID: \"66476d2f-ef08-4051-97a8-c2edb46b7004\") " pod="openshift-marketplace/redhat-marketplace-ccjvb"
Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.634997 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66476d2f-ef08-4051-97a8-c2edb46b7004-utilities\") pod \"redhat-marketplace-ccjvb\" (UID: \"66476d2f-ef08-4051-97a8-c2edb46b7004\") " pod="openshift-marketplace/redhat-marketplace-ccjvb"
Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.658216 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5vr6\" (UniqueName: \"kubernetes.io/projected/66476d2f-ef08-4051-97a8-c2edb46b7004-kube-api-access-f5vr6\") pod \"redhat-marketplace-ccjvb\" (UID: \"66476d2f-ef08-4051-97a8-c2edb46b7004\") " pod="openshift-marketplace/redhat-marketplace-ccjvb"
Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.706497 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-v2vm5"]
Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.765071 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ccjvb"
Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.842328 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-759rj"]
Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.845876 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-759rj"
Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.848720 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-759rj"]
Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.940841 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80cb63fe-71b1-42e7-ac04-a81c89920b46-utilities\") pod \"redhat-marketplace-759rj\" (UID: \"80cb63fe-71b1-42e7-ac04-a81c89920b46\") " pod="openshift-marketplace/redhat-marketplace-759rj"
Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.941176 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80cb63fe-71b1-42e7-ac04-a81c89920b46-catalog-content\") pod \"redhat-marketplace-759rj\" (UID: \"80cb63fe-71b1-42e7-ac04-a81c89920b46\") " pod="openshift-marketplace/redhat-marketplace-759rj"
Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.941208 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2692s\" (UniqueName: \"kubernetes.io/projected/80cb63fe-71b1-42e7-ac04-a81c89920b46-kube-api-access-2692s\") pod \"redhat-marketplace-759rj\" (UID: \"80cb63fe-71b1-42e7-ac04-a81c89920b46\") " pod="openshift-marketplace/redhat-marketplace-759rj"
Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.008626 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.042705 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80cb63fe-71b1-42e7-ac04-a81c89920b46-utilities\") pod \"redhat-marketplace-759rj\" (UID: \"80cb63fe-71b1-42e7-ac04-a81c89920b46\") " pod="openshift-marketplace/redhat-marketplace-759rj"
Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.042765 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80cb63fe-71b1-42e7-ac04-a81c89920b46-catalog-content\") pod \"redhat-marketplace-759rj\" (UID: \"80cb63fe-71b1-42e7-ac04-a81c89920b46\") " pod="openshift-marketplace/redhat-marketplace-759rj"
Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.042803 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2692s\" (UniqueName: \"kubernetes.io/projected/80cb63fe-71b1-42e7-ac04-a81c89920b46-kube-api-access-2692s\") pod \"redhat-marketplace-759rj\" (UID: \"80cb63fe-71b1-42e7-ac04-a81c89920b46\") " pod="openshift-marketplace/redhat-marketplace-759rj"
Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.043631 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80cb63fe-71b1-42e7-ac04-a81c89920b46-utilities\") pod \"redhat-marketplace-759rj\" (UID: \"80cb63fe-71b1-42e7-ac04-a81c89920b46\") " pod="openshift-marketplace/redhat-marketplace-759rj"
Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.045961 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80cb63fe-71b1-42e7-ac04-a81c89920b46-catalog-content\") pod \"redhat-marketplace-759rj\" (UID: \"80cb63fe-71b1-42e7-ac04-a81c89920b46\") " pod="openshift-marketplace/redhat-marketplace-759rj"
Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.066287 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2692s\" (UniqueName: \"kubernetes.io/projected/80cb63fe-71b1-42e7-ac04-a81c89920b46-kube-api-access-2692s\") pod \"redhat-marketplace-759rj\" (UID: \"80cb63fe-71b1-42e7-ac04-a81c89920b46\") " pod="openshift-marketplace/redhat-marketplace-759rj"
Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.069806 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ccjvb"]
Jan 30 13:06:31 crc kubenswrapper[5039]: W0130 13:06:31.087131 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66476d2f_ef08_4051_97a8_c2edb46b7004.slice/crio-6942da3d4b38decfd5526ee8da0e46fd670cef61a06d29db347b6ebcc1cc2bcd WatchSource:0}: Error finding container 6942da3d4b38decfd5526ee8da0e46fd670cef61a06d29db347b6ebcc1cc2bcd: Status 404 returned error can't find the container with id 6942da3d4b38decfd5526ee8da0e46fd670cef61a06d29db347b6ebcc1cc2bcd
Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.129148 5039 patch_prober.go:28] interesting pod/router-default-5444994796-jplg4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 13:06:31 crc kubenswrapper[5039]: [-]has-synced failed: reason withheld
Jan 30 13:06:31 crc kubenswrapper[5039]: [+]process-running ok
Jan 30 13:06:31 crc kubenswrapper[5039]: healthz check failed
Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.129228 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jplg4" podUID="1fbf2594-31f8-4172-85ba-4a63a6d18fa6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.165260 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-759rj"
Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.289651 5039 generic.go:334] "Generic (PLEG): container finished" podID="66476d2f-ef08-4051-97a8-c2edb46b7004" containerID="2e730d555d1abec3010a0b5ae6773493811345a6557fb62f81967e838646806d" exitCode=0
Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.289772 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ccjvb" event={"ID":"66476d2f-ef08-4051-97a8-c2edb46b7004","Type":"ContainerDied","Data":"2e730d555d1abec3010a0b5ae6773493811345a6557fb62f81967e838646806d"}
Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.289815 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ccjvb" event={"ID":"66476d2f-ef08-4051-97a8-c2edb46b7004","Type":"ContainerStarted","Data":"6942da3d4b38decfd5526ee8da0e46fd670cef61a06d29db347b6ebcc1cc2bcd"}
Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.306447 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5t9bm" event={"ID":"b67c1f74-8845-4dbd-9e2b-df446569a88a","Type":"ContainerStarted","Data":"61338ec96332fe8f35a7db0a8583779613718c7af185e8b0ef55af84eb400f69"}
Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.315304 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"312988e0-14fa-43e6-9d03-7c693e868f09","Type":"ContainerStarted","Data":"33cd5faec2028159378df27fb45a51d5630cc2c3f91061cf7c92b001f77b770b"}
Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.317378 5039 generic.go:334] "Generic (PLEG): container finished" podID="63af1747-5ca2-4c06-89fa-dc040184452d" containerID="4de2d19fcdb985976edce2b77ff1023b7408e7f584c35702381dc5a2d6ef1e6e" exitCode=0
Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.317493 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gqxts" event={"ID":"63af1747-5ca2-4c06-89fa-dc040184452d","Type":"ContainerDied","Data":"4de2d19fcdb985976edce2b77ff1023b7408e7f584c35702381dc5a2d6ef1e6e"}
Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.323341 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" event={"ID":"0185664b-147e-4a84-9dc0-31ea880e9db4","Type":"ContainerStarted","Data":"e1d40021d5a013a692a76080e08f2b03f89b6ae92605572c547e16383cb57a9b"}
Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.323392 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" event={"ID":"0185664b-147e-4a84-9dc0-31ea880e9db4","Type":"ContainerStarted","Data":"14ef90e3cdef13211956d89d4a3d153760b6e2bccefbbfcedfc9f509521480bd"}
Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.323800 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5"
Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.331241 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-5t9bm" podStartSLOduration=16.331224696 podStartE2EDuration="16.331224696s" podCreationTimestamp="2026-01-30 13:06:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:31.330412647 +0000 UTC m=+155.991093894" watchObservedRunningTime="2026-01-30 13:06:31.331224696 +0000 UTC m=+155.991905943"
Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.366045 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" podStartSLOduration=134.366003284 podStartE2EDuration="2m14.366003284s" podCreationTimestamp="2026-01-30 13:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:31.358506043 +0000 UTC m=+156.019187280" watchObservedRunningTime="2026-01-30 13:06:31.366003284 +0000 UTC m=+156.026684521"
Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.397005 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-759rj"]
Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.441174 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gx2hg"]
Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.442426 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gx2hg"
Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.445445 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.449204 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gx2hg"]
Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.467960 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mckmz\" (UniqueName: \"kubernetes.io/projected/c79ca838-03cc-4885-969d-5aad41173112-kube-api-access-mckmz\") pod \"redhat-operators-gx2hg\" (UID: \"c79ca838-03cc-4885-969d-5aad41173112\") " pod="openshift-marketplace/redhat-operators-gx2hg"
Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.468047 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c79ca838-03cc-4885-969d-5aad41173112-utilities\") pod \"redhat-operators-gx2hg\" (UID: \"c79ca838-03cc-4885-969d-5aad41173112\") " pod="openshift-marketplace/redhat-operators-gx2hg"
Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.468143 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c79ca838-03cc-4885-969d-5aad41173112-catalog-content\") pod \"redhat-operators-gx2hg\" (UID: \"c79ca838-03cc-4885-969d-5aad41173112\") " pod="openshift-marketplace/redhat-operators-gx2hg"
Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.547147 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.571292 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mckmz\" (UniqueName: \"kubernetes.io/projected/c79ca838-03cc-4885-969d-5aad41173112-kube-api-access-mckmz\") pod \"redhat-operators-gx2hg\" (UID: \"c79ca838-03cc-4885-969d-5aad41173112\") " pod="openshift-marketplace/redhat-operators-gx2hg"
Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.572819 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c79ca838-03cc-4885-969d-5aad41173112-utilities\") pod \"redhat-operators-gx2hg\" (UID: \"c79ca838-03cc-4885-969d-5aad41173112\") " pod="openshift-marketplace/redhat-operators-gx2hg"
Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.573156 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c79ca838-03cc-4885-969d-5aad41173112-catalog-content\") pod \"redhat-operators-gx2hg\" (UID: \"c79ca838-03cc-4885-969d-5aad41173112\") " pod="openshift-marketplace/redhat-operators-gx2hg"
Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.574031 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c79ca838-03cc-4885-969d-5aad41173112-catalog-content\") pod \"redhat-operators-gx2hg\" (UID: \"c79ca838-03cc-4885-969d-5aad41173112\") " pod="openshift-marketplace/redhat-operators-gx2hg"
Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.574484 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c79ca838-03cc-4885-969d-5aad41173112-utilities\") pod \"redhat-operators-gx2hg\" (UID: \"c79ca838-03cc-4885-969d-5aad41173112\") " pod="openshift-marketplace/redhat-operators-gx2hg"
Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.593999 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mckmz\" (UniqueName: \"kubernetes.io/projected/c79ca838-03cc-4885-969d-5aad41173112-kube-api-access-mckmz\") pod \"redhat-operators-gx2hg\" (UID: \"c79ca838-03cc-4885-969d-5aad41173112\") " pod="openshift-marketplace/redhat-operators-gx2hg"
Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.674050 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ff95d9f7-8598-4335-9969-2de81a196a92-kubelet-dir\") pod \"ff95d9f7-8598-4335-9969-2de81a196a92\" (UID: \"ff95d9f7-8598-4335-9969-2de81a196a92\") "
Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.674110 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff95d9f7-8598-4335-9969-2de81a196a92-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ff95d9f7-8598-4335-9969-2de81a196a92" (UID: "ff95d9f7-8598-4335-9969-2de81a196a92"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.674210 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff95d9f7-8598-4335-9969-2de81a196a92-kube-api-access\") pod \"ff95d9f7-8598-4335-9969-2de81a196a92\" (UID: \"ff95d9f7-8598-4335-9969-2de81a196a92\") "
Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.674526 5039 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ff95d9f7-8598-4335-9969-2de81a196a92-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.677884 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff95d9f7-8598-4335-9969-2de81a196a92-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ff95d9f7-8598-4335-9969-2de81a196a92" (UID: "ff95d9f7-8598-4335-9969-2de81a196a92"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.767276 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gx2hg"
Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.776639 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff95d9f7-8598-4335-9969-2de81a196a92-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.843457 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tbppj"]
Jan 30 13:06:31 crc kubenswrapper[5039]: E0130 13:06:31.843687 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff95d9f7-8598-4335-9969-2de81a196a92" containerName="pruner"
Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.843702 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff95d9f7-8598-4335-9969-2de81a196a92" containerName="pruner"
Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.843836 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff95d9f7-8598-4335-9969-2de81a196a92" containerName="pruner"
Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.845702 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tbppj"
Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.850216 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tbppj"]
Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.878198 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/517c44d7-5a31-4d7c-9918-9e051f06902c-utilities\") pod \"redhat-operators-tbppj\" (UID: \"517c44d7-5a31-4d7c-9918-9e051f06902c\") " pod="openshift-marketplace/redhat-operators-tbppj"
Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.878522 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wk4tj\" (UniqueName: \"kubernetes.io/projected/517c44d7-5a31-4d7c-9918-9e051f06902c-kube-api-access-wk4tj\") pod \"redhat-operators-tbppj\" (UID: \"517c44d7-5a31-4d7c-9918-9e051f06902c\") " pod="openshift-marketplace/redhat-operators-tbppj"
Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.878589 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/517c44d7-5a31-4d7c-9918-9e051f06902c-catalog-content\") pod \"redhat-operators-tbppj\" (UID: \"517c44d7-5a31-4d7c-9918-9e051f06902c\") " pod="openshift-marketplace/redhat-operators-tbppj"
Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.979766 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/517c44d7-5a31-4d7c-9918-9e051f06902c-utilities\") pod \"redhat-operators-tbppj\" (UID: \"517c44d7-5a31-4d7c-9918-9e051f06902c\") " pod="openshift-marketplace/redhat-operators-tbppj"
Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.979820 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wk4tj\" (UniqueName: \"kubernetes.io/projected/517c44d7-5a31-4d7c-9918-9e051f06902c-kube-api-access-wk4tj\") pod \"redhat-operators-tbppj\" (UID: \"517c44d7-5a31-4d7c-9918-9e051f06902c\") " pod="openshift-marketplace/redhat-operators-tbppj"
Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.979883 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/517c44d7-5a31-4d7c-9918-9e051f06902c-catalog-content\") pod \"redhat-operators-tbppj\" (UID: \"517c44d7-5a31-4d7c-9918-9e051f06902c\") " pod="openshift-marketplace/redhat-operators-tbppj"
Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.980408 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/517c44d7-5a31-4d7c-9918-9e051f06902c-utilities\") pod \"redhat-operators-tbppj\" (UID: \"517c44d7-5a31-4d7c-9918-9e051f06902c\") " pod="openshift-marketplace/redhat-operators-tbppj"
Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.983384 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/517c44d7-5a31-4d7c-9918-9e051f06902c-catalog-content\") pod \"redhat-operators-tbppj\" (UID: \"517c44d7-5a31-4d7c-9918-9e051f06902c\") " pod="openshift-marketplace/redhat-operators-tbppj"
Jan 30 13:06:32 crc kubenswrapper[5039]: I0130 13:06:32.006701 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wk4tj\" (UniqueName: \"kubernetes.io/projected/517c44d7-5a31-4d7c-9918-9e051f06902c-kube-api-access-wk4tj\") pod \"redhat-operators-tbppj\" (UID: \"517c44d7-5a31-4d7c-9918-9e051f06902c\") " pod="openshift-marketplace/redhat-operators-tbppj"
Jan 30 13:06:32 crc kubenswrapper[5039]: I0130 13:06:32.129292 5039 patch_prober.go:28] interesting pod/router-default-5444994796-jplg4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 13:06:32 crc kubenswrapper[5039]: [-]has-synced failed: reason withheld
Jan 30 13:06:32 crc kubenswrapper[5039]: [+]process-running ok
Jan 30 13:06:32 crc kubenswrapper[5039]: healthz check failed
Jan 30 13:06:32 crc kubenswrapper[5039]: I0130 13:06:32.129371 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jplg4" podUID="1fbf2594-31f8-4172-85ba-4a63a6d18fa6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 13:06:32 crc kubenswrapper[5039]: I0130 13:06:32.149702 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Jan 30 13:06:32 crc kubenswrapper[5039]: I0130 13:06:32.223380 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tbppj"
Jan 30 13:06:32 crc kubenswrapper[5039]: I0130 13:06:32.273483 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gx2hg"]
Jan 30 13:06:32 crc kubenswrapper[5039]: W0130 13:06:32.295520 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc79ca838_03cc_4885_969d_5aad41173112.slice/crio-3097672ce88e5fa29b1caf55655914e66f0a17399e7f2f41db99c8032223a7a3 WatchSource:0}: Error finding container 3097672ce88e5fa29b1caf55655914e66f0a17399e7f2f41db99c8032223a7a3: Status 404 returned error can't find the container with id 3097672ce88e5fa29b1caf55655914e66f0a17399e7f2f41db99c8032223a7a3
Jan 30 13:06:32 crc kubenswrapper[5039]: I0130 13:06:32.346394 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gx2hg" event={"ID":"c79ca838-03cc-4885-969d-5aad41173112","Type":"ContainerStarted","Data":"3097672ce88e5fa29b1caf55655914e66f0a17399e7f2f41db99c8032223a7a3"}
Jan 30 13:06:32 crc kubenswrapper[5039]: I0130 13:06:32.349196 5039 generic.go:334] "Generic (PLEG): container finished" podID="312988e0-14fa-43e6-9d03-7c693e868f09" containerID="a9c05e1fefe9c25b182a06957f72f1eb6748f8376afaf8816413e8a36780db31" exitCode=0
Jan 30 13:06:32 crc kubenswrapper[5039]: I0130 13:06:32.349302 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"312988e0-14fa-43e6-9d03-7c693e868f09","Type":"ContainerDied","Data":"a9c05e1fefe9c25b182a06957f72f1eb6748f8376afaf8816413e8a36780db31"}
Jan 30 13:06:32 crc kubenswrapper[5039]: I0130 13:06:32.358192 5039 generic.go:334] "Generic (PLEG): container finished" podID="80cb63fe-71b1-42e7-ac04-a81c89920b46" containerID="f1d45b76a5b67ccfa917a8b401f244e595e4b7f91f2fe244b19d4b28ec51ede2" exitCode=0
Jan 30 13:06:32 crc kubenswrapper[5039]: I0130 13:06:32.358265 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-759rj" event={"ID":"80cb63fe-71b1-42e7-ac04-a81c89920b46","Type":"ContainerDied","Data":"f1d45b76a5b67ccfa917a8b401f244e595e4b7f91f2fe244b19d4b28ec51ede2"}
Jan 30 13:06:32 crc kubenswrapper[5039]: I0130 13:06:32.358288 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-759rj" event={"ID":"80cb63fe-71b1-42e7-ac04-a81c89920b46","Type":"ContainerStarted","Data":"90c64b07023f646350f17195d3f4849d52b2111fa319dd68d741c4086232a39d"}
Jan 30 13:06:32 crc kubenswrapper[5039]: I0130 13:06:32.373815 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 30 13:06:32 crc kubenswrapper[5039]: I0130 13:06:32.375230 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"ff95d9f7-8598-4335-9969-2de81a196a92","Type":"ContainerDied","Data":"31cd39856e7265e9a83b1f9518b7f0010e9c9cca5734b4e995c775b9bd6e9894"}
Jan 30 13:06:32 crc kubenswrapper[5039]: I0130 13:06:32.375278 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31cd39856e7265e9a83b1f9518b7f0010e9c9cca5734b4e995c775b9bd6e9894"
Jan 30 13:06:32 crc kubenswrapper[5039]: I0130 13:06:32.490417 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tbppj"]
Jan 30 13:06:32 crc kubenswrapper[5039]: I0130 13:06:32.751444 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-8cgg4"
Jan 30 13:06:32 crc kubenswrapper[5039]: I0130 13:06:32.759752 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-8cgg4"
Jan 30 13:06:33 crc kubenswrapper[5039]: I0130 13:06:33.137165 5039 patch_prober.go:28] interesting pod/router-default-5444994796-jplg4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 13:06:33 crc kubenswrapper[5039]: [-]has-synced failed: reason withheld
Jan 30 13:06:33 crc kubenswrapper[5039]: [+]process-running ok
Jan 30 13:06:33 crc kubenswrapper[5039]: healthz check failed
Jan 30 13:06:33 crc kubenswrapper[5039]: I0130 13:06:33.137235 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jplg4" podUID="1fbf2594-31f8-4172-85ba-4a63a6d18fa6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 13:06:33 crc kubenswrapper[5039]: I0130 13:06:33.432500 5039 generic.go:334] "Generic (PLEG): container finished" podID="c79ca838-03cc-4885-969d-5aad41173112" containerID="1ffdf1e37bf86690691aed60fdd25d24313eff63f2375efb66dc5939b4af438d" exitCode=0
Jan 30 13:06:33 crc kubenswrapper[5039]: I0130 13:06:33.432574 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gx2hg" event={"ID":"c79ca838-03cc-4885-969d-5aad41173112","Type":"ContainerDied","Data":"1ffdf1e37bf86690691aed60fdd25d24313eff63f2375efb66dc5939b4af438d"}
Jan 30 13:06:33 crc kubenswrapper[5039]: I0130 13:06:33.451945 5039 generic.go:334] "Generic (PLEG): container finished" podID="517c44d7-5a31-4d7c-9918-9e051f06902c" containerID="2301f8d52aa86a717ffadb8853e293c3e6956f6bb63c70fb92321bd93ab3fb41" exitCode=0
Jan 30 13:06:33 crc kubenswrapper[5039]: I0130 13:06:33.452598
5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tbppj" event={"ID":"517c44d7-5a31-4d7c-9918-9e051f06902c","Type":"ContainerDied","Data":"2301f8d52aa86a717ffadb8853e293c3e6956f6bb63c70fb92321bd93ab3fb41"} Jan 30 13:06:33 crc kubenswrapper[5039]: I0130 13:06:33.452628 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tbppj" event={"ID":"517c44d7-5a31-4d7c-9918-9e051f06902c","Type":"ContainerStarted","Data":"0120e2b5056f23bbdd97f8dbe8160ca27ed1242a594d4e9cbac4c7a337642502"} Jan 30 13:06:33 crc kubenswrapper[5039]: I0130 13:06:33.618858 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-lgzmc" Jan 30 13:06:33 crc kubenswrapper[5039]: I0130 13:06:33.943693 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 13:06:34 crc kubenswrapper[5039]: I0130 13:06:34.013896 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/312988e0-14fa-43e6-9d03-7c693e868f09-kubelet-dir\") pod \"312988e0-14fa-43e6-9d03-7c693e868f09\" (UID: \"312988e0-14fa-43e6-9d03-7c693e868f09\") " Jan 30 13:06:34 crc kubenswrapper[5039]: I0130 13:06:34.014464 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/312988e0-14fa-43e6-9d03-7c693e868f09-kube-api-access\") pod \"312988e0-14fa-43e6-9d03-7c693e868f09\" (UID: \"312988e0-14fa-43e6-9d03-7c693e868f09\") " Jan 30 13:06:34 crc kubenswrapper[5039]: I0130 13:06:34.015654 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/312988e0-14fa-43e6-9d03-7c693e868f09-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "312988e0-14fa-43e6-9d03-7c693e868f09" (UID: "312988e0-14fa-43e6-9d03-7c693e868f09"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:06:34 crc kubenswrapper[5039]: I0130 13:06:34.037528 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/312988e0-14fa-43e6-9d03-7c693e868f09-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "312988e0-14fa-43e6-9d03-7c693e868f09" (UID: "312988e0-14fa-43e6-9d03-7c693e868f09"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:06:34 crc kubenswrapper[5039]: I0130 13:06:34.116469 5039 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/312988e0-14fa-43e6-9d03-7c693e868f09-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 13:06:34 crc kubenswrapper[5039]: I0130 13:06:34.116537 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/312988e0-14fa-43e6-9d03-7c693e868f09-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 13:06:34 crc kubenswrapper[5039]: I0130 13:06:34.138213 5039 patch_prober.go:28] interesting pod/router-default-5444994796-jplg4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 13:06:34 crc kubenswrapper[5039]: [+]has-synced ok Jan 30 13:06:34 crc kubenswrapper[5039]: [+]process-running ok Jan 30 13:06:34 crc kubenswrapper[5039]: healthz check failed Jan 30 13:06:34 crc kubenswrapper[5039]: I0130 13:06:34.138274 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jplg4" podUID="1fbf2594-31f8-4172-85ba-4a63a6d18fa6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 13:06:34 crc kubenswrapper[5039]: I0130 13:06:34.497426 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"312988e0-14fa-43e6-9d03-7c693e868f09","Type":"ContainerDied","Data":"33cd5faec2028159378df27fb45a51d5630cc2c3f91061cf7c92b001f77b770b"} Jan 30 13:06:34 crc kubenswrapper[5039]: I0130 13:06:34.497477 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33cd5faec2028159378df27fb45a51d5630cc2c3f91061cf7c92b001f77b770b" Jan 30 13:06:34 crc kubenswrapper[5039]: I0130 13:06:34.497553 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 13:06:35 crc kubenswrapper[5039]: I0130 13:06:35.130746 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-jplg4" Jan 30 13:06:35 crc kubenswrapper[5039]: I0130 13:06:35.134262 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-jplg4" Jan 30 13:06:37 crc kubenswrapper[5039]: I0130 13:06:37.355966 5039 patch_prober.go:28] interesting pod/console-f9d7485db-2cmnb container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.23:8443/health\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Jan 30 13:06:37 crc kubenswrapper[5039]: I0130 13:06:37.356350 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-2cmnb" podUID="c8a9040d-c9a7-48df-a786-0079713a7cdc" containerName="console" probeResult="failure" output="Get \"https://10.217.0.23:8443/health\": dial tcp 10.217.0.23:8443: connect: connection refused" Jan 30 13:06:37 crc kubenswrapper[5039]: I0130 13:06:37.686545 5039 patch_prober.go:28] interesting pod/downloads-7954f5f757-ddw7q container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 30 13:06:37 crc kubenswrapper[5039]: I0130 13:06:37.686927 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ddw7q" podUID="af4a4ae0-0967-4331-971c-d7e44b45a031" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 30 13:06:37 crc kubenswrapper[5039]: I0130 13:06:37.686773 5039 patch_prober.go:28] interesting pod/downloads-7954f5f757-ddw7q container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 30 13:06:37 crc kubenswrapper[5039]: I0130 13:06:37.687024 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-ddw7q" podUID="af4a4ae0-0967-4331-971c-d7e44b45a031" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 30 13:06:37 crc kubenswrapper[5039]: I0130 13:06:37.742424 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:06:37 crc kubenswrapper[5039]: I0130 13:06:37.742558 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:06:39 crc kubenswrapper[5039]: I0130 13:06:39.408905 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bc3a6c18-bb1a-48e2-bc11-51e442967f6e-metrics-certs\") pod 
\"network-metrics-daemon-5qzx7\" (UID: \"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\") " pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:06:39 crc kubenswrapper[5039]: I0130 13:06:39.417172 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bc3a6c18-bb1a-48e2-bc11-51e442967f6e-metrics-certs\") pod \"network-metrics-daemon-5qzx7\" (UID: \"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\") " pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:06:39 crc kubenswrapper[5039]: I0130 13:06:39.454107 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:06:45 crc kubenswrapper[5039]: I0130 13:06:45.433076 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-5qzx7"] Jan 30 13:06:45 crc kubenswrapper[5039]: I0130 13:06:45.575326 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5qzx7" event={"ID":"bc3a6c18-bb1a-48e2-bc11-51e442967f6e","Type":"ContainerStarted","Data":"c95660d06c6d31fa82d2138da8b9d988a3344464138beaf6712a27f6de6dd79b"} Jan 30 13:06:46 crc kubenswrapper[5039]: I0130 13:06:46.581852 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5qzx7" event={"ID":"bc3a6c18-bb1a-48e2-bc11-51e442967f6e","Type":"ContainerStarted","Data":"954dc548c21d6cfb4748ee5e6ed1ff93d2c6b45d01fd71597cd0b64ec7c120a8"} Jan 30 13:06:47 crc kubenswrapper[5039]: I0130 13:06:47.692071 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-ddw7q" Jan 30 13:06:47 crc kubenswrapper[5039]: I0130 13:06:47.743905 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-2cmnb" Jan 30 13:06:47 crc kubenswrapper[5039]: I0130 13:06:47.748809 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-2cmnb" Jan 30 13:06:50 crc kubenswrapper[5039]: I0130 13:06:50.518503 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:58 crc kubenswrapper[5039]: I0130 13:06:58.022915 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xpdwb" Jan 30 13:07:04 crc kubenswrapper[5039]: I0130 13:07:04.304371 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:07:07 crc kubenswrapper[5039]: I0130 13:07:07.742134 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:07:07 crc kubenswrapper[5039]: I0130 13:07:07.742505 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:07:09 crc kubenswrapper[5039]: I0130 13:07:09.376417 5039 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 30 13:07:09 crc kubenswrapper[5039]: E0130 13:07:09.376774 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="312988e0-14fa-43e6-9d03-7c693e868f09" containerName="pruner" Jan 30 13:07:09 crc kubenswrapper[5039]: I0130 13:07:09.376804 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="312988e0-14fa-43e6-9d03-7c693e868f09" containerName="pruner" Jan 30 13:07:09 crc kubenswrapper[5039]: I0130 13:07:09.377098 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="312988e0-14fa-43e6-9d03-7c693e868f09" containerName="pruner" Jan 30 13:07:09 crc kubenswrapper[5039]: I0130 13:07:09.377694 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 13:07:09 crc kubenswrapper[5039]: I0130 13:07:09.381783 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ae654c46-c11d-44b1-beac-1dd7bcb6b824-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ae654c46-c11d-44b1-beac-1dd7bcb6b824\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 13:07:09 crc kubenswrapper[5039]: I0130 13:07:09.381836 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ae654c46-c11d-44b1-beac-1dd7bcb6b824-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ae654c46-c11d-44b1-beac-1dd7bcb6b824\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 13:07:09 crc kubenswrapper[5039]: I0130 13:07:09.385585 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 30 13:07:09 crc kubenswrapper[5039]: I0130 13:07:09.385774 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 30 13:07:09 crc kubenswrapper[5039]: I0130 13:07:09.388820 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 30 13:07:09 crc kubenswrapper[5039]: I0130 13:07:09.482725 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ae654c46-c11d-44b1-beac-1dd7bcb6b824-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ae654c46-c11d-44b1-beac-1dd7bcb6b824\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 13:07:09 crc kubenswrapper[5039]: I0130 13:07:09.482839 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ae654c46-c11d-44b1-beac-1dd7bcb6b824-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ae654c46-c11d-44b1-beac-1dd7bcb6b824\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 13:07:09 crc kubenswrapper[5039]: I0130 13:07:09.482932 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ae654c46-c11d-44b1-beac-1dd7bcb6b824-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ae654c46-c11d-44b1-beac-1dd7bcb6b824\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 13:07:09 crc kubenswrapper[5039]: I0130 13:07:09.512490 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/ae654c46-c11d-44b1-beac-1dd7bcb6b824-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ae654c46-c11d-44b1-beac-1dd7bcb6b824\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 13:07:09 crc kubenswrapper[5039]: I0130 13:07:09.709062 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 13:07:11 crc kubenswrapper[5039]: E0130 13:07:11.179922 5039 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 30 13:07:11 crc kubenswrapper[5039]: E0130 13:07:11.180257 5039 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mckmz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-gx2hg_openshift-marketplace(c79ca838-03cc-4885-969d-5aad41173112): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 13:07:11 crc kubenswrapper[5039]: E0130 13:07:11.182282 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-gx2hg" podUID="c79ca838-03cc-4885-969d-5aad41173112" Jan 30 13:07:11 crc kubenswrapper[5039]: E0130 13:07:11.340596 5039 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 30 13:07:11 crc kubenswrapper[5039]: E0130 13:07:11.340846 5039 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wk4tj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-tbppj_openshift-marketplace(517c44d7-5a31-4d7c-9918-9e051f06902c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 13:07:11 crc kubenswrapper[5039]: E0130 13:07:11.342308 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-tbppj" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" Jan 30 13:07:14 crc kubenswrapper[5039]: I0130 13:07:14.769061 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 30 13:07:14 crc kubenswrapper[5039]: I0130 13:07:14.770979 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 30 13:07:14 crc kubenswrapper[5039]: I0130 13:07:14.783452 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 30 13:07:14 crc kubenswrapper[5039]: I0130 13:07:14.884696 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ca49ca55-f345-46b7-9d6d-26b96fbaacf2-kubelet-dir\") pod \"installer-9-crc\" (UID: \"ca49ca55-f345-46b7-9d6d-26b96fbaacf2\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 13:07:14 crc kubenswrapper[5039]: I0130 13:07:14.884755 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ca49ca55-f345-46b7-9d6d-26b96fbaacf2-kube-api-access\") pod \"installer-9-crc\" (UID: \"ca49ca55-f345-46b7-9d6d-26b96fbaacf2\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 13:07:14 crc kubenswrapper[5039]: I0130 13:07:14.884800 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ca49ca55-f345-46b7-9d6d-26b96fbaacf2-var-lock\") pod \"installer-9-crc\" (UID: \"ca49ca55-f345-46b7-9d6d-26b96fbaacf2\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 13:07:14 crc kubenswrapper[5039]: I0130 13:07:14.985627 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ca49ca55-f345-46b7-9d6d-26b96fbaacf2-kubelet-dir\") pod \"installer-9-crc\" (UID: \"ca49ca55-f345-46b7-9d6d-26b96fbaacf2\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 13:07:14 crc kubenswrapper[5039]: I0130 13:07:14.985746 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ca49ca55-f345-46b7-9d6d-26b96fbaacf2-kubelet-dir\") pod \"installer-9-crc\" (UID: \"ca49ca55-f345-46b7-9d6d-26b96fbaacf2\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 13:07:14 crc kubenswrapper[5039]: I0130 13:07:14.985935 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ca49ca55-f345-46b7-9d6d-26b96fbaacf2-kube-api-access\") pod \"installer-9-crc\" (UID: \"ca49ca55-f345-46b7-9d6d-26b96fbaacf2\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 13:07:14 crc kubenswrapper[5039]: I0130 13:07:14.985992 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ca49ca55-f345-46b7-9d6d-26b96fbaacf2-var-lock\") pod \"installer-9-crc\" (UID: \"ca49ca55-f345-46b7-9d6d-26b96fbaacf2\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 13:07:14 crc kubenswrapper[5039]: I0130 13:07:14.986101 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ca49ca55-f345-46b7-9d6d-26b96fbaacf2-var-lock\") pod \"installer-9-crc\" (UID: \"ca49ca55-f345-46b7-9d6d-26b96fbaacf2\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 13:07:15 crc kubenswrapper[5039]: I0130 13:07:15.001591 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ca49ca55-f345-46b7-9d6d-26b96fbaacf2-kube-api-access\") pod \"installer-9-crc\" (UID: 
\"ca49ca55-f345-46b7-9d6d-26b96fbaacf2\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 13:07:15 crc kubenswrapper[5039]: I0130 13:07:15.205992 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 30 13:07:17 crc kubenswrapper[5039]: E0130 13:07:17.305894 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gx2hg" podUID="c79ca838-03cc-4885-969d-5aad41173112" Jan 30 13:07:17 crc kubenswrapper[5039]: E0130 13:07:17.305957 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-tbppj" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" Jan 30 13:07:22 crc kubenswrapper[5039]: E0130 13:07:22.622490 5039 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 30 13:07:22 crc kubenswrapper[5039]: E0130 13:07:22.623004 5039 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7p26g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-s5lrd_openshift-marketplace(5613a050-2fc6-4554-bebe-a8afa71c3815): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 13:07:22 crc kubenswrapper[5039]: E0130 13:07:22.624294 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying 
config: context canceled\"" pod="openshift-marketplace/certified-operators-s5lrd" podUID="5613a050-2fc6-4554-bebe-a8afa71c3815" Jan 30 13:07:28 crc kubenswrapper[5039]: E0130 13:07:28.118128 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-s5lrd" podUID="5613a050-2fc6-4554-bebe-a8afa71c3815" Jan 30 13:07:29 crc kubenswrapper[5039]: E0130 13:07:29.902697 5039 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 30 13:07:29 crc kubenswrapper[5039]: E0130 13:07:29.902841 5039 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2692s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-759rj_openshift-marketplace(80cb63fe-71b1-42e7-ac04-a81c89920b46): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 13:07:29 crc kubenswrapper[5039]: E0130 13:07:29.904034 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-759rj" podUID="80cb63fe-71b1-42e7-ac04-a81c89920b46" Jan 30 13:07:37 crc kubenswrapper[5039]: I0130 13:07:37.742493 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:07:37 crc kubenswrapper[5039]: I0130 
13:07:37.742901 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:07:37 crc kubenswrapper[5039]: I0130 13:07:37.742973 5039 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" Jan 30 13:07:37 crc kubenswrapper[5039]: I0130 13:07:37.743770 5039 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90"} pod="openshift-machine-config-operator/machine-config-daemon-t2btn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 13:07:37 crc kubenswrapper[5039]: I0130 13:07:37.743938 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" containerID="cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90" gracePeriod=600 Jan 30 13:07:38 crc kubenswrapper[5039]: E0130 13:07:38.041104 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-759rj" podUID="80cb63fe-71b1-42e7-ac04-a81c89920b46" Jan 30 13:07:38 crc kubenswrapper[5039]: E0130 13:07:38.461914 5039 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 30 13:07:38 crc kubenswrapper[5039]: E0130 13:07:38.462478 5039 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f5vr6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-ccjvb_openshift-marketplace(66476d2f-ef08-4051-97a8-c2edb46b7004): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 13:07:38 crc kubenswrapper[5039]: E0130 13:07:38.463686 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-ccjvb" podUID="66476d2f-ef08-4051-97a8-c2edb46b7004" Jan 30 13:07:38 crc kubenswrapper[5039]: I0130 13:07:38.548593 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 30 13:07:38 crc kubenswrapper[5039]: I0130 13:07:38.583913 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 30 13:07:39 crc kubenswrapper[5039]: I0130 13:07:39.868496 5039 generic.go:334] "Generic (PLEG): container finished" podID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerID="008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90" exitCode=0 Jan 30 13:07:39 crc kubenswrapper[5039]: I0130 13:07:39.868582 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerDied","Data":"008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90"} Jan 30 13:07:40 crc kubenswrapper[5039]: E0130 13:07:40.053502 5039 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 30 13:07:40 crc kubenswrapper[5039]: E0130 13:07:40.053985 5039 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nlntp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-gqxts_openshift-marketplace(63af1747-5ca2-4c06-89fa-dc040184452d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 13:07:40 crc kubenswrapper[5039]: E0130 13:07:40.055394 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-gqxts" podUID="63af1747-5ca2-4c06-89fa-dc040184452d" Jan 30 13:07:40 crc kubenswrapper[5039]: E0130 13:07:40.971221 5039 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 30 13:07:40 crc kubenswrapper[5039]: E0130 13:07:40.971375 5039 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r8txw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-prfhj_openshift-marketplace(52b110b9-c1bb-4f99-b0a1-56327188c912): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 13:07:40 crc kubenswrapper[5039]: E0130 13:07:40.973164 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-prfhj" podUID="52b110b9-c1bb-4f99-b0a1-56327188c912" Jan 30 13:07:41 crc kubenswrapper[5039]: E0130 13:07:41.367207 5039 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 30 13:07:41 crc kubenswrapper[5039]: E0130 13:07:41.367765 5039 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-svlb7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-wksws_openshift-marketplace(f64e1921-5488-46f8-bf3a-af141cd0c277): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 30 13:07:41 crc kubenswrapper[5039]: E0130 13:07:41.369187 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-wksws" podUID="f64e1921-5488-46f8-bf3a-af141cd0c277"
Jan 30 13:07:41 crc kubenswrapper[5039]: E0130 13:07:41.498705 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-gqxts" podUID="63af1747-5ca2-4c06-89fa-dc040184452d"
Jan 30 13:07:41 crc kubenswrapper[5039]: E0130 13:07:41.499278 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-ccjvb" podUID="66476d2f-ef08-4051-97a8-c2edb46b7004"
Jan 30 13:07:41 crc kubenswrapper[5039]: I0130 13:07:41.880504 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ca49ca55-f345-46b7-9d6d-26b96fbaacf2","Type":"ContainerStarted","Data":"7db4b59c7f1ed7a9be7e115e6808c7be685b8b03708b1786becb5debb32c72da"}
Jan 30 13:07:41 crc kubenswrapper[5039]: I0130 13:07:41.881551 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"ae654c46-c11d-44b1-beac-1dd7bcb6b824","Type":"ContainerStarted","Data":"beb9d9d2678efae190310ffd24543689be59da860b48e54c234fe5983b63a628"}
Jan 30 13:07:41 crc kubenswrapper[5039]: E0130 13:07:41.883026 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-prfhj" podUID="52b110b9-c1bb-4f99-b0a1-56327188c912"
Jan 30 13:07:41 crc kubenswrapper[5039]: E0130 13:07:41.883473 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-wksws" podUID="f64e1921-5488-46f8-bf3a-af141cd0c277"
Jan 30 13:07:42 crc kubenswrapper[5039]: I0130 13:07:42.888391 5039 generic.go:334] "Generic (PLEG): container finished" podID="517c44d7-5a31-4d7c-9918-9e051f06902c" containerID="22276cc2d1c579b7152f9b8a26ce3c33abca096c42567f84506866c4a659f316" exitCode=0
Jan 30 13:07:42 crc kubenswrapper[5039]: I0130 13:07:42.888463 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tbppj" event={"ID":"517c44d7-5a31-4d7c-9918-9e051f06902c","Type":"ContainerDied","Data":"22276cc2d1c579b7152f9b8a26ce3c33abca096c42567f84506866c4a659f316"}
Jan 30 13:07:42 crc kubenswrapper[5039]: I0130 13:07:42.891119 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ca49ca55-f345-46b7-9d6d-26b96fbaacf2","Type":"ContainerStarted","Data":"54cbb1305630e8c0a8de565e26b13b66ccc0a2cfb0d3b3e02a9c35da59cca93a"}
Jan 30 13:07:42 crc kubenswrapper[5039]: I0130 13:07:42.893105 5039 generic.go:334] "Generic (PLEG): container finished" podID="ae654c46-c11d-44b1-beac-1dd7bcb6b824" containerID="1b8eb06d22919a8077dcbf0a18e0fa6ddb0a76ce522bfb707c09b574ba5e4008" exitCode=0
Jan 30 13:07:42 crc kubenswrapper[5039]: I0130 13:07:42.893134 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"ae654c46-c11d-44b1-beac-1dd7bcb6b824","Type":"ContainerDied","Data":"1b8eb06d22919a8077dcbf0a18e0fa6ddb0a76ce522bfb707c09b574ba5e4008"}
Jan 30 13:07:42 crc kubenswrapper[5039]: I0130 13:07:42.895624 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerStarted","Data":"0547d064d7c4b7297a756320ff8227bd0d0a0f4e9eca68fc753c08aa07c16fca"}
Jan 30 13:07:42 crc kubenswrapper[5039]: I0130 13:07:42.898087 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5qzx7" event={"ID":"bc3a6c18-bb1a-48e2-bc11-51e442967f6e","Type":"ContainerStarted","Data":"1408bf879052e38cf853c761e7b1b806d70e487e8defedea744c677ff81f4738"}
Jan 30 13:07:42 crc kubenswrapper[5039]: I0130 13:07:42.933251 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-5qzx7" podStartSLOduration=206.933234874 podStartE2EDuration="3m26.933234874s" podCreationTimestamp="2026-01-30 13:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:07:42.930854439 +0000 UTC m=+227.591535686" watchObservedRunningTime="2026-01-30 13:07:42.933234874 +0000 UTC m=+227.593916091"
Jan 30 13:07:42 crc kubenswrapper[5039]: I0130 13:07:42.956980 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=28.956960283 podStartE2EDuration="28.956960283s" podCreationTimestamp="2026-01-30 13:07:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:07:42.954753774 +0000 UTC m=+227.615435011" watchObservedRunningTime="2026-01-30 13:07:42.956960283 +0000 UTC m=+227.617641510"
Jan 30 13:07:44 crc kubenswrapper[5039]: I0130 13:07:44.258927 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 13:07:44 crc kubenswrapper[5039]: I0130 13:07:44.404058 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ae654c46-c11d-44b1-beac-1dd7bcb6b824-kubelet-dir\") pod \"ae654c46-c11d-44b1-beac-1dd7bcb6b824\" (UID: \"ae654c46-c11d-44b1-beac-1dd7bcb6b824\") "
Jan 30 13:07:44 crc kubenswrapper[5039]: I0130 13:07:44.404197 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ae654c46-c11d-44b1-beac-1dd7bcb6b824-kube-api-access\") pod \"ae654c46-c11d-44b1-beac-1dd7bcb6b824\" (UID: \"ae654c46-c11d-44b1-beac-1dd7bcb6b824\") "
Jan 30 13:07:44 crc kubenswrapper[5039]: I0130 13:07:44.404258 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae654c46-c11d-44b1-beac-1dd7bcb6b824-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ae654c46-c11d-44b1-beac-1dd7bcb6b824" (UID: "ae654c46-c11d-44b1-beac-1dd7bcb6b824"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 13:07:44 crc kubenswrapper[5039]: I0130 13:07:44.404489 5039 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ae654c46-c11d-44b1-beac-1dd7bcb6b824-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 30 13:07:44 crc kubenswrapper[5039]: I0130 13:07:44.416438 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae654c46-c11d-44b1-beac-1dd7bcb6b824-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ae654c46-c11d-44b1-beac-1dd7bcb6b824" (UID: "ae654c46-c11d-44b1-beac-1dd7bcb6b824"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:07:44 crc kubenswrapper[5039]: I0130 13:07:44.506122 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ae654c46-c11d-44b1-beac-1dd7bcb6b824-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 30 13:07:44 crc kubenswrapper[5039]: I0130 13:07:44.910918 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"ae654c46-c11d-44b1-beac-1dd7bcb6b824","Type":"ContainerDied","Data":"beb9d9d2678efae190310ffd24543689be59da860b48e54c234fe5983b63a628"}
Jan 30 13:07:44 crc kubenswrapper[5039]: I0130 13:07:44.910956 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="beb9d9d2678efae190310ffd24543689be59da860b48e54c234fe5983b63a628"
Jan 30 13:07:44 crc kubenswrapper[5039]: I0130 13:07:44.910976 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 13:07:55 crc kubenswrapper[5039]: I0130 13:07:55.980791 5039 generic.go:334] "Generic (PLEG): container finished" podID="5613a050-2fc6-4554-bebe-a8afa71c3815" containerID="31a8df99c4e4455e61207edb146116c8775304223ec7f5f37937393f62718fa5" exitCode=0
Jan 30 13:07:55 crc kubenswrapper[5039]: I0130 13:07:55.980848 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s5lrd" event={"ID":"5613a050-2fc6-4554-bebe-a8afa71c3815","Type":"ContainerDied","Data":"31a8df99c4e4455e61207edb146116c8775304223ec7f5f37937393f62718fa5"}
Jan 30 13:07:55 crc kubenswrapper[5039]: I0130 13:07:55.984933 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gx2hg" event={"ID":"c79ca838-03cc-4885-969d-5aad41173112","Type":"ContainerStarted","Data":"447829a32e7581409f05ccc631f15a7a47837398e3a864e4a35279f1cda3e232"}
Jan 30 13:07:55 crc kubenswrapper[5039]: I0130 13:07:55.988981 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tbppj" event={"ID":"517c44d7-5a31-4d7c-9918-9e051f06902c","Type":"ContainerStarted","Data":"b08cf32d269a2ec1965ff4e55151985bfb1983375110d0c514cec8ea99b2848e"}
Jan 30 13:07:56 crc kubenswrapper[5039]: I0130 13:07:56.046750 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tbppj" podStartSLOduration=6.406579075 podStartE2EDuration="1m25.046734271s" podCreationTimestamp="2026-01-30 13:06:31 +0000 UTC" firstStartedPulling="2026-01-30 13:06:33.463212885 +0000 UTC m=+158.123894112" lastFinishedPulling="2026-01-30 13:07:52.103368081 +0000 UTC m=+236.764049308" observedRunningTime="2026-01-30 13:07:56.043827239 +0000 UTC m=+240.704508556" watchObservedRunningTime="2026-01-30 13:07:56.046734271 +0000 UTC m=+240.707415498"
Jan 30 13:07:56 crc kubenswrapper[5039]: I0130 13:07:56.999406 5039 generic.go:334] "Generic (PLEG): container finished" podID="c79ca838-03cc-4885-969d-5aad41173112" containerID="447829a32e7581409f05ccc631f15a7a47837398e3a864e4a35279f1cda3e232" exitCode=0
Jan 30 13:07:57 crc kubenswrapper[5039]: I0130 13:07:56.999487 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gx2hg" event={"ID":"c79ca838-03cc-4885-969d-5aad41173112","Type":"ContainerDied","Data":"447829a32e7581409f05ccc631f15a7a47837398e3a864e4a35279f1cda3e232"}
Jan 30 13:08:02 crc kubenswrapper[5039]: I0130 13:08:02.224974 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tbppj"
Jan 30 13:08:02 crc kubenswrapper[5039]: I0130 13:08:02.225360 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tbppj"
Jan 30 13:08:04 crc kubenswrapper[5039]: I0130 13:08:04.478440 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tbppj" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" containerName="registry-server" probeResult="failure" output=<
Jan 30 13:08:04 crc kubenswrapper[5039]: timeout: failed to connect service ":50051" within 1s
Jan 30 13:08:04 crc kubenswrapper[5039]: >
Jan 30 13:08:12 crc kubenswrapper[5039]: I0130 13:08:12.546682 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tbppj"
Jan 30 13:08:12 crc kubenswrapper[5039]: I0130 13:08:12.594904 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tbppj"
Jan 30 13:08:12 crc kubenswrapper[5039]: I0130 13:08:12.788258 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tbppj"]
Jan 30 13:08:14 crc kubenswrapper[5039]: I0130 13:08:14.110379 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tbppj" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" containerName="registry-server" containerID="cri-o://b08cf32d269a2ec1965ff4e55151985bfb1983375110d0c514cec8ea99b2848e" gracePeriod=2
Jan 30 13:08:18 crc kubenswrapper[5039]: I0130 13:08:18.135082 5039 generic.go:334] "Generic (PLEG): container finished" podID="517c44d7-5a31-4d7c-9918-9e051f06902c" containerID="b08cf32d269a2ec1965ff4e55151985bfb1983375110d0c514cec8ea99b2848e" exitCode=0
Jan 30 13:08:18 crc kubenswrapper[5039]: I0130 13:08:18.135182 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tbppj" event={"ID":"517c44d7-5a31-4d7c-9918-9e051f06902c","Type":"ContainerDied","Data":"b08cf32d269a2ec1965ff4e55151985bfb1983375110d0c514cec8ea99b2848e"}
Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.600130 5039 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 30 13:08:19 crc kubenswrapper[5039]: E0130 13:08:19.600455 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae654c46-c11d-44b1-beac-1dd7bcb6b824" containerName="pruner"
Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.600491 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae654c46-c11d-44b1-beac-1dd7bcb6b824" containerName="pruner"
Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.600652 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae654c46-c11d-44b1-beac-1dd7bcb6b824" containerName="pruner"
Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.601040 5039 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.601248 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.601365 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755" gracePeriod=15
Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.601458 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://4c085b7dbceda7ee340ac27580ace8fe47ea9455d4a64de6260121be5e836693" gracePeriod=15
Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.601675 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592" gracePeriod=15
Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.601690 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed" gracePeriod=15
Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.601737 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a" gracePeriod=15
Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.602622 5039 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 30 13:08:19 crc kubenswrapper[5039]: E0130 13:08:19.602840 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.602866 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Jan 30 13:08:19 crc kubenswrapper[5039]: E0130 13:08:19.602883 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.602894 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Jan 30 13:08:19 crc kubenswrapper[5039]: E0130 13:08:19.602907 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.602918 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.602942 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 13:08:19 crc kubenswrapper[5039]: E0130 13:08:19.602954 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.602964 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 13:08:19 crc kubenswrapper[5039]: E0130 13:08:19.602984 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.602993 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 30 13:08:19 crc kubenswrapper[5039]: E0130 13:08:19.603007 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.603040 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.603192 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.603210 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.603224 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.603237 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.603252 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.603266 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 30 13:08:19 crc kubenswrapper[5039]: E0130 13:08:19.603426 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.603441 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.603597 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 13:08:19 crc kubenswrapper[5039]: E0130 13:08:19.647814 5039 kubelet.go:1929] "Failed creating a mirror pod for" err="Post 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.188:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.775363 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.775452 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.775558 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.775634 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.775706 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.775750 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.775817 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.775839 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.877097 5039 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.877172 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.877196 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.877217 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.877242 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.877260 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.877270 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.877322 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.877289 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.877358 5039 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.877359 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.877375 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.877401 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.877461 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.877479 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.877496 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.949522 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:08:21 crc kubenswrapper[5039]: I0130 13:08:21.157668 5039 generic.go:334] "Generic (PLEG): container finished" podID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" containerID="54cbb1305630e8c0a8de565e26b13b66ccc0a2cfb0d3b3e02a9c35da59cca93a" exitCode=0 Jan 30 13:08:21 crc kubenswrapper[5039]: I0130 13:08:21.157784 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ca49ca55-f345-46b7-9d6d-26b96fbaacf2","Type":"ContainerDied","Data":"54cbb1305630e8c0a8de565e26b13b66ccc0a2cfb0d3b3e02a9c35da59cca93a"} Jan 30 13:08:21 crc kubenswrapper[5039]: I0130 13:08:21.158626 5039 status_manager.go:851] "Failed to get status for pod" podUID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:21 crc kubenswrapper[5039]: I0130 13:08:21.163555 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 30 13:08:21 crc kubenswrapper[5039]: I0130 13:08:21.165557 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 13:08:21 crc kubenswrapper[5039]: I0130 13:08:21.166491 5039 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592" exitCode=0 Jan 30 13:08:21 crc kubenswrapper[5039]: I0130 13:08:21.166525 5039 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed" exitCode=2 Jan 30 13:08:22 crc kubenswrapper[5039]: I0130 13:08:22.176389 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 30 13:08:22 crc kubenswrapper[5039]: I0130 13:08:22.179961 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 13:08:22 crc kubenswrapper[5039]: I0130 13:08:22.180881 5039 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4c085b7dbceda7ee340ac27580ace8fe47ea9455d4a64de6260121be5e836693" exitCode=0 Jan 30 13:08:22 crc kubenswrapper[5039]: I0130 13:08:22.180939 5039 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a" exitCode=0 Jan 30 13:08:22 crc kubenswrapper[5039]: I0130 13:08:22.181291 5039 scope.go:117] "RemoveContainer" containerID="6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527" Jan 30 13:08:22 crc kubenswrapper[5039]: E0130 13:08:22.225792 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b08cf32d269a2ec1965ff4e55151985bfb1983375110d0c514cec8ea99b2848e is running failed: container process not found" 
containerID="b08cf32d269a2ec1965ff4e55151985bfb1983375110d0c514cec8ea99b2848e" cmd=["grpc_health_probe","-addr=:50051"] Jan 30 13:08:22 crc kubenswrapper[5039]: E0130 13:08:22.226283 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b08cf32d269a2ec1965ff4e55151985bfb1983375110d0c514cec8ea99b2848e is running failed: container process not found" containerID="b08cf32d269a2ec1965ff4e55151985bfb1983375110d0c514cec8ea99b2848e" cmd=["grpc_health_probe","-addr=:50051"] Jan 30 13:08:22 crc kubenswrapper[5039]: E0130 13:08:22.226820 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b08cf32d269a2ec1965ff4e55151985bfb1983375110d0c514cec8ea99b2848e is running failed: container process not found" containerID="b08cf32d269a2ec1965ff4e55151985bfb1983375110d0c514cec8ea99b2848e" cmd=["grpc_health_probe","-addr=:50051"] Jan 30 13:08:22 crc kubenswrapper[5039]: E0130 13:08:22.226871 5039 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b08cf32d269a2ec1965ff4e55151985bfb1983375110d0c514cec8ea99b2848e is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-tbppj" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" containerName="registry-server" Jan 30 13:08:22 crc kubenswrapper[5039]: E0130 13:08:22.227828 5039 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.188:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-operators-tbppj.188f842bcc5b88fd openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-operators-tbppj,UID:517c44d7-5a31-4d7c-9918-9e051f06902c,APIVersion:v1,ResourceVersion:28725,FieldPath:spec.containers{registry-server},},Reason:Unhealthy,Message:Readiness probe errored: rpc error: code = NotFound desc = container is not created or running: checking if PID of b08cf32d269a2ec1965ff4e55151985bfb1983375110d0c514cec8ea99b2848e is running failed: container process not found,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 13:08:22.226905341 +0000 UTC m=+266.887586578,LastTimestamp:2026-01-30 13:08:22.226905341 +0000 UTC m=+266.887586578,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 13:08:22 crc kubenswrapper[5039]: I0130 13:08:22.897691 5039 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 13:08:22 crc kubenswrapper[5039]: I0130 13:08:22.897691 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 13:08:22 crc kubenswrapper[5039]: I0130 13:08:22.898815 5039 status_manager.go:851] "Failed to get status for pod" podUID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused"
Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.022102 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ca49ca55-f345-46b7-9d6d-26b96fbaacf2-kube-api-access\") pod \"ca49ca55-f345-46b7-9d6d-26b96fbaacf2\" (UID: \"ca49ca55-f345-46b7-9d6d-26b96fbaacf2\") "
Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.022159 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ca49ca55-f345-46b7-9d6d-26b96fbaacf2-kubelet-dir\") pod \"ca49ca55-f345-46b7-9d6d-26b96fbaacf2\" (UID: \"ca49ca55-f345-46b7-9d6d-26b96fbaacf2\") "
Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.022228 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ca49ca55-f345-46b7-9d6d-26b96fbaacf2-var-lock\") pod \"ca49ca55-f345-46b7-9d6d-26b96fbaacf2\" (UID: \"ca49ca55-f345-46b7-9d6d-26b96fbaacf2\") "
Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.022218 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ca49ca55-f345-46b7-9d6d-26b96fbaacf2-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ca49ca55-f345-46b7-9d6d-26b96fbaacf2" (UID: "ca49ca55-f345-46b7-9d6d-26b96fbaacf2"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.022347 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ca49ca55-f345-46b7-9d6d-26b96fbaacf2-var-lock" (OuterVolumeSpecName: "var-lock") pod "ca49ca55-f345-46b7-9d6d-26b96fbaacf2" (UID: "ca49ca55-f345-46b7-9d6d-26b96fbaacf2"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.022721 5039 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ca49ca55-f345-46b7-9d6d-26b96fbaacf2-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.022748 5039 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ca49ca55-f345-46b7-9d6d-26b96fbaacf2-var-lock\") on node \"crc\" DevicePath \"\""
Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.033322 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca49ca55-f345-46b7-9d6d-26b96fbaacf2-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ca49ca55-f345-46b7-9d6d-26b96fbaacf2" (UID: "ca49ca55-f345-46b7-9d6d-26b96fbaacf2"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.083941 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tbppj"
Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.084621 5039 status_manager.go:851] "Failed to get status for pod" podUID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused"
Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.087253 5039 status_manager.go:851] "Failed to get status for pod" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" pod="openshift-marketplace/redhat-operators-tbppj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tbppj\": dial tcp 38.102.83.188:6443: connect: connection refused"
Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.097450 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.099929 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.100677 5039 status_manager.go:851] "Failed to get status for pod" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" pod="openshift-marketplace/redhat-operators-tbppj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tbppj\": dial tcp 38.102.83.188:6443: connect: connection refused"
Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.101223 5039 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.188:6443: connect: connection refused"
Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.101657 5039 status_manager.go:851] "Failed to get status for pod" podUID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused"
Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.124957 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ca49ca55-f345-46b7-9d6d-26b96fbaacf2-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.187286 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.187355 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ca49ca55-f345-46b7-9d6d-26b96fbaacf2","Type":"ContainerDied","Data":"7db4b59c7f1ed7a9be7e115e6808c7be685b8b03708b1786becb5debb32c72da"}
Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.187508 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7db4b59c7f1ed7a9be7e115e6808c7be685b8b03708b1786becb5debb32c72da"
Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.192136 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.193147 5039 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755" exitCode=0
Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.193296 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.197477 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tbppj" event={"ID":"517c44d7-5a31-4d7c-9918-9e051f06902c","Type":"ContainerDied","Data":"0120e2b5056f23bbdd97f8dbe8160ca27ed1242a594d4e9cbac4c7a337642502"}
Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.197563 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tbppj"
Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.198354 5039 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.188:6443: connect: connection refused"
Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.199163 5039 status_manager.go:851] "Failed to get status for pod" podUID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused"
Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.199782 5039 status_manager.go:851] "Failed to get status for pod" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" pod="openshift-marketplace/redhat-operators-tbppj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tbppj\": dial tcp 38.102.83.188:6443: connect: connection refused"
Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.207471 5039 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.188:6443: connect: connection refused"
Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.208030 5039 status_manager.go:851] "Failed to get status for pod" podUID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused"
Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.210400 5039 status_manager.go:851] "Failed to get status for pod" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" pod="openshift-marketplace/redhat-operators-tbppj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tbppj\": dial tcp 38.102.83.188:6443: connect: connection refused"
Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.226337 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/517c44d7-5a31-4d7c-9918-9e051f06902c-catalog-content\") pod \"517c44d7-5a31-4d7c-9918-9e051f06902c\" (UID: \"517c44d7-5a31-4d7c-9918-9e051f06902c\") "
Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.226408 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.226452 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wk4tj\" (UniqueName: \"kubernetes.io/projected/517c44d7-5a31-4d7c-9918-9e051f06902c-kube-api-access-wk4tj\") pod \"517c44d7-5a31-4d7c-9918-9e051f06902c\" (UID: \"517c44d7-5a31-4d7c-9918-9e051f06902c\") "
Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.226472 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.226490 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/517c44d7-5a31-4d7c-9918-9e051f06902c-utilities\") pod \"517c44d7-5a31-4d7c-9918-9e051f06902c\" (UID: \"517c44d7-5a31-4d7c-9918-9e051f06902c\") "
Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.226532 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.226528 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.226563 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.226649 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.226769 5039 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.226780 5039 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\""
Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.226789 5039 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\""
Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.227594 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/517c44d7-5a31-4d7c-9918-9e051f06902c-utilities" (OuterVolumeSpecName: "utilities") pod "517c44d7-5a31-4d7c-9918-9e051f06902c" (UID: "517c44d7-5a31-4d7c-9918-9e051f06902c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.229907 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/517c44d7-5a31-4d7c-9918-9e051f06902c-kube-api-access-wk4tj" (OuterVolumeSpecName: "kube-api-access-wk4tj") pod "517c44d7-5a31-4d7c-9918-9e051f06902c" (UID: "517c44d7-5a31-4d7c-9918-9e051f06902c"). InnerVolumeSpecName "kube-api-access-wk4tj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.328488 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wk4tj\" (UniqueName: \"kubernetes.io/projected/517c44d7-5a31-4d7c-9918-9e051f06902c-kube-api-access-wk4tj\") on node \"crc\" DevicePath \"\""
Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.328686 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/517c44d7-5a31-4d7c-9918-9e051f06902c-utilities\") on node \"crc\" DevicePath \"\""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.429656 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/517c44d7-5a31-4d7c-9918-9e051f06902c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.523708 5039 status_manager.go:851] "Failed to get status for pod" podUID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.524369 5039 status_manager.go:851] "Failed to get status for pod" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" pod="openshift-marketplace/redhat-operators-tbppj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tbppj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.525146 5039 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.525876 5039 status_manager.go:851] "Failed to get status for pod" podUID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.526464 5039 status_manager.go:851] "Failed to get status for pod" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" pod="openshift-marketplace/redhat-operators-tbppj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tbppj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.526869 5039 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:24 crc kubenswrapper[5039]: I0130 13:08:24.115897 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 30 13:08:24 crc kubenswrapper[5039]: I0130 13:08:24.393680 5039 scope.go:117] "RemoveContainer" containerID="4c085b7dbceda7ee340ac27580ace8fe47ea9455d4a64de6260121be5e836693" Jan 30 13:08:24 crc kubenswrapper[5039]: I0130 13:08:24.615155 5039 scope.go:117] "RemoveContainer" containerID="6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527" Jan 30 13:08:24 crc kubenswrapper[5039]: E0130 13:08:24.615693 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\": container with ID starting with 
6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527 not found: ID does not exist" containerID="6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527" Jan 30 13:08:24 crc kubenswrapper[5039]: I0130 13:08:24.615741 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527"} err="failed to get container status \"6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\": rpc error: code = NotFound desc = could not find container \"6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\": container with ID starting with 6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527 not found: ID does not exist" Jan 30 13:08:24 crc kubenswrapper[5039]: I0130 13:08:24.615776 5039 scope.go:117] "RemoveContainer" containerID="f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a" Jan 30 13:08:24 crc kubenswrapper[5039]: W0130 13:08:24.638093 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-f7d821e9d389729034f11db8261116cd37692fd917b9e52ad266a78f0cfaa655 WatchSource:0}: Error finding container f7d821e9d389729034f11db8261116cd37692fd917b9e52ad266a78f0cfaa655: Status 404 returned error can't find the container with id f7d821e9d389729034f11db8261116cd37692fd917b9e52ad266a78f0cfaa655 Jan 30 13:08:24 crc kubenswrapper[5039]: I0130 13:08:24.878587 5039 scope.go:117] "RemoveContainer" containerID="1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592" Jan 30 13:08:25 crc kubenswrapper[5039]: I0130 13:08:25.019140 5039 scope.go:117] "RemoveContainer" containerID="85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed" Jan 30 13:08:25 crc kubenswrapper[5039]: I0130 13:08:25.045239 5039 scope.go:117] "RemoveContainer" containerID="8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755" Jan 30 13:08:25 crc kubenswrapper[5039]: I0130 13:08:25.099394 5039 scope.go:117] "RemoveContainer" containerID="11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44" Jan 30 13:08:25 crc kubenswrapper[5039]: I0130 13:08:25.159907 5039 scope.go:117] "RemoveContainer" containerID="4c085b7dbceda7ee340ac27580ace8fe47ea9455d4a64de6260121be5e836693" Jan 30 13:08:25 crc kubenswrapper[5039]: E0130 13:08:25.161943 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c085b7dbceda7ee340ac27580ace8fe47ea9455d4a64de6260121be5e836693\": container with ID starting with 4c085b7dbceda7ee340ac27580ace8fe47ea9455d4a64de6260121be5e836693 not found: ID does not exist" containerID="4c085b7dbceda7ee340ac27580ace8fe47ea9455d4a64de6260121be5e836693" Jan 30 13:08:25 crc kubenswrapper[5039]: I0130 13:08:25.162060 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c085b7dbceda7ee340ac27580ace8fe47ea9455d4a64de6260121be5e836693"} err="failed to get container status \"4c085b7dbceda7ee340ac27580ace8fe47ea9455d4a64de6260121be5e836693\": rpc error: code = NotFound desc = could not find container \"4c085b7dbceda7ee340ac27580ace8fe47ea9455d4a64de6260121be5e836693\": container with ID starting with 4c085b7dbceda7ee340ac27580ace8fe47ea9455d4a64de6260121be5e836693 not found: ID does not exist" Jan 30 13:08:25 crc kubenswrapper[5039]: I0130 13:08:25.162097 5039 scope.go:117] "RemoveContainer" 
containerID="6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527" Jan 30 13:08:25 crc kubenswrapper[5039]: I0130 13:08:25.163997 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527"} err="failed to get container status \"6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\": rpc error: code = NotFound desc = could not find container \"6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\": container with ID starting with 6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527 not found: ID does not exist" Jan 30 13:08:25 crc kubenswrapper[5039]: I0130 13:08:25.164064 5039 scope.go:117] "RemoveContainer" containerID="f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a" Jan 30 13:08:25 crc kubenswrapper[5039]: E0130 13:08:25.164875 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\": container with ID starting with f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a not found: ID does not exist" containerID="f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a" Jan 30 13:08:25 crc kubenswrapper[5039]: I0130 13:08:25.164908 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a"} err="failed to get container status \"f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\": rpc error: code = NotFound desc = could not find container \"f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\": container with ID starting with f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a not found: ID does not exist" Jan 30 13:08:25 crc kubenswrapper[5039]: I0130 13:08:25.164929 5039 scope.go:117] "RemoveContainer" containerID="1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592" Jan 30 13:08:25 crc kubenswrapper[5039]: E0130 13:08:25.165832 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\": container with ID starting with 1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592 not found: ID does not exist" containerID="1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592" Jan 30 13:08:25 crc kubenswrapper[5039]: I0130 13:08:25.165858 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592"} err="failed to get container status \"1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\": rpc error: code = NotFound desc = could not find container \"1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\": container with ID starting with 1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592 not found: ID does not exist" Jan 30 13:08:25 crc kubenswrapper[5039]: I0130 13:08:25.165876 5039 scope.go:117] "RemoveContainer" containerID="85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed" Jan 30 13:08:25 crc kubenswrapper[5039]: E0130 13:08:25.166234 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\": container with ID starting with 85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed not found: ID does not exist" containerID="85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed" Jan 30 13:08:25 crc kubenswrapper[5039]: I0130 13:08:25.166258 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed"} err="failed to get container status \"85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\": rpc error: code = NotFound desc = could not find container \"85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\": container with ID starting with 85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed not found: ID does not exist" Jan 30 13:08:25 crc kubenswrapper[5039]: I0130 13:08:25.166276 5039 scope.go:117] "RemoveContainer" containerID="8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755" Jan 30 13:08:25 crc kubenswrapper[5039]: E0130 13:08:25.166708 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\": container with ID starting with 8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755 not found: ID does not exist" containerID="8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755" Jan 30 13:08:25 crc kubenswrapper[5039]: I0130 13:08:25.166735 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755"} err="failed to get container status \"8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\": rpc error: code = NotFound desc = could not find container \"8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\": container with ID starting with 8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755 not found: ID does not exist" Jan 30 13:08:25 crc kubenswrapper[5039]: I0130 13:08:25.166754 5039 scope.go:117] "RemoveContainer" containerID="11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44" Jan 30 13:08:25 crc kubenswrapper[5039]: E0130 13:08:25.167199 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\": container with ID starting with 11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44 not found: ID does not exist" containerID="11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44" Jan 30 13:08:25 crc kubenswrapper[5039]: I0130 13:08:25.167225 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44"} err="failed to get container status \"11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\": rpc error: code = NotFound desc = could not find container \"11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\": container with ID starting with 11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44 not found: ID does not exist" Jan 30 13:08:25 crc kubenswrapper[5039]: I0130 13:08:25.167242 5039 scope.go:117] "RemoveContainer" containerID="b08cf32d269a2ec1965ff4e55151985bfb1983375110d0c514cec8ea99b2848e" Jan 30 13:08:25 crc 
kubenswrapper[5039]: I0130 13:08:25.197679 5039 scope.go:117] "RemoveContainer" containerID="22276cc2d1c579b7152f9b8a26ce3c33abca096c42567f84506866c4a659f316" Jan 30 13:08:25 crc kubenswrapper[5039]: I0130 13:08:25.211463 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"f7d821e9d389729034f11db8261116cd37692fd917b9e52ad266a78f0cfaa655"} Jan 30 13:08:25 crc kubenswrapper[5039]: I0130 13:08:25.242459 5039 scope.go:117] "RemoveContainer" containerID="2301f8d52aa86a717ffadb8853e293c3e6956f6bb63c70fb92321bd93ab3fb41" Jan 30 13:08:25 crc kubenswrapper[5039]: E0130 13:08:25.827233 5039 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.188:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-operators-tbppj.188f842bcc5b88fd openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-operators-tbppj,UID:517c44d7-5a31-4d7c-9918-9e051f06902c,APIVersion:v1,ResourceVersion:28725,FieldPath:spec.containers{registry-server},},Reason:Unhealthy,Message:Readiness probe errored: rpc error: code = NotFound desc = container is not created or running: checking if PID of b08cf32d269a2ec1965ff4e55151985bfb1983375110d0c514cec8ea99b2848e is running failed: container process not found,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 13:08:22.226905341 +0000 UTC m=+266.887586578,LastTimestamp:2026-01-30 13:08:22.226905341 +0000 UTC m=+266.887586578,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.095685 5039 status_manager.go:851] "Failed to get status for pod" podUID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.096463 5039 status_manager.go:851] "Failed to get status for pod" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" pod="openshift-marketplace/redhat-operators-tbppj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tbppj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.219549 5039 generic.go:334] "Generic (PLEG): container finished" podID="66476d2f-ef08-4051-97a8-c2edb46b7004" containerID="30847fe769bc8a13cc5cb68453925292f21a34365473385ee3c77773bf1c0afc" exitCode=0 Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.219617 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ccjvb" event={"ID":"66476d2f-ef08-4051-97a8-c2edb46b7004","Type":"ContainerDied","Data":"30847fe769bc8a13cc5cb68453925292f21a34365473385ee3c77773bf1c0afc"} Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.220520 5039 status_manager.go:851] "Failed to get status for pod" podUID="66476d2f-ef08-4051-97a8-c2edb46b7004" pod="openshift-marketplace/redhat-marketplace-ccjvb" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ccjvb\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.220833 5039 status_manager.go:851] "Failed to get status for pod" podUID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.221070 5039 status_manager.go:851] "Failed to get status for pod" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" pod="openshift-marketplace/redhat-operators-tbppj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tbppj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.221207 5039 generic.go:334] "Generic (PLEG): container finished" podID="f64e1921-5488-46f8-bf3a-af141cd0c277" containerID="c86093ea909430c6d46a9c228d560b1685472081f9105500ca31bdfd00b072b7" exitCode=0 Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.221273 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wksws" event={"ID":"f64e1921-5488-46f8-bf3a-af141cd0c277","Type":"ContainerDied","Data":"c86093ea909430c6d46a9c228d560b1685472081f9105500ca31bdfd00b072b7"} Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.221797 5039 status_manager.go:851] "Failed to get status for pod" podUID="66476d2f-ef08-4051-97a8-c2edb46b7004" pod="openshift-marketplace/redhat-marketplace-ccjvb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ccjvb\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.222250 5039 status_manager.go:851] "Failed to get status for pod" podUID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.222545 5039 status_manager.go:851] "Failed to get status for pod" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" pod="openshift-marketplace/redhat-operators-tbppj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tbppj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.223556 5039 status_manager.go:851] "Failed to get status for pod" podUID="f64e1921-5488-46f8-bf3a-af141cd0c277" pod="openshift-marketplace/community-operators-wksws" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wksws\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.224321 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gx2hg" event={"ID":"c79ca838-03cc-4885-969d-5aad41173112","Type":"ContainerStarted","Data":"f15f3bb95694a0780aff11c21de0b08521ee9ef476a832532057da09f9c8ec4b"} Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.225532 5039 status_manager.go:851] "Failed to get status for pod" 
podUID="c79ca838-03cc-4885-969d-5aad41173112" pod="openshift-marketplace/redhat-operators-gx2hg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gx2hg\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.225811 5039 status_manager.go:851] "Failed to get status for pod" podUID="66476d2f-ef08-4051-97a8-c2edb46b7004" pod="openshift-marketplace/redhat-marketplace-ccjvb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ccjvb\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.226122 5039 status_manager.go:851] "Failed to get status for pod" podUID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.226414 5039 status_manager.go:851] "Failed to get status for pod" podUID="f64e1921-5488-46f8-bf3a-af141cd0c277" pod="openshift-marketplace/community-operators-wksws" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wksws\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.226687 5039 status_manager.go:851] "Failed to get status for pod" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" pod="openshift-marketplace/redhat-operators-tbppj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tbppj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.228264 5039 generic.go:334] "Generic (PLEG): container finished" podID="80cb63fe-71b1-42e7-ac04-a81c89920b46" containerID="71e967d6ddae04f5b96a882c080f0d743adabe6a944a00ee5d11ad19c57421fd" exitCode=0 Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.228334 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-759rj" event={"ID":"80cb63fe-71b1-42e7-ac04-a81c89920b46","Type":"ContainerDied","Data":"71e967d6ddae04f5b96a882c080f0d743adabe6a944a00ee5d11ad19c57421fd"} Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.228844 5039 status_manager.go:851] "Failed to get status for pod" podUID="c79ca838-03cc-4885-969d-5aad41173112" pod="openshift-marketplace/redhat-operators-gx2hg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gx2hg\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.229191 5039 status_manager.go:851] "Failed to get status for pod" podUID="66476d2f-ef08-4051-97a8-c2edb46b7004" pod="openshift-marketplace/redhat-marketplace-ccjvb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ccjvb\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.231306 5039 status_manager.go:851] "Failed to get status for pod" podUID="80cb63fe-71b1-42e7-ac04-a81c89920b46" pod="openshift-marketplace/redhat-marketplace-759rj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-759rj\": dial tcp 
38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.231579 5039 status_manager.go:851] "Failed to get status for pod" podUID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.231824 5039 status_manager.go:851] "Failed to get status for pod" podUID="f64e1921-5488-46f8-bf3a-af141cd0c277" pod="openshift-marketplace/community-operators-wksws" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wksws\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.232203 5039 status_manager.go:851] "Failed to get status for pod" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" pod="openshift-marketplace/redhat-operators-tbppj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tbppj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.232467 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"ea76b6c351427243f41c3b84398d025204578ecbb0c3e7f25e9e08d4a0a5d765"} Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.233094 5039 status_manager.go:851] "Failed to get status for pod" podUID="f64e1921-5488-46f8-bf3a-af141cd0c277" pod="openshift-marketplace/community-operators-wksws" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wksws\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.233300 5039 status_manager.go:851] "Failed to get status for pod" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" pod="openshift-marketplace/redhat-operators-tbppj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tbppj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.233523 5039 status_manager.go:851] "Failed to get status for pod" podUID="c79ca838-03cc-4885-969d-5aad41173112" pod="openshift-marketplace/redhat-operators-gx2hg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gx2hg\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.233701 5039 status_manager.go:851] "Failed to get status for pod" podUID="66476d2f-ef08-4051-97a8-c2edb46b7004" pod="openshift-marketplace/redhat-marketplace-ccjvb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ccjvb\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: E0130 13:08:26.233810 5039 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.188:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.233907 5039 
status_manager.go:851] "Failed to get status for pod" podUID="80cb63fe-71b1-42e7-ac04-a81c89920b46" pod="openshift-marketplace/redhat-marketplace-759rj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-759rj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.234851 5039 status_manager.go:851] "Failed to get status for pod" podUID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.237085 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s5lrd" event={"ID":"5613a050-2fc6-4554-bebe-a8afa71c3815","Type":"ContainerStarted","Data":"e73e09cc2f1843b84342b3f32649f363cde33cd5ff49fddd8214ccdf09009a1b"} Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.237853 5039 status_manager.go:851] "Failed to get status for pod" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" pod="openshift-marketplace/redhat-operators-tbppj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tbppj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.238199 5039 status_manager.go:851] "Failed to get status for pod" podUID="f64e1921-5488-46f8-bf3a-af141cd0c277" pod="openshift-marketplace/community-operators-wksws" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wksws\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.238600 5039 status_manager.go:851] "Failed to get status for pod" podUID="c79ca838-03cc-4885-969d-5aad41173112" pod="openshift-marketplace/redhat-operators-gx2hg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gx2hg\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.238881 5039 status_manager.go:851] "Failed to get status for pod" podUID="66476d2f-ef08-4051-97a8-c2edb46b7004" pod="openshift-marketplace/redhat-marketplace-ccjvb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ccjvb\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.239172 5039 status_manager.go:851] "Failed to get status for pod" podUID="5613a050-2fc6-4554-bebe-a8afa71c3815" pod="openshift-marketplace/certified-operators-s5lrd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s5lrd\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.239462 5039 status_manager.go:851] "Failed to get status for pod" podUID="80cb63fe-71b1-42e7-ac04-a81c89920b46" pod="openshift-marketplace/redhat-marketplace-759rj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-759rj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.239716 5039 status_manager.go:851] "Failed to get status for pod" 
podUID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.241977 5039 generic.go:334] "Generic (PLEG): container finished" podID="63af1747-5ca2-4c06-89fa-dc040184452d" containerID="a20937b28e536e2a3471ddd615a7a6213398aaf944dd98ce3a21c2812cda94e5" exitCode=0 Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.242087 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gqxts" event={"ID":"63af1747-5ca2-4c06-89fa-dc040184452d","Type":"ContainerDied","Data":"a20937b28e536e2a3471ddd615a7a6213398aaf944dd98ce3a21c2812cda94e5"} Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.243100 5039 status_manager.go:851] "Failed to get status for pod" podUID="66476d2f-ef08-4051-97a8-c2edb46b7004" pod="openshift-marketplace/redhat-marketplace-ccjvb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ccjvb\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.243285 5039 status_manager.go:851] "Failed to get status for pod" podUID="63af1747-5ca2-4c06-89fa-dc040184452d" pod="openshift-marketplace/community-operators-gqxts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gqxts\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.243448 5039 status_manager.go:851] "Failed to get status for pod" podUID="5613a050-2fc6-4554-bebe-a8afa71c3815" pod="openshift-marketplace/certified-operators-s5lrd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s5lrd\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.243646 5039 status_manager.go:851] "Failed to get status for pod" podUID="80cb63fe-71b1-42e7-ac04-a81c89920b46" pod="openshift-marketplace/redhat-marketplace-759rj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-759rj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.243862 5039 status_manager.go:851] "Failed to get status for pod" podUID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.244366 5039 status_manager.go:851] "Failed to get status for pod" podUID="f64e1921-5488-46f8-bf3a-af141cd0c277" pod="openshift-marketplace/community-operators-wksws" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wksws\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.244867 5039 status_manager.go:851] "Failed to get status for pod" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" pod="openshift-marketplace/redhat-operators-tbppj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tbppj\": dial tcp 
38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.245318 5039 status_manager.go:851] "Failed to get status for pod" podUID="c79ca838-03cc-4885-969d-5aad41173112" pod="openshift-marketplace/redhat-operators-gx2hg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gx2hg\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.248092 5039 generic.go:334] "Generic (PLEG): container finished" podID="52b110b9-c1bb-4f99-b0a1-56327188c912" containerID="9c679759e568016eac462a37564b74cd51d8a0793d513fe3afe6d93accae5ae5" exitCode=0 Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.248126 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prfhj" event={"ID":"52b110b9-c1bb-4f99-b0a1-56327188c912","Type":"ContainerDied","Data":"9c679759e568016eac462a37564b74cd51d8a0793d513fe3afe6d93accae5ae5"} Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.249646 5039 status_manager.go:851] "Failed to get status for pod" podUID="c79ca838-03cc-4885-969d-5aad41173112" pod="openshift-marketplace/redhat-operators-gx2hg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gx2hg\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.249857 5039 status_manager.go:851] "Failed to get status for pod" podUID="66476d2f-ef08-4051-97a8-c2edb46b7004" pod="openshift-marketplace/redhat-marketplace-ccjvb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ccjvb\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.250041 5039 status_manager.go:851] "Failed to get status for pod" podUID="63af1747-5ca2-4c06-89fa-dc040184452d" pod="openshift-marketplace/community-operators-gqxts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gqxts\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.250190 5039 status_manager.go:851] "Failed to get status for pod" podUID="5613a050-2fc6-4554-bebe-a8afa71c3815" pod="openshift-marketplace/certified-operators-s5lrd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s5lrd\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.250333 5039 status_manager.go:851] "Failed to get status for pod" podUID="80cb63fe-71b1-42e7-ac04-a81c89920b46" pod="openshift-marketplace/redhat-marketplace-759rj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-759rj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.250478 5039 status_manager.go:851] "Failed to get status for pod" podUID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.250856 5039 status_manager.go:851] "Failed to get status for pod" podUID="52b110b9-c1bb-4f99-b0a1-56327188c912" 
pod="openshift-marketplace/certified-operators-prfhj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-prfhj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.251137 5039 status_manager.go:851] "Failed to get status for pod" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" pod="openshift-marketplace/redhat-operators-tbppj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tbppj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.251290 5039 status_manager.go:851] "Failed to get status for pod" podUID="f64e1921-5488-46f8-bf3a-af141cd0c277" pod="openshift-marketplace/community-operators-wksws" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wksws\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: E0130 13:08:26.289306 5039 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: E0130 13:08:26.289667 5039 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: E0130 13:08:26.290092 5039 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: E0130 13:08:26.290328 5039 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: E0130 13:08:26.290585 5039 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.290614 5039 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 30 13:08:26 crc kubenswrapper[5039]: E0130 13:08:26.290825 5039 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" interval="200ms" Jan 30 13:08:26 crc kubenswrapper[5039]: E0130 13:08:26.491736 5039 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" interval="400ms" Jan 30 13:08:26 crc kubenswrapper[5039]: E0130 13:08:26.892619 5039 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" interval="800ms" Jan 30 13:08:27 crc kubenswrapper[5039]: E0130 13:08:27.254199 5039 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.188:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:08:27 crc kubenswrapper[5039]: E0130 13:08:27.693827 5039 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" interval="1.6s" Jan 30 13:08:27 crc kubenswrapper[5039]: E0130 13:08:27.889246 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:08:27Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:08:27Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:08:27Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:08:27Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:318e7c877b3cf6c5b263eeb634c46a3f24a2c88cd95c89829287f19b1a6f8bab\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:36ccdfb4dced86283da1b94956e2e4a71df6b016812849741c7a3c8867892f8f\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1679208681},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a7598d8f0c280ef5ea17585638eb9a1da7cb4b597886b2a8baada612c4ff908c\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:cb548db49a0e34354c020b8f19cb922b4ade7174abf0155a4b7b65e8e0281341\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1186979061},{\\\"names\\\":[],\\\"sizeBytes\\\":1180692192},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:768
8bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":5
10526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":4739581
44},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:27 crc kubenswrapper[5039]: E0130 13:08:27.890127 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:27 crc kubenswrapper[5039]: E0130 13:08:27.890461 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:27 crc kubenswrapper[5039]: E0130 13:08:27.890824 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:27 crc kubenswrapper[5039]: E0130 13:08:27.891234 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:27 crc kubenswrapper[5039]: E0130 13:08:27.891257 5039 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.261322 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gqxts" event={"ID":"63af1747-5ca2-4c06-89fa-dc040184452d","Type":"ContainerStarted","Data":"9d0dd436417343fb53625a183289a9062cac913e3a04651ac778a049490524e4"} Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.264622 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prfhj" event={"ID":"52b110b9-c1bb-4f99-b0a1-56327188c912","Type":"ContainerStarted","Data":"e09e285ff2247de470bb21872e9f9dacc7f06a97919238817387eaf3927a6ea9"} Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.265769 5039 status_manager.go:851] "Failed to get status for pod" podUID="66476d2f-ef08-4051-97a8-c2edb46b7004" pod="openshift-marketplace/redhat-marketplace-ccjvb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ccjvb\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.266247 5039 status_manager.go:851] "Failed to get status for pod" podUID="63af1747-5ca2-4c06-89fa-dc040184452d" pod="openshift-marketplace/community-operators-gqxts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gqxts\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 
13:08:28.266670 5039 status_manager.go:851] "Failed to get status for pod" podUID="5613a050-2fc6-4554-bebe-a8afa71c3815" pod="openshift-marketplace/certified-operators-s5lrd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s5lrd\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.267537 5039 status_manager.go:851] "Failed to get status for pod" podUID="80cb63fe-71b1-42e7-ac04-a81c89920b46" pod="openshift-marketplace/redhat-marketplace-759rj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-759rj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.267948 5039 status_manager.go:851] "Failed to get status for pod" podUID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.268197 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ccjvb" event={"ID":"66476d2f-ef08-4051-97a8-c2edb46b7004","Type":"ContainerStarted","Data":"5ce6a578f8f1cdbcba7daff7b0d7d01a08062ea9ddeead9f73f5f06efc5ddbfe"} Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.268444 5039 status_manager.go:851] "Failed to get status for pod" podUID="52b110b9-c1bb-4f99-b0a1-56327188c912" pod="openshift-marketplace/certified-operators-prfhj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-prfhj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.268958 5039 status_manager.go:851] "Failed to get status for pod" podUID="f64e1921-5488-46f8-bf3a-af141cd0c277" pod="openshift-marketplace/community-operators-wksws" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wksws\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.269254 5039 status_manager.go:851] "Failed to get status for pod" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" pod="openshift-marketplace/redhat-operators-tbppj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tbppj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.269432 5039 status_manager.go:851] "Failed to get status for pod" podUID="c79ca838-03cc-4885-969d-5aad41173112" pod="openshift-marketplace/redhat-operators-gx2hg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gx2hg\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.269701 5039 status_manager.go:851] "Failed to get status for pod" podUID="80cb63fe-71b1-42e7-ac04-a81c89920b46" pod="openshift-marketplace/redhat-marketplace-759rj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-759rj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.269905 5039 status_manager.go:851] "Failed to get status for pod" 
podUID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.270088 5039 status_manager.go:851] "Failed to get status for pod" podUID="52b110b9-c1bb-4f99-b0a1-56327188c912" pod="openshift-marketplace/certified-operators-prfhj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-prfhj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.270237 5039 status_manager.go:851] "Failed to get status for pod" podUID="f64e1921-5488-46f8-bf3a-af141cd0c277" pod="openshift-marketplace/community-operators-wksws" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wksws\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.270417 5039 status_manager.go:851] "Failed to get status for pod" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" pod="openshift-marketplace/redhat-operators-tbppj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tbppj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.270443 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wksws" event={"ID":"f64e1921-5488-46f8-bf3a-af141cd0c277","Type":"ContainerStarted","Data":"39abc4a636510ae2734a282ba54cf242c90facdaa073b423320aaedcef8f5771"} Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.270658 5039 status_manager.go:851] "Failed to get status for pod" podUID="c79ca838-03cc-4885-969d-5aad41173112" pod="openshift-marketplace/redhat-operators-gx2hg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gx2hg\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.270964 5039 status_manager.go:851] "Failed to get status for pod" podUID="66476d2f-ef08-4051-97a8-c2edb46b7004" pod="openshift-marketplace/redhat-marketplace-ccjvb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ccjvb\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.271174 5039 status_manager.go:851] "Failed to get status for pod" podUID="63af1747-5ca2-4c06-89fa-dc040184452d" pod="openshift-marketplace/community-operators-gqxts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gqxts\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.271376 5039 status_manager.go:851] "Failed to get status for pod" podUID="5613a050-2fc6-4554-bebe-a8afa71c3815" pod="openshift-marketplace/certified-operators-s5lrd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s5lrd\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.271866 5039 status_manager.go:851] "Failed to get status for pod" podUID="66476d2f-ef08-4051-97a8-c2edb46b7004" 
pod="openshift-marketplace/redhat-marketplace-ccjvb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ccjvb\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.272078 5039 status_manager.go:851] "Failed to get status for pod" podUID="63af1747-5ca2-4c06-89fa-dc040184452d" pod="openshift-marketplace/community-operators-gqxts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gqxts\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.272267 5039 status_manager.go:851] "Failed to get status for pod" podUID="5613a050-2fc6-4554-bebe-a8afa71c3815" pod="openshift-marketplace/certified-operators-s5lrd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s5lrd\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.272461 5039 status_manager.go:851] "Failed to get status for pod" podUID="80cb63fe-71b1-42e7-ac04-a81c89920b46" pod="openshift-marketplace/redhat-marketplace-759rj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-759rj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.272652 5039 status_manager.go:851] "Failed to get status for pod" podUID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.272802 5039 status_manager.go:851] "Failed to get status for pod" podUID="52b110b9-c1bb-4f99-b0a1-56327188c912" pod="openshift-marketplace/certified-operators-prfhj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-prfhj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.272944 5039 status_manager.go:851] "Failed to get status for pod" podUID="f64e1921-5488-46f8-bf3a-af141cd0c277" pod="openshift-marketplace/community-operators-wksws" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wksws\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.273358 5039 status_manager.go:851] "Failed to get status for pod" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" pod="openshift-marketplace/redhat-operators-tbppj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tbppj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.273499 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-759rj" event={"ID":"80cb63fe-71b1-42e7-ac04-a81c89920b46","Type":"ContainerStarted","Data":"67680d5ed17f8118a174f5d6e2c193a9b4df4a3b5d7a28b8daa35ba5b19fb9a4"} Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.273567 5039 status_manager.go:851] "Failed to get status for pod" podUID="c79ca838-03cc-4885-969d-5aad41173112" pod="openshift-marketplace/redhat-operators-gx2hg" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gx2hg\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.274186 5039 status_manager.go:851] "Failed to get status for pod" podUID="66476d2f-ef08-4051-97a8-c2edb46b7004" pod="openshift-marketplace/redhat-marketplace-ccjvb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ccjvb\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.274454 5039 status_manager.go:851] "Failed to get status for pod" podUID="63af1747-5ca2-4c06-89fa-dc040184452d" pod="openshift-marketplace/community-operators-gqxts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gqxts\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.274776 5039 status_manager.go:851] "Failed to get status for pod" podUID="5613a050-2fc6-4554-bebe-a8afa71c3815" pod="openshift-marketplace/certified-operators-s5lrd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s5lrd\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.275202 5039 status_manager.go:851] "Failed to get status for pod" podUID="80cb63fe-71b1-42e7-ac04-a81c89920b46" pod="openshift-marketplace/redhat-marketplace-759rj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-759rj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.275521 5039 status_manager.go:851] "Failed to get status for pod" podUID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.275836 5039 status_manager.go:851] "Failed to get status for pod" podUID="52b110b9-c1bb-4f99-b0a1-56327188c912" pod="openshift-marketplace/certified-operators-prfhj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-prfhj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.276067 5039 status_manager.go:851] "Failed to get status for pod" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" pod="openshift-marketplace/redhat-operators-tbppj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tbppj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.276292 5039 status_manager.go:851] "Failed to get status for pod" podUID="f64e1921-5488-46f8-bf3a-af141cd0c277" pod="openshift-marketplace/community-operators-wksws" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wksws\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.276626 5039 status_manager.go:851] "Failed to get status for pod" podUID="c79ca838-03cc-4885-969d-5aad41173112" pod="openshift-marketplace/redhat-operators-gx2hg" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gx2hg\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.629135 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-s5lrd" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.629451 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-s5lrd" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.682054 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-s5lrd" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.682654 5039 status_manager.go:851] "Failed to get status for pod" podUID="f64e1921-5488-46f8-bf3a-af141cd0c277" pod="openshift-marketplace/community-operators-wksws" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wksws\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.683137 5039 status_manager.go:851] "Failed to get status for pod" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" pod="openshift-marketplace/redhat-operators-tbppj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tbppj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.683641 5039 status_manager.go:851] "Failed to get status for pod" podUID="c79ca838-03cc-4885-969d-5aad41173112" pod="openshift-marketplace/redhat-operators-gx2hg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gx2hg\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.683919 5039 status_manager.go:851] "Failed to get status for pod" podUID="66476d2f-ef08-4051-97a8-c2edb46b7004" pod="openshift-marketplace/redhat-marketplace-ccjvb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ccjvb\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.684220 5039 status_manager.go:851] "Failed to get status for pod" podUID="63af1747-5ca2-4c06-89fa-dc040184452d" pod="openshift-marketplace/community-operators-gqxts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gqxts\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.684463 5039 status_manager.go:851] "Failed to get status for pod" podUID="5613a050-2fc6-4554-bebe-a8afa71c3815" pod="openshift-marketplace/certified-operators-s5lrd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s5lrd\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.684722 5039 status_manager.go:851] "Failed to get status for pod" podUID="80cb63fe-71b1-42e7-ac04-a81c89920b46" pod="openshift-marketplace/redhat-marketplace-759rj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-759rj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 
13:08:28.684984 5039 status_manager.go:851] "Failed to get status for pod" podUID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.685239 5039 status_manager.go:851] "Failed to get status for pod" podUID="52b110b9-c1bb-4f99-b0a1-56327188c912" pod="openshift-marketplace/certified-operators-prfhj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-prfhj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.779031 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wksws" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.779088 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wksws" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.996366 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-prfhj" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.996579 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-prfhj" Jan 30 13:08:29 crc kubenswrapper[5039]: I0130 13:08:29.278686 5039 status_manager.go:851] "Failed to get status for pod" podUID="52b110b9-c1bb-4f99-b0a1-56327188c912" pod="openshift-marketplace/certified-operators-prfhj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-prfhj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:29 crc kubenswrapper[5039]: I0130 13:08:29.280291 5039 status_manager.go:851] "Failed to get status for pod" podUID="f64e1921-5488-46f8-bf3a-af141cd0c277" pod="openshift-marketplace/community-operators-wksws" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wksws\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:29 crc kubenswrapper[5039]: I0130 13:08:29.280611 5039 status_manager.go:851] "Failed to get status for pod" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" pod="openshift-marketplace/redhat-operators-tbppj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tbppj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:29 crc kubenswrapper[5039]: I0130 13:08:29.280846 5039 status_manager.go:851] "Failed to get status for pod" podUID="c79ca838-03cc-4885-969d-5aad41173112" pod="openshift-marketplace/redhat-operators-gx2hg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gx2hg\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:29 crc kubenswrapper[5039]: I0130 13:08:29.281135 5039 status_manager.go:851] "Failed to get status for pod" podUID="66476d2f-ef08-4051-97a8-c2edb46b7004" pod="openshift-marketplace/redhat-marketplace-ccjvb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ccjvb\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:29 crc kubenswrapper[5039]: I0130 13:08:29.281420 5039 
status_manager.go:851] "Failed to get status for pod" podUID="63af1747-5ca2-4c06-89fa-dc040184452d" pod="openshift-marketplace/community-operators-gqxts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gqxts\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:29 crc kubenswrapper[5039]: I0130 13:08:29.281651 5039 status_manager.go:851] "Failed to get status for pod" podUID="5613a050-2fc6-4554-bebe-a8afa71c3815" pod="openshift-marketplace/certified-operators-s5lrd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s5lrd\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:29 crc kubenswrapper[5039]: I0130 13:08:29.281989 5039 status_manager.go:851] "Failed to get status for pod" podUID="80cb63fe-71b1-42e7-ac04-a81c89920b46" pod="openshift-marketplace/redhat-marketplace-759rj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-759rj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:29 crc kubenswrapper[5039]: I0130 13:08:29.282282 5039 status_manager.go:851] "Failed to get status for pod" podUID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:29 crc kubenswrapper[5039]: E0130 13:08:29.295044 5039 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" interval="3.2s" Jan 30 13:08:29 crc kubenswrapper[5039]: I0130 13:08:29.308110 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gqxts" Jan 30 13:08:29 crc kubenswrapper[5039]: I0130 13:08:29.310029 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gqxts" Jan 30 13:08:29 crc kubenswrapper[5039]: I0130 13:08:29.816770 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-wksws" podUID="f64e1921-5488-46f8-bf3a-af141cd0c277" containerName="registry-server" probeResult="failure" output=< Jan 30 13:08:29 crc kubenswrapper[5039]: timeout: failed to connect service ":50051" within 1s Jan 30 13:08:29 crc kubenswrapper[5039]: > Jan 30 13:08:30 crc kubenswrapper[5039]: I0130 13:08:30.039631 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-prfhj" podUID="52b110b9-c1bb-4f99-b0a1-56327188c912" containerName="registry-server" probeResult="failure" output=< Jan 30 13:08:30 crc kubenswrapper[5039]: timeout: failed to connect service ":50051" within 1s Jan 30 13:08:30 crc kubenswrapper[5039]: > Jan 30 13:08:30 crc kubenswrapper[5039]: I0130 13:08:30.351407 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-gqxts" podUID="63af1747-5ca2-4c06-89fa-dc040184452d" containerName="registry-server" probeResult="failure" output=< Jan 30 13:08:30 crc kubenswrapper[5039]: timeout: failed to connect service ":50051" within 1s Jan 30 13:08:30 crc kubenswrapper[5039]: > Jan 30 13:08:30 crc kubenswrapper[5039]: I0130 13:08:30.766843 5039 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ccjvb" Jan 30 13:08:30 crc kubenswrapper[5039]: I0130 13:08:30.766910 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ccjvb" Jan 30 13:08:30 crc kubenswrapper[5039]: I0130 13:08:30.819400 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ccjvb" Jan 30 13:08:30 crc kubenswrapper[5039]: I0130 13:08:30.820064 5039 status_manager.go:851] "Failed to get status for pod" podUID="80cb63fe-71b1-42e7-ac04-a81c89920b46" pod="openshift-marketplace/redhat-marketplace-759rj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-759rj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:30 crc kubenswrapper[5039]: I0130 13:08:30.820436 5039 status_manager.go:851] "Failed to get status for pod" podUID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:30 crc kubenswrapper[5039]: I0130 13:08:30.820680 5039 status_manager.go:851] "Failed to get status for pod" podUID="52b110b9-c1bb-4f99-b0a1-56327188c912" pod="openshift-marketplace/certified-operators-prfhj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-prfhj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:30 crc kubenswrapper[5039]: I0130 13:08:30.821156 5039 status_manager.go:851] "Failed to get status for pod" podUID="f64e1921-5488-46f8-bf3a-af141cd0c277" pod="openshift-marketplace/community-operators-wksws" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wksws\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:30 crc kubenswrapper[5039]: I0130 13:08:30.821392 5039 status_manager.go:851] "Failed to get status for pod" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" pod="openshift-marketplace/redhat-operators-tbppj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tbppj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:30 crc kubenswrapper[5039]: I0130 13:08:30.822163 5039 status_manager.go:851] "Failed to get status for pod" podUID="c79ca838-03cc-4885-969d-5aad41173112" pod="openshift-marketplace/redhat-operators-gx2hg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gx2hg\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:30 crc kubenswrapper[5039]: I0130 13:08:30.822384 5039 status_manager.go:851] "Failed to get status for pod" podUID="66476d2f-ef08-4051-97a8-c2edb46b7004" pod="openshift-marketplace/redhat-marketplace-ccjvb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ccjvb\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:30 crc kubenswrapper[5039]: I0130 13:08:30.822557 5039 status_manager.go:851] "Failed to get status for pod" podUID="63af1747-5ca2-4c06-89fa-dc040184452d" pod="openshift-marketplace/community-operators-gqxts" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gqxts\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:30 crc kubenswrapper[5039]: I0130 13:08:30.822744 5039 status_manager.go:851] "Failed to get status for pod" podUID="5613a050-2fc6-4554-bebe-a8afa71c3815" pod="openshift-marketplace/certified-operators-s5lrd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s5lrd\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:31 crc kubenswrapper[5039]: I0130 13:08:31.165483 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-759rj" Jan 30 13:08:31 crc kubenswrapper[5039]: I0130 13:08:31.165808 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-759rj" Jan 30 13:08:31 crc kubenswrapper[5039]: I0130 13:08:31.212237 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-759rj" Jan 30 13:08:31 crc kubenswrapper[5039]: I0130 13:08:31.212580 5039 status_manager.go:851] "Failed to get status for pod" podUID="52b110b9-c1bb-4f99-b0a1-56327188c912" pod="openshift-marketplace/certified-operators-prfhj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-prfhj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:31 crc kubenswrapper[5039]: I0130 13:08:31.212738 5039 status_manager.go:851] "Failed to get status for pod" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" pod="openshift-marketplace/redhat-operators-tbppj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tbppj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:31 crc kubenswrapper[5039]: I0130 13:08:31.213005 5039 status_manager.go:851] "Failed to get status for pod" podUID="f64e1921-5488-46f8-bf3a-af141cd0c277" pod="openshift-marketplace/community-operators-wksws" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wksws\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:31 crc kubenswrapper[5039]: I0130 13:08:31.213405 5039 status_manager.go:851] "Failed to get status for pod" podUID="c79ca838-03cc-4885-969d-5aad41173112" pod="openshift-marketplace/redhat-operators-gx2hg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gx2hg\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:31 crc kubenswrapper[5039]: I0130 13:08:31.213590 5039 status_manager.go:851] "Failed to get status for pod" podUID="66476d2f-ef08-4051-97a8-c2edb46b7004" pod="openshift-marketplace/redhat-marketplace-ccjvb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ccjvb\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:31 crc kubenswrapper[5039]: I0130 13:08:31.213733 5039 status_manager.go:851] "Failed to get status for pod" podUID="63af1747-5ca2-4c06-89fa-dc040184452d" pod="openshift-marketplace/community-operators-gqxts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gqxts\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:31 crc kubenswrapper[5039]: I0130 
13:08:31.213873 5039 status_manager.go:851] "Failed to get status for pod" podUID="5613a050-2fc6-4554-bebe-a8afa71c3815" pod="openshift-marketplace/certified-operators-s5lrd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s5lrd\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:31 crc kubenswrapper[5039]: I0130 13:08:31.214051 5039 status_manager.go:851] "Failed to get status for pod" podUID="80cb63fe-71b1-42e7-ac04-a81c89920b46" pod="openshift-marketplace/redhat-marketplace-759rj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-759rj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:31 crc kubenswrapper[5039]: I0130 13:08:31.214270 5039 status_manager.go:851] "Failed to get status for pod" podUID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:31 crc kubenswrapper[5039]: I0130 13:08:31.767660 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gx2hg" Jan 30 13:08:31 crc kubenswrapper[5039]: I0130 13:08:31.768255 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gx2hg" Jan 30 13:08:31 crc kubenswrapper[5039]: I0130 13:08:31.806793 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gx2hg" Jan 30 13:08:31 crc kubenswrapper[5039]: I0130 13:08:31.807429 5039 status_manager.go:851] "Failed to get status for pod" podUID="5613a050-2fc6-4554-bebe-a8afa71c3815" pod="openshift-marketplace/certified-operators-s5lrd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s5lrd\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:31 crc kubenswrapper[5039]: I0130 13:08:31.807744 5039 status_manager.go:851] "Failed to get status for pod" podUID="80cb63fe-71b1-42e7-ac04-a81c89920b46" pod="openshift-marketplace/redhat-marketplace-759rj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-759rj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:31 crc kubenswrapper[5039]: I0130 13:08:31.808303 5039 status_manager.go:851] "Failed to get status for pod" podUID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:31 crc kubenswrapper[5039]: I0130 13:08:31.808864 5039 status_manager.go:851] "Failed to get status for pod" podUID="52b110b9-c1bb-4f99-b0a1-56327188c912" pod="openshift-marketplace/certified-operators-prfhj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-prfhj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:31 crc kubenswrapper[5039]: I0130 13:08:31.809231 5039 status_manager.go:851] "Failed to get status for pod" podUID="f64e1921-5488-46f8-bf3a-af141cd0c277" pod="openshift-marketplace/community-operators-wksws" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wksws\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:31 crc kubenswrapper[5039]: I0130 13:08:31.809531 5039 status_manager.go:851] "Failed to get status for pod" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" pod="openshift-marketplace/redhat-operators-tbppj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tbppj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:31 crc kubenswrapper[5039]: I0130 13:08:31.809838 5039 status_manager.go:851] "Failed to get status for pod" podUID="c79ca838-03cc-4885-969d-5aad41173112" pod="openshift-marketplace/redhat-operators-gx2hg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gx2hg\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:31 crc kubenswrapper[5039]: I0130 13:08:31.810209 5039 status_manager.go:851] "Failed to get status for pod" podUID="66476d2f-ef08-4051-97a8-c2edb46b7004" pod="openshift-marketplace/redhat-marketplace-ccjvb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ccjvb\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:31 crc kubenswrapper[5039]: I0130 13:08:31.810504 5039 status_manager.go:851] "Failed to get status for pod" podUID="63af1747-5ca2-4c06-89fa-dc040184452d" pod="openshift-marketplace/community-operators-gqxts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gqxts\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:32 crc kubenswrapper[5039]: I0130 13:08:32.339104 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gx2hg" Jan 30 13:08:32 crc kubenswrapper[5039]: I0130 13:08:32.339634 5039 status_manager.go:851] "Failed to get status for pod" podUID="c79ca838-03cc-4885-969d-5aad41173112" pod="openshift-marketplace/redhat-operators-gx2hg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gx2hg\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:32 crc kubenswrapper[5039]: I0130 13:08:32.340228 5039 status_manager.go:851] "Failed to get status for pod" podUID="66476d2f-ef08-4051-97a8-c2edb46b7004" pod="openshift-marketplace/redhat-marketplace-ccjvb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ccjvb\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:32 crc kubenswrapper[5039]: I0130 13:08:32.340793 5039 status_manager.go:851] "Failed to get status for pod" podUID="63af1747-5ca2-4c06-89fa-dc040184452d" pod="openshift-marketplace/community-operators-gqxts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gqxts\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:32 crc kubenswrapper[5039]: I0130 13:08:32.341275 5039 status_manager.go:851] "Failed to get status for pod" podUID="5613a050-2fc6-4554-bebe-a8afa71c3815" pod="openshift-marketplace/certified-operators-s5lrd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s5lrd\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:32 crc 
kubenswrapper[5039]: I0130 13:08:32.341543 5039 status_manager.go:851] "Failed to get status for pod" podUID="80cb63fe-71b1-42e7-ac04-a81c89920b46" pod="openshift-marketplace/redhat-marketplace-759rj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-759rj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:32 crc kubenswrapper[5039]: I0130 13:08:32.341857 5039 status_manager.go:851] "Failed to get status for pod" podUID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:32 crc kubenswrapper[5039]: I0130 13:08:32.342317 5039 status_manager.go:851] "Failed to get status for pod" podUID="52b110b9-c1bb-4f99-b0a1-56327188c912" pod="openshift-marketplace/certified-operators-prfhj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-prfhj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:32 crc kubenswrapper[5039]: I0130 13:08:32.342564 5039 status_manager.go:851] "Failed to get status for pod" podUID="f64e1921-5488-46f8-bf3a-af141cd0c277" pod="openshift-marketplace/community-operators-wksws" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wksws\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:32 crc kubenswrapper[5039]: I0130 13:08:32.342882 5039 status_manager.go:851] "Failed to get status for pod" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" pod="openshift-marketplace/redhat-operators-tbppj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tbppj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:32 crc kubenswrapper[5039]: E0130 13:08:32.496287 5039 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" interval="6.4s" Jan 30 13:08:33 crc kubenswrapper[5039]: I0130 13:08:33.304872 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 30 13:08:33 crc kubenswrapper[5039]: I0130 13:08:33.305210 5039 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3" exitCode=1 Jan 30 13:08:33 crc kubenswrapper[5039]: I0130 13:08:33.305355 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3"} Jan 30 13:08:33 crc kubenswrapper[5039]: I0130 13:08:33.306004 5039 scope.go:117] "RemoveContainer" containerID="26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3" Jan 30 13:08:33 crc kubenswrapper[5039]: I0130 13:08:33.306502 5039 status_manager.go:851] "Failed to get status for pod" podUID="52b110b9-c1bb-4f99-b0a1-56327188c912" pod="openshift-marketplace/certified-operators-prfhj" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-prfhj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:33 crc kubenswrapper[5039]: I0130 13:08:33.306892 5039 status_manager.go:851] "Failed to get status for pod" podUID="f64e1921-5488-46f8-bf3a-af141cd0c277" pod="openshift-marketplace/community-operators-wksws" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wksws\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:33 crc kubenswrapper[5039]: I0130 13:08:33.307152 5039 status_manager.go:851] "Failed to get status for pod" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" pod="openshift-marketplace/redhat-operators-tbppj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tbppj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:33 crc kubenswrapper[5039]: I0130 13:08:33.307320 5039 status_manager.go:851] "Failed to get status for pod" podUID="c79ca838-03cc-4885-969d-5aad41173112" pod="openshift-marketplace/redhat-operators-gx2hg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gx2hg\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:33 crc kubenswrapper[5039]: I0130 13:08:33.307469 5039 status_manager.go:851] "Failed to get status for pod" podUID="66476d2f-ef08-4051-97a8-c2edb46b7004" pod="openshift-marketplace/redhat-marketplace-ccjvb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ccjvb\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:33 crc kubenswrapper[5039]: I0130 13:08:33.307613 5039 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:33 crc kubenswrapper[5039]: I0130 13:08:33.307751 5039 status_manager.go:851] "Failed to get status for pod" podUID="63af1747-5ca2-4c06-89fa-dc040184452d" pod="openshift-marketplace/community-operators-gqxts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gqxts\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:33 crc kubenswrapper[5039]: I0130 13:08:33.307895 5039 status_manager.go:851] "Failed to get status for pod" podUID="5613a050-2fc6-4554-bebe-a8afa71c3815" pod="openshift-marketplace/certified-operators-s5lrd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s5lrd\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:33 crc kubenswrapper[5039]: I0130 13:08:33.308181 5039 status_manager.go:851] "Failed to get status for pod" podUID="80cb63fe-71b1-42e7-ac04-a81c89920b46" pod="openshift-marketplace/redhat-marketplace-759rj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-759rj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:33 crc kubenswrapper[5039]: I0130 13:08:33.308415 5039 status_manager.go:851] "Failed to get status for pod" podUID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:33 crc kubenswrapper[5039]: I0130 13:08:33.620509 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 13:08:33 crc kubenswrapper[5039]: I0130 13:08:33.650624 5039 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 13:08:34 crc kubenswrapper[5039]: I0130 13:08:34.093036 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:08:34 crc kubenswrapper[5039]: I0130 13:08:34.093893 5039 status_manager.go:851] "Failed to get status for pod" podUID="66476d2f-ef08-4051-97a8-c2edb46b7004" pod="openshift-marketplace/redhat-marketplace-ccjvb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ccjvb\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:34 crc kubenswrapper[5039]: I0130 13:08:34.094223 5039 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:34 crc kubenswrapper[5039]: I0130 13:08:34.094428 5039 status_manager.go:851] "Failed to get status for pod" podUID="63af1747-5ca2-4c06-89fa-dc040184452d" pod="openshift-marketplace/community-operators-gqxts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gqxts\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:34 crc kubenswrapper[5039]: I0130 13:08:34.094694 5039 status_manager.go:851] "Failed to get status for pod" podUID="5613a050-2fc6-4554-bebe-a8afa71c3815" pod="openshift-marketplace/certified-operators-s5lrd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s5lrd\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:34 crc kubenswrapper[5039]: I0130 13:08:34.095104 5039 status_manager.go:851] "Failed to get status for pod" podUID="80cb63fe-71b1-42e7-ac04-a81c89920b46" pod="openshift-marketplace/redhat-marketplace-759rj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-759rj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:34 crc kubenswrapper[5039]: I0130 13:08:34.095352 5039 status_manager.go:851] "Failed to get status for pod" podUID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:34 crc kubenswrapper[5039]: I0130 13:08:34.095569 5039 status_manager.go:851] "Failed to get status for pod" podUID="52b110b9-c1bb-4f99-b0a1-56327188c912" pod="openshift-marketplace/certified-operators-prfhj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-prfhj\": 
dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:34 crc kubenswrapper[5039]: I0130 13:08:34.095824 5039 status_manager.go:851] "Failed to get status for pod" podUID="f64e1921-5488-46f8-bf3a-af141cd0c277" pod="openshift-marketplace/community-operators-wksws" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wksws\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:34 crc kubenswrapper[5039]: I0130 13:08:34.096068 5039 status_manager.go:851] "Failed to get status for pod" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" pod="openshift-marketplace/redhat-operators-tbppj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tbppj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:34 crc kubenswrapper[5039]: I0130 13:08:34.096396 5039 status_manager.go:851] "Failed to get status for pod" podUID="c79ca838-03cc-4885-969d-5aad41173112" pod="openshift-marketplace/redhat-operators-gx2hg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gx2hg\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:34 crc kubenswrapper[5039]: I0130 13:08:34.106098 5039 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="63af89bb-1312-470c-90e1-538316685765" Jan 30 13:08:34 crc kubenswrapper[5039]: I0130 13:08:34.106132 5039 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="63af89bb-1312-470c-90e1-538316685765" Jan 30 13:08:34 crc kubenswrapper[5039]: E0130 13:08:34.106642 5039 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:08:34 crc kubenswrapper[5039]: I0130 13:08:34.107194 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:08:34 crc kubenswrapper[5039]: W0130 13:08:34.125473 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-bab463210f425fe967f0650596852ba06c6c6870f424eb8113dcc145294f4384 WatchSource:0}: Error finding container bab463210f425fe967f0650596852ba06c6c6870f424eb8113dcc145294f4384: Status 404 returned error can't find the container with id bab463210f425fe967f0650596852ba06c6c6870f424eb8113dcc145294f4384 Jan 30 13:08:34 crc kubenswrapper[5039]: I0130 13:08:34.319377 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 30 13:08:34 crc kubenswrapper[5039]: I0130 13:08:34.320452 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5fd29609b01d9fc64d21bcdb52277085cb04b167a315096058b6fc7654d09649"} Jan 30 13:08:34 crc kubenswrapper[5039]: I0130 13:08:34.320922 5039 status_manager.go:851] "Failed to get status for pod" podUID="c79ca838-03cc-4885-969d-5aad41173112" pod="openshift-marketplace/redhat-operators-gx2hg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gx2hg\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:34 crc kubenswrapper[5039]: I0130 13:08:34.321321 5039 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:34 crc kubenswrapper[5039]: I0130 13:08:34.321754 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"bab463210f425fe967f0650596852ba06c6c6870f424eb8113dcc145294f4384"} Jan 30 13:08:34 crc kubenswrapper[5039]: I0130 13:08:34.321854 5039 status_manager.go:851] "Failed to get status for pod" podUID="66476d2f-ef08-4051-97a8-c2edb46b7004" pod="openshift-marketplace/redhat-marketplace-ccjvb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ccjvb\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:34 crc kubenswrapper[5039]: I0130 13:08:34.322139 5039 status_manager.go:851] "Failed to get status for pod" podUID="63af1747-5ca2-4c06-89fa-dc040184452d" pod="openshift-marketplace/community-operators-gqxts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gqxts\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:34 crc kubenswrapper[5039]: I0130 13:08:34.322506 5039 status_manager.go:851] "Failed to get status for pod" podUID="5613a050-2fc6-4554-bebe-a8afa71c3815" pod="openshift-marketplace/certified-operators-s5lrd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s5lrd\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:34 crc 
kubenswrapper[5039]: I0130 13:08:34.322737 5039 status_manager.go:851] "Failed to get status for pod" podUID="80cb63fe-71b1-42e7-ac04-a81c89920b46" pod="openshift-marketplace/redhat-marketplace-759rj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-759rj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:34 crc kubenswrapper[5039]: I0130 13:08:34.323028 5039 status_manager.go:851] "Failed to get status for pod" podUID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:34 crc kubenswrapper[5039]: I0130 13:08:34.323250 5039 status_manager.go:851] "Failed to get status for pod" podUID="52b110b9-c1bb-4f99-b0a1-56327188c912" pod="openshift-marketplace/certified-operators-prfhj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-prfhj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:34 crc kubenswrapper[5039]: I0130 13:08:34.323476 5039 status_manager.go:851] "Failed to get status for pod" podUID="f64e1921-5488-46f8-bf3a-af141cd0c277" pod="openshift-marketplace/community-operators-wksws" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wksws\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:34 crc kubenswrapper[5039]: I0130 13:08:34.324004 5039 status_manager.go:851] "Failed to get status for pod" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" pod="openshift-marketplace/redhat-operators-tbppj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tbppj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:35 crc kubenswrapper[5039]: I0130 13:08:35.327564 5039 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="15af13e2f02df3f1ec93992223c2f3ab2e891d38c4bc8de93fb6be4f34e211e6" exitCode=0 Jan 30 13:08:35 crc kubenswrapper[5039]: I0130 13:08:35.327661 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"15af13e2f02df3f1ec93992223c2f3ab2e891d38c4bc8de93fb6be4f34e211e6"} Jan 30 13:08:35 crc kubenswrapper[5039]: I0130 13:08:35.328165 5039 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="63af89bb-1312-470c-90e1-538316685765" Jan 30 13:08:35 crc kubenswrapper[5039]: I0130 13:08:35.328210 5039 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="63af89bb-1312-470c-90e1-538316685765" Jan 30 13:08:35 crc kubenswrapper[5039]: I0130 13:08:35.328654 5039 status_manager.go:851] "Failed to get status for pod" podUID="52b110b9-c1bb-4f99-b0a1-56327188c912" pod="openshift-marketplace/certified-operators-prfhj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-prfhj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:35 crc kubenswrapper[5039]: E0130 13:08:35.328815 5039 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:08:35 crc kubenswrapper[5039]: I0130 13:08:35.329114 5039 status_manager.go:851] "Failed to get status for pod" podUID="f64e1921-5488-46f8-bf3a-af141cd0c277" pod="openshift-marketplace/community-operators-wksws" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wksws\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:35 crc kubenswrapper[5039]: I0130 13:08:35.329650 5039 status_manager.go:851] "Failed to get status for pod" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" pod="openshift-marketplace/redhat-operators-tbppj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tbppj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:35 crc kubenswrapper[5039]: I0130 13:08:35.330211 5039 status_manager.go:851] "Failed to get status for pod" podUID="c79ca838-03cc-4885-969d-5aad41173112" pod="openshift-marketplace/redhat-operators-gx2hg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gx2hg\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:35 crc kubenswrapper[5039]: I0130 13:08:35.330715 5039 status_manager.go:851] "Failed to get status for pod" podUID="66476d2f-ef08-4051-97a8-c2edb46b7004" pod="openshift-marketplace/redhat-marketplace-ccjvb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ccjvb\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:35 crc kubenswrapper[5039]: I0130 13:08:35.331167 5039 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:35 crc kubenswrapper[5039]: I0130 13:08:35.331625 5039 status_manager.go:851] "Failed to get status for pod" podUID="63af1747-5ca2-4c06-89fa-dc040184452d" pod="openshift-marketplace/community-operators-gqxts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gqxts\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:35 crc kubenswrapper[5039]: I0130 13:08:35.331991 5039 status_manager.go:851] "Failed to get status for pod" podUID="5613a050-2fc6-4554-bebe-a8afa71c3815" pod="openshift-marketplace/certified-operators-s5lrd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s5lrd\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:35 crc kubenswrapper[5039]: I0130 13:08:35.332382 5039 status_manager.go:851] "Failed to get status for pod" podUID="80cb63fe-71b1-42e7-ac04-a81c89920b46" pod="openshift-marketplace/redhat-marketplace-759rj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-759rj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:35 crc kubenswrapper[5039]: I0130 13:08:35.332876 5039 status_manager.go:851] "Failed to get status for pod" 
podUID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:36 crc kubenswrapper[5039]: I0130 13:08:36.335055 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c4a7a78d98afcf2eead5576b3bc3cf8cf9b85e970484dadcf220f20e827f7a70"} Jan 30 13:08:36 crc kubenswrapper[5039]: I0130 13:08:36.335424 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8744dd86348f3423384ede60721fa9b3febdf356cf64362bff38533e8ecf823a"} Jan 30 13:08:36 crc kubenswrapper[5039]: I0130 13:08:36.335439 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e038a52df0f4c8d6ee32b55d9f3246dc4d7c01807de8d31f3fceb9579ec2e0f8"} Jan 30 13:08:37 crc kubenswrapper[5039]: I0130 13:08:37.342374 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7bd9716e826e4bc6afa4cd10374336e333909f5ad4f41b6d6effdc363b872412"} Jan 30 13:08:37 crc kubenswrapper[5039]: I0130 13:08:37.914522 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 13:08:38 crc kubenswrapper[5039]: I0130 13:08:38.353327 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1e4a0fc206d21de4d678395d7e2f2e7d6795ba536292e272229bf897ea775895"} Jan 30 13:08:38 crc kubenswrapper[5039]: I0130 13:08:38.689543 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-s5lrd" Jan 30 13:08:38 crc kubenswrapper[5039]: I0130 13:08:38.823761 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wksws" Jan 30 13:08:38 crc kubenswrapper[5039]: I0130 13:08:38.865562 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wksws" Jan 30 13:08:39 crc kubenswrapper[5039]: I0130 13:08:39.046758 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-prfhj" Jan 30 13:08:39 crc kubenswrapper[5039]: I0130 13:08:39.086390 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-prfhj" Jan 30 13:08:39 crc kubenswrapper[5039]: I0130 13:08:39.347659 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gqxts" Jan 30 13:08:39 crc kubenswrapper[5039]: I0130 13:08:39.358317 5039 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="63af89bb-1312-470c-90e1-538316685765" Jan 30 13:08:39 crc kubenswrapper[5039]: I0130 13:08:39.358345 5039 mirror_client.go:130] "Deleting a mirror pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="63af89bb-1312-470c-90e1-538316685765" Jan 30 13:08:39 crc kubenswrapper[5039]: I0130 13:08:39.367279 5039 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:08:39 crc kubenswrapper[5039]: I0130 13:08:39.386438 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gqxts" Jan 30 13:08:40 crc kubenswrapper[5039]: I0130 13:08:40.828966 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ccjvb" Jan 30 13:08:41 crc kubenswrapper[5039]: I0130 13:08:41.225506 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-759rj" Jan 30 13:08:41 crc kubenswrapper[5039]: I0130 13:08:41.956185 5039 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="41e17646-e011-4c40-8ba5-182d3e469a26" Jan 30 13:08:43 crc kubenswrapper[5039]: I0130 13:08:43.620218 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 13:08:43 crc kubenswrapper[5039]: I0130 13:08:43.625516 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 13:08:44 crc kubenswrapper[5039]: I0130 13:08:44.413747 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 13:08:51 crc kubenswrapper[5039]: I0130 13:08:51.986815 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 30 13:08:52 crc kubenswrapper[5039]: I0130 13:08:52.737441 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 13:08:52 crc kubenswrapper[5039]: I0130 13:08:52.881255 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 30 13:08:53 crc kubenswrapper[5039]: I0130 13:08:53.004529 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 30 13:08:53 crc kubenswrapper[5039]: I0130 13:08:53.143396 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 30 13:08:53 crc kubenswrapper[5039]: I0130 13:08:53.162522 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 30 13:08:53 crc kubenswrapper[5039]: I0130 13:08:53.473387 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 30 13:08:53 crc kubenswrapper[5039]: I0130 13:08:53.519659 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 30 13:08:53 crc kubenswrapper[5039]: I0130 13:08:53.542277 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 30 13:08:53 crc kubenswrapper[5039]: I0130 13:08:53.569174 5039 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 30 13:08:53 crc kubenswrapper[5039]: I0130 13:08:53.647984 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 30 13:08:53 crc kubenswrapper[5039]: I0130 13:08:53.669392 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 30 13:08:53 crc kubenswrapper[5039]: I0130 13:08:53.672073 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 30 13:08:53 crc kubenswrapper[5039]: I0130 13:08:53.751370 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 30 13:08:53 crc kubenswrapper[5039]: I0130 13:08:53.905071 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 30 13:08:53 crc kubenswrapper[5039]: I0130 13:08:53.994876 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 30 13:08:54 crc kubenswrapper[5039]: I0130 13:08:54.060067 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 30 13:08:54 crc kubenswrapper[5039]: I0130 13:08:54.081785 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 30 13:08:54 crc kubenswrapper[5039]: I0130 13:08:54.246578 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 30 13:08:54 crc kubenswrapper[5039]: I0130 13:08:54.283806 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 30 13:08:54 crc kubenswrapper[5039]: I0130 13:08:54.325078 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 30 13:08:54 crc kubenswrapper[5039]: I0130 13:08:54.359356 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 30 13:08:54 crc kubenswrapper[5039]: I0130 13:08:54.445930 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 30 13:08:54 crc kubenswrapper[5039]: I0130 13:08:54.490598 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 30 13:08:54 crc kubenswrapper[5039]: I0130 13:08:54.686855 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 30 13:08:54 crc kubenswrapper[5039]: I0130 13:08:54.749456 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 30 13:08:54 crc kubenswrapper[5039]: I0130 13:08:54.833544 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 30 13:08:54 crc kubenswrapper[5039]: I0130 13:08:54.899077 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 30 13:08:55 crc 
kubenswrapper[5039]: I0130 13:08:55.068431 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 30 13:08:55 crc kubenswrapper[5039]: I0130 13:08:55.158722 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 30 13:08:55 crc kubenswrapper[5039]: I0130 13:08:55.194664 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 30 13:08:55 crc kubenswrapper[5039]: I0130 13:08:55.377440 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 30 13:08:55 crc kubenswrapper[5039]: I0130 13:08:55.398507 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 30 13:08:55 crc kubenswrapper[5039]: I0130 13:08:55.469627 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 30 13:08:55 crc kubenswrapper[5039]: I0130 13:08:55.477775 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 30 13:08:55 crc kubenswrapper[5039]: I0130 13:08:55.672176 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 30 13:08:55 crc kubenswrapper[5039]: I0130 13:08:55.678403 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 30 13:08:55 crc kubenswrapper[5039]: I0130 13:08:55.776543 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 30 13:08:55 crc kubenswrapper[5039]: I0130 13:08:55.831481 5039 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 30 13:08:55 crc kubenswrapper[5039]: I0130 13:08:55.846766 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 30 13:08:55 crc kubenswrapper[5039]: I0130 13:08:55.850157 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 30 13:08:55 crc kubenswrapper[5039]: I0130 13:08:55.961322 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 30 13:08:55 crc kubenswrapper[5039]: I0130 13:08:55.961423 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 30 13:08:55 crc kubenswrapper[5039]: I0130 13:08:55.984025 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 30 13:08:56 crc kubenswrapper[5039]: I0130 13:08:56.023899 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 30 13:08:56 crc kubenswrapper[5039]: I0130 13:08:56.046972 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 30 13:08:56 crc kubenswrapper[5039]: I0130 13:08:56.147428 5039 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 30 13:08:56 crc kubenswrapper[5039]: I0130 13:08:56.184547 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 30 13:08:56 crc kubenswrapper[5039]: I0130 13:08:56.185625 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 30 13:08:56 crc kubenswrapper[5039]: I0130 13:08:56.201446 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 30 13:08:56 crc kubenswrapper[5039]: I0130 13:08:56.217545 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 30 13:08:56 crc kubenswrapper[5039]: I0130 13:08:56.304877 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 30 13:08:56 crc kubenswrapper[5039]: I0130 13:08:56.324268 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 30 13:08:56 crc kubenswrapper[5039]: I0130 13:08:56.330551 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 30 13:08:56 crc kubenswrapper[5039]: I0130 13:08:56.363872 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 30 13:08:56 crc kubenswrapper[5039]: I0130 13:08:56.376875 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 30 13:08:56 crc kubenswrapper[5039]: I0130 13:08:56.402873 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 30 13:08:56 crc kubenswrapper[5039]: I0130 13:08:56.438760 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 30 13:08:56 crc kubenswrapper[5039]: I0130 13:08:56.469408 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 30 13:08:56 crc kubenswrapper[5039]: I0130 13:08:56.550578 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 30 13:08:56 crc kubenswrapper[5039]: I0130 13:08:56.564486 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 30 13:08:56 crc kubenswrapper[5039]: I0130 13:08:56.575118 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 30 13:08:56 crc kubenswrapper[5039]: I0130 13:08:56.592943 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 30 13:08:56 crc kubenswrapper[5039]: I0130 13:08:56.672834 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 30 13:08:56 crc kubenswrapper[5039]: I0130 13:08:56.694518 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 30 13:08:56 crc kubenswrapper[5039]: 
I0130 13:08:56.695835 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 30 13:08:56 crc kubenswrapper[5039]: I0130 13:08:56.704823 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 30 13:08:56 crc kubenswrapper[5039]: I0130 13:08:56.735497 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 30 13:08:57 crc kubenswrapper[5039]: I0130 13:08:57.007688 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 30 13:08:57 crc kubenswrapper[5039]: I0130 13:08:57.038371 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 30 13:08:57 crc kubenswrapper[5039]: I0130 13:08:57.045481 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 30 13:08:57 crc kubenswrapper[5039]: I0130 13:08:57.061075 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 30 13:08:57 crc kubenswrapper[5039]: I0130 13:08:57.079813 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 30 13:08:57 crc kubenswrapper[5039]: I0130 13:08:57.083120 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 30 13:08:57 crc kubenswrapper[5039]: I0130 13:08:57.087824 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 30 13:08:57 crc kubenswrapper[5039]: I0130 13:08:57.092441 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 30 13:08:57 crc kubenswrapper[5039]: I0130 13:08:57.105165 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 30 13:08:57 crc kubenswrapper[5039]: I0130 13:08:57.137340 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 30 13:08:57 crc kubenswrapper[5039]: I0130 13:08:57.210924 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 13:08:57 crc kubenswrapper[5039]: I0130 13:08:57.223501 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 30 13:08:57 crc kubenswrapper[5039]: I0130 13:08:57.227662 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 30 13:08:57 crc kubenswrapper[5039]: I0130 13:08:57.321720 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 30 13:08:57 crc kubenswrapper[5039]: I0130 13:08:57.470828 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 30 13:08:57 crc kubenswrapper[5039]: I0130 13:08:57.486837 5039 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 30 13:08:57 crc kubenswrapper[5039]: I0130 13:08:57.527385 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 30 13:08:57 crc kubenswrapper[5039]: I0130 13:08:57.591200 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 30 13:08:57 crc kubenswrapper[5039]: I0130 13:08:57.627133 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 30 13:08:57 crc kubenswrapper[5039]: I0130 13:08:57.700552 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 30 13:08:57 crc kubenswrapper[5039]: I0130 13:08:57.719074 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 30 13:08:57 crc kubenswrapper[5039]: I0130 13:08:57.769423 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 30 13:08:57 crc kubenswrapper[5039]: I0130 13:08:57.817196 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 30 13:08:57 crc kubenswrapper[5039]: I0130 13:08:57.850773 5039 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 30 13:08:57 crc kubenswrapper[5039]: I0130 13:08:57.877542 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 30 13:08:57 crc kubenswrapper[5039]: I0130 13:08:57.968323 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.003086 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.025177 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.098973 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.135489 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.147416 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.210264 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.214197 5039 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.214876 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gqxts" podStartSLOduration=33.491922154 podStartE2EDuration="2m30.214864057s" podCreationTimestamp="2026-01-30 
13:06:28 +0000 UTC" firstStartedPulling="2026-01-30 13:06:31.319828682 +0000 UTC m=+155.980509909" lastFinishedPulling="2026-01-30 13:08:28.042770565 +0000 UTC m=+272.703451812" observedRunningTime="2026-01-30 13:08:42.037588352 +0000 UTC m=+286.698269599" watchObservedRunningTime="2026-01-30 13:08:58.214864057 +0000 UTC m=+302.875545294" Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.215099 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-759rj" podStartSLOduration=33.168356155 podStartE2EDuration="2m28.215094384s" podCreationTimestamp="2026-01-30 13:06:30 +0000 UTC" firstStartedPulling="2026-01-30 13:06:32.363911864 +0000 UTC m=+157.024593091" lastFinishedPulling="2026-01-30 13:08:27.410650093 +0000 UTC m=+272.071331320" observedRunningTime="2026-01-30 13:08:42.065352877 +0000 UTC m=+286.726034104" watchObservedRunningTime="2026-01-30 13:08:58.215094384 +0000 UTC m=+302.875775611" Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.215327 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wksws" podStartSLOduration=33.373883254 podStartE2EDuration="2m30.215324551s" podCreationTimestamp="2026-01-30 13:06:28 +0000 UTC" firstStartedPulling="2026-01-30 13:06:30.265876372 +0000 UTC m=+154.926557599" lastFinishedPulling="2026-01-30 13:08:27.107317669 +0000 UTC m=+271.767998896" observedRunningTime="2026-01-30 13:08:41.908646561 +0000 UTC m=+286.569327798" watchObservedRunningTime="2026-01-30 13:08:58.215324551 +0000 UTC m=+302.876005778" Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.215486 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gx2hg" podStartSLOduration=35.614622135 podStartE2EDuration="2m27.215483496s" podCreationTimestamp="2026-01-30 13:06:31 +0000 UTC" firstStartedPulling="2026-01-30 13:06:33.433245314 +0000 UTC m=+158.093926541" lastFinishedPulling="2026-01-30 13:08:25.034106675 +0000 UTC m=+269.694787902" observedRunningTime="2026-01-30 13:08:41.978615219 +0000 UTC m=+286.639296456" watchObservedRunningTime="2026-01-30 13:08:58.215483496 +0000 UTC m=+302.876164713" Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.216690 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ccjvb" podStartSLOduration=31.941258171 podStartE2EDuration="2m28.216685692s" podCreationTimestamp="2026-01-30 13:06:30 +0000 UTC" firstStartedPulling="2026-01-30 13:06:31.300330043 +0000 UTC m=+155.961011270" lastFinishedPulling="2026-01-30 13:08:27.575757564 +0000 UTC m=+272.236438791" observedRunningTime="2026-01-30 13:08:42.021841423 +0000 UTC m=+286.682522670" watchObservedRunningTime="2026-01-30 13:08:58.216685692 +0000 UTC m=+302.877366909" Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.217024 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-prfhj" podStartSLOduration=32.805161915 podStartE2EDuration="2m30.217006162s" podCreationTimestamp="2026-01-30 13:06:28 +0000 UTC" firstStartedPulling="2026-01-30 13:06:30.261922127 +0000 UTC m=+154.922603354" lastFinishedPulling="2026-01-30 13:08:27.673766374 +0000 UTC m=+272.334447601" observedRunningTime="2026-01-30 13:08:41.893098528 +0000 UTC m=+286.553779785" watchObservedRunningTime="2026-01-30 13:08:58.217006162 +0000 UTC m=+302.877687389" Jan 30 13:08:58 crc kubenswrapper[5039]: 
I0130 13:08:58.217087 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-s5lrd" podStartSLOduration=36.040485804 podStartE2EDuration="2m30.217084255s" podCreationTimestamp="2026-01-30 13:06:28 +0000 UTC" firstStartedPulling="2026-01-30 13:06:30.276829826 +0000 UTC m=+154.937511053" lastFinishedPulling="2026-01-30 13:08:24.453428277 +0000 UTC m=+269.114109504" observedRunningTime="2026-01-30 13:08:42.049595607 +0000 UTC m=+286.710276834" watchObservedRunningTime="2026-01-30 13:08:58.217084255 +0000 UTC m=+302.877765482" Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.218409 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tbppj","openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.218450 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.218765 5039 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="63af89bb-1312-470c-90e1-538316685765" Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.218800 5039 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="63af89bb-1312-470c-90e1-538316685765" Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.218783 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.228114 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.235271 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=19.235255607 podStartE2EDuration="19.235255607s" podCreationTimestamp="2026-01-30 13:08:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:08:58.234161834 +0000 UTC m=+302.894843091" watchObservedRunningTime="2026-01-30 13:08:58.235255607 +0000 UTC m=+302.895936834" Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.268852 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.278583 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.291680 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.361774 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.368968 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.385893 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.421528 5039 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"etcd-operator-config" Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.435223 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.499694 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.561174 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.602294 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.802856 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.886495 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.891761 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.905000 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.943923 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.978443 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 30 13:08:59 crc kubenswrapper[5039]: I0130 13:08:59.016955 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 30 13:08:59 crc kubenswrapper[5039]: I0130 13:08:59.061352 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 30 13:08:59 crc kubenswrapper[5039]: I0130 13:08:59.061476 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 30 13:08:59 crc kubenswrapper[5039]: I0130 13:08:59.072622 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 30 13:08:59 crc kubenswrapper[5039]: I0130 13:08:59.107330 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:08:59 crc kubenswrapper[5039]: I0130 13:08:59.107398 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:08:59 crc kubenswrapper[5039]: I0130 13:08:59.113419 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:08:59 crc kubenswrapper[5039]: I0130 13:08:59.181126 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 30 13:08:59 crc kubenswrapper[5039]: I0130 13:08:59.247918 5039 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-service-ca-operator"/"serving-cert" Jan 30 13:08:59 crc kubenswrapper[5039]: I0130 13:08:59.266211 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 30 13:08:59 crc kubenswrapper[5039]: I0130 13:08:59.295558 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 30 13:08:59 crc kubenswrapper[5039]: I0130 13:08:59.308452 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 30 13:08:59 crc kubenswrapper[5039]: I0130 13:08:59.356425 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 30 13:08:59 crc kubenswrapper[5039]: I0130 13:08:59.378369 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 30 13:08:59 crc kubenswrapper[5039]: I0130 13:08:59.472306 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 30 13:08:59 crc kubenswrapper[5039]: I0130 13:08:59.502789 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:08:59 crc kubenswrapper[5039]: I0130 13:08:59.556400 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 30 13:08:59 crc kubenswrapper[5039]: I0130 13:08:59.636342 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 13:08:59 crc kubenswrapper[5039]: I0130 13:08:59.787600 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 30 13:08:59 crc kubenswrapper[5039]: I0130 13:08:59.883674 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 30 13:08:59 crc kubenswrapper[5039]: I0130 13:08:59.980189 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 30 13:09:00 crc kubenswrapper[5039]: I0130 13:09:00.008268 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 30 13:09:00 crc kubenswrapper[5039]: I0130 13:09:00.035126 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 30 13:09:00 crc kubenswrapper[5039]: I0130 13:09:00.056314 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 30 13:09:00 crc kubenswrapper[5039]: I0130 13:09:00.120189 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" path="/var/lib/kubelet/pods/517c44d7-5a31-4d7c-9918-9e051f06902c/volumes" Jan 30 13:09:00 crc kubenswrapper[5039]: I0130 13:09:00.145457 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 30 13:09:00 crc kubenswrapper[5039]: I0130 13:09:00.166167 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 30 13:09:00 crc kubenswrapper[5039]: I0130 13:09:00.223368 5039 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 30 13:09:00 crc kubenswrapper[5039]: I0130 13:09:00.226728 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 30 13:09:00 crc kubenswrapper[5039]: I0130 13:09:00.412258 5039 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 30 13:09:00 crc kubenswrapper[5039]: I0130 13:09:00.425716 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 13:09:00 crc kubenswrapper[5039]: I0130 13:09:00.515828 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 30 13:09:00 crc kubenswrapper[5039]: I0130 13:09:00.541188 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 30 13:09:00 crc kubenswrapper[5039]: I0130 13:09:00.602969 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 30 13:09:00 crc kubenswrapper[5039]: I0130 13:09:00.669235 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 30 13:09:00 crc kubenswrapper[5039]: I0130 13:09:00.788818 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 30 13:09:00 crc kubenswrapper[5039]: I0130 13:09:00.997860 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 30 13:09:01 crc kubenswrapper[5039]: I0130 13:09:01.044413 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 30 13:09:01 crc kubenswrapper[5039]: I0130 13:09:01.092638 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 30 13:09:01 crc kubenswrapper[5039]: I0130 13:09:01.193382 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 30 13:09:01 crc kubenswrapper[5039]: I0130 13:09:01.238237 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 30 13:09:01 crc kubenswrapper[5039]: I0130 13:09:01.256228 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 30 13:09:01 crc kubenswrapper[5039]: I0130 13:09:01.258898 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 30 13:09:01 crc kubenswrapper[5039]: I0130 13:09:01.369370 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 30 13:09:01 crc kubenswrapper[5039]: I0130 13:09:01.505169 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 30 13:09:01 crc kubenswrapper[5039]: I0130 13:09:01.516319 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 30 13:09:01 crc kubenswrapper[5039]: I0130 13:09:01.523349 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 30 
13:09:01 crc kubenswrapper[5039]: I0130 13:09:01.547296 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 30 13:09:01 crc kubenswrapper[5039]: I0130 13:09:01.683650 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 30 13:09:01 crc kubenswrapper[5039]: I0130 13:09:01.697278 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 30 13:09:01 crc kubenswrapper[5039]: I0130 13:09:01.796342 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 30 13:09:01 crc kubenswrapper[5039]: I0130 13:09:01.805440 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 30 13:09:01 crc kubenswrapper[5039]: I0130 13:09:01.841144 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 30 13:09:01 crc kubenswrapper[5039]: I0130 13:09:01.854572 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 30 13:09:01 crc kubenswrapper[5039]: I0130 13:09:01.893088 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 30 13:09:01 crc kubenswrapper[5039]: I0130 13:09:01.903024 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 30 13:09:02 crc kubenswrapper[5039]: I0130 13:09:02.096507 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 30 13:09:02 crc kubenswrapper[5039]: I0130 13:09:02.157698 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 30 13:09:02 crc kubenswrapper[5039]: I0130 13:09:02.251776 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 13:09:02 crc kubenswrapper[5039]: I0130 13:09:02.440214 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 30 13:09:02 crc kubenswrapper[5039]: I0130 13:09:02.493523 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 30 13:09:02 crc kubenswrapper[5039]: I0130 13:09:02.494087 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 30 13:09:02 crc kubenswrapper[5039]: I0130 13:09:02.555911 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 30 13:09:02 crc kubenswrapper[5039]: I0130 13:09:02.575899 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 30 13:09:02 crc kubenswrapper[5039]: I0130 13:09:02.623389 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 30 13:09:02 crc kubenswrapper[5039]: I0130 13:09:02.630586 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 30 13:09:02 crc kubenswrapper[5039]: I0130 
13:09:02.651997 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 30 13:09:02 crc kubenswrapper[5039]: I0130 13:09:02.668122 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 30 13:09:02 crc kubenswrapper[5039]: I0130 13:09:02.686677 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 30 13:09:02 crc kubenswrapper[5039]: I0130 13:09:02.700606 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 30 13:09:02 crc kubenswrapper[5039]: I0130 13:09:02.710899 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 30 13:09:02 crc kubenswrapper[5039]: I0130 13:09:02.877710 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 30 13:09:02 crc kubenswrapper[5039]: I0130 13:09:02.886173 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 30 13:09:02 crc kubenswrapper[5039]: I0130 13:09:02.900584 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 30 13:09:02 crc kubenswrapper[5039]: I0130 13:09:02.930310 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 30 13:09:02 crc kubenswrapper[5039]: I0130 13:09:02.962150 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 30 13:09:02 crc kubenswrapper[5039]: I0130 13:09:02.962788 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 30 13:09:02 crc kubenswrapper[5039]: I0130 13:09:02.998027 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 30 13:09:03 crc kubenswrapper[5039]: I0130 13:09:03.119342 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 30 13:09:03 crc kubenswrapper[5039]: I0130 13:09:03.158446 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 30 13:09:03 crc kubenswrapper[5039]: I0130 13:09:03.159617 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 30 13:09:03 crc kubenswrapper[5039]: I0130 13:09:03.306139 5039 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 13:09:03 crc kubenswrapper[5039]: I0130 13:09:03.306379 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://ea76b6c351427243f41c3b84398d025204578ecbb0c3e7f25e9e08d4a0a5d765" gracePeriod=5 Jan 30 13:09:03 crc kubenswrapper[5039]: I0130 13:09:03.402978 5039 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-service-ca"/"signing-key" Jan 30 13:09:03 crc kubenswrapper[5039]: I0130 13:09:03.455156 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 30 13:09:03 crc kubenswrapper[5039]: I0130 13:09:03.471322 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 30 13:09:03 crc kubenswrapper[5039]: I0130 13:09:03.479350 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 30 13:09:03 crc kubenswrapper[5039]: I0130 13:09:03.509721 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 30 13:09:03 crc kubenswrapper[5039]: I0130 13:09:03.738476 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 30 13:09:03 crc kubenswrapper[5039]: I0130 13:09:03.915441 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 30 13:09:03 crc kubenswrapper[5039]: I0130 13:09:03.977304 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 30 13:09:03 crc kubenswrapper[5039]: I0130 13:09:03.981265 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 30 13:09:03 crc kubenswrapper[5039]: I0130 13:09:03.987184 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 30 13:09:03 crc kubenswrapper[5039]: I0130 13:09:03.989681 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 30 13:09:04 crc kubenswrapper[5039]: I0130 13:09:04.101182 5039 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 30 13:09:04 crc kubenswrapper[5039]: I0130 13:09:04.121984 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 30 13:09:04 crc kubenswrapper[5039]: I0130 13:09:04.189100 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 30 13:09:04 crc kubenswrapper[5039]: I0130 13:09:04.196200 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 30 13:09:04 crc kubenswrapper[5039]: I0130 13:09:04.264608 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 30 13:09:04 crc kubenswrapper[5039]: I0130 13:09:04.422599 5039 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 30 13:09:04 crc kubenswrapper[5039]: I0130 13:09:04.428165 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 30 13:09:04 crc kubenswrapper[5039]: I0130 13:09:04.548078 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 30 13:09:04 crc kubenswrapper[5039]: I0130 13:09:04.591560 5039 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 30 13:09:04 crc kubenswrapper[5039]: I0130 13:09:04.606999 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 30 13:09:04 crc kubenswrapper[5039]: I0130 13:09:04.647704 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 30 13:09:04 crc kubenswrapper[5039]: I0130 13:09:04.721056 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 30 13:09:04 crc kubenswrapper[5039]: I0130 13:09:04.838247 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 30 13:09:04 crc kubenswrapper[5039]: I0130 13:09:04.908258 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 30 13:09:04 crc kubenswrapper[5039]: I0130 13:09:04.968341 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 30 13:09:05 crc kubenswrapper[5039]: I0130 13:09:05.001066 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 30 13:09:05 crc kubenswrapper[5039]: I0130 13:09:05.036463 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 30 13:09:05 crc kubenswrapper[5039]: I0130 13:09:05.200383 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 30 13:09:05 crc kubenswrapper[5039]: I0130 13:09:05.288685 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 30 13:09:05 crc kubenswrapper[5039]: I0130 13:09:05.391936 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 30 13:09:05 crc kubenswrapper[5039]: I0130 13:09:05.514216 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 30 13:09:05 crc kubenswrapper[5039]: I0130 13:09:05.534316 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 30 13:09:05 crc kubenswrapper[5039]: I0130 13:09:05.614064 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 30 13:09:05 crc kubenswrapper[5039]: I0130 13:09:05.709438 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 30 13:09:05 crc kubenswrapper[5039]: I0130 13:09:05.755310 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 30 13:09:06 crc kubenswrapper[5039]: I0130 13:09:06.031831 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 30 13:09:06 crc kubenswrapper[5039]: I0130 13:09:06.103439 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 30 13:09:06 crc kubenswrapper[5039]: I0130 13:09:06.105468 5039 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 30 13:09:06 crc kubenswrapper[5039]: I0130 13:09:06.350406 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 30 13:09:06 crc kubenswrapper[5039]: I0130 13:09:06.359350 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 30 13:09:06 crc kubenswrapper[5039]: I0130 13:09:06.598842 5039 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 30 13:09:06 crc kubenswrapper[5039]: I0130 13:09:06.645769 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 30 13:09:06 crc kubenswrapper[5039]: I0130 13:09:06.705557 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 30 13:09:06 crc kubenswrapper[5039]: I0130 13:09:06.805049 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 30 13:09:06 crc kubenswrapper[5039]: I0130 13:09:06.830635 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 30 13:09:06 crc kubenswrapper[5039]: I0130 13:09:06.901912 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 30 13:09:06 crc kubenswrapper[5039]: I0130 13:09:06.924523 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 30 13:09:07 crc kubenswrapper[5039]: I0130 13:09:07.043490 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 30 13:09:07 crc kubenswrapper[5039]: I0130 13:09:07.513672 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 30 13:09:07 crc kubenswrapper[5039]: I0130 13:09:07.709761 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 30 13:09:08 crc kubenswrapper[5039]: I0130 13:09:08.336246 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 30 13:09:08 crc kubenswrapper[5039]: I0130 13:09:08.544980 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 30 13:09:08 crc kubenswrapper[5039]: I0130 13:09:08.545301 5039 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="ea76b6c351427243f41c3b84398d025204578ecbb0c3e7f25e9e08d4a0a5d765" exitCode=137 Jan 30 13:09:08 crc kubenswrapper[5039]: I0130 13:09:08.871811 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 30 13:09:08 crc kubenswrapper[5039]: I0130 13:09:08.872204 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:09:09 crc kubenswrapper[5039]: I0130 13:09:09.042630 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 13:09:09 crc kubenswrapper[5039]: I0130 13:09:09.042762 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:09:09 crc kubenswrapper[5039]: I0130 13:09:09.042778 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 13:09:09 crc kubenswrapper[5039]: I0130 13:09:09.042807 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:09:09 crc kubenswrapper[5039]: I0130 13:09:09.042838 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 13:09:09 crc kubenswrapper[5039]: I0130 13:09:09.042886 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 13:09:09 crc kubenswrapper[5039]: I0130 13:09:09.042911 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:09:09 crc kubenswrapper[5039]: I0130 13:09:09.042926 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 13:09:09 crc kubenswrapper[5039]: I0130 13:09:09.042982 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:09:09 crc kubenswrapper[5039]: I0130 13:09:09.043627 5039 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 30 13:09:09 crc kubenswrapper[5039]: I0130 13:09:09.043664 5039 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 30 13:09:09 crc kubenswrapper[5039]: I0130 13:09:09.043682 5039 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 30 13:09:09 crc kubenswrapper[5039]: I0130 13:09:09.043699 5039 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 30 13:09:09 crc kubenswrapper[5039]: I0130 13:09:09.050906 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:09:09 crc kubenswrapper[5039]: I0130 13:09:09.116953 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 30 13:09:09 crc kubenswrapper[5039]: I0130 13:09:09.146238 5039 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 30 13:09:09 crc kubenswrapper[5039]: I0130 13:09:09.556419 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 30 13:09:09 crc kubenswrapper[5039]: I0130 13:09:09.556525 5039 scope.go:117] "RemoveContainer" containerID="ea76b6c351427243f41c3b84398d025204578ecbb0c3e7f25e9e08d4a0a5d765" Jan 30 13:09:09 crc kubenswrapper[5039]: I0130 13:09:09.556574 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:09:09 crc kubenswrapper[5039]: E0130 13:09:09.630510 5039 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-f7d821e9d389729034f11db8261116cd37692fd917b9e52ad266a78f0cfaa655\": RecentStats: unable to find data in memory cache]" Jan 30 13:09:10 crc kubenswrapper[5039]: I0130 13:09:10.105917 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 30 13:09:26 crc kubenswrapper[5039]: I0130 13:09:26.662965 5039 generic.go:334] "Generic (PLEG): container finished" podID="501d1ad0-71ea-4bef-8c89-8a68f523e6ec" containerID="c5f8ce8c6ccde8cd3dd1fc817d67a48786ad0a9b3385ae6a7b6fef0349ef5d8c" exitCode=0 Jan 30 13:09:26 crc kubenswrapper[5039]: I0130 13:09:26.663063 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gp9qj" event={"ID":"501d1ad0-71ea-4bef-8c89-8a68f523e6ec","Type":"ContainerDied","Data":"c5f8ce8c6ccde8cd3dd1fc817d67a48786ad0a9b3385ae6a7b6fef0349ef5d8c"} Jan 30 13:09:26 crc kubenswrapper[5039]: I0130 13:09:26.664703 5039 scope.go:117] "RemoveContainer" containerID="c5f8ce8c6ccde8cd3dd1fc817d67a48786ad0a9b3385ae6a7b6fef0349ef5d8c" Jan 30 13:09:27 crc kubenswrapper[5039]: I0130 13:09:27.669477 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gp9qj" event={"ID":"501d1ad0-71ea-4bef-8c89-8a68f523e6ec","Type":"ContainerStarted","Data":"f9dafde4e921fdba2409668a3afa536a950b7ce53b96f55d6569f191b9b697ed"} Jan 30 13:09:27 crc kubenswrapper[5039]: I0130 13:09:27.670105 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-gp9qj" Jan 30 13:09:27 crc kubenswrapper[5039]: I0130 13:09:27.671241 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-gp9qj" Jan 30 13:09:28 crc kubenswrapper[5039]: I0130 13:09:28.082938 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 30 13:09:32 crc kubenswrapper[5039]: I0130 13:09:32.629538 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cj57h"] Jan 30 13:09:32 crc kubenswrapper[5039]: I0130 13:09:32.630532 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-cj57h" podUID="2834d334-6df4-46d7-afc6-390cfdcfb22f" containerName="controller-manager" containerID="cri-o://b564b8319425726b3799b26323853d2599c914d06f498bf9879ef2cf07e8324a" gracePeriod=30 Jan 30 13:09:32 crc kubenswrapper[5039]: I0130 13:09:32.708593 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kmjcv"] Jan 30 13:09:32 crc kubenswrapper[5039]: I0130 13:09:32.708791 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kmjcv" podUID="bd5d4606-2412-4538-8745-dbab7d52cde9" containerName="route-controller-manager" 
containerID="cri-o://dc76f588451d4c44bb67a6ac894b0e8f836caed353d4c0c33eafa14a4dfa1328" gracePeriod=30 Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.136509 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-cj57h" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.151374 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2834d334-6df4-46d7-afc6-390cfdcfb22f-proxy-ca-bundles\") pod \"2834d334-6df4-46d7-afc6-390cfdcfb22f\" (UID: \"2834d334-6df4-46d7-afc6-390cfdcfb22f\") " Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.151413 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2834d334-6df4-46d7-afc6-390cfdcfb22f-config\") pod \"2834d334-6df4-46d7-afc6-390cfdcfb22f\" (UID: \"2834d334-6df4-46d7-afc6-390cfdcfb22f\") " Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.151510 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2834d334-6df4-46d7-afc6-390cfdcfb22f-serving-cert\") pod \"2834d334-6df4-46d7-afc6-390cfdcfb22f\" (UID: \"2834d334-6df4-46d7-afc6-390cfdcfb22f\") " Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.151551 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2834d334-6df4-46d7-afc6-390cfdcfb22f-client-ca\") pod \"2834d334-6df4-46d7-afc6-390cfdcfb22f\" (UID: \"2834d334-6df4-46d7-afc6-390cfdcfb22f\") " Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.151569 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxsvw\" (UniqueName: \"kubernetes.io/projected/2834d334-6df4-46d7-afc6-390cfdcfb22f-kube-api-access-xxsvw\") pod \"2834d334-6df4-46d7-afc6-390cfdcfb22f\" (UID: \"2834d334-6df4-46d7-afc6-390cfdcfb22f\") " Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.152300 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2834d334-6df4-46d7-afc6-390cfdcfb22f-client-ca" (OuterVolumeSpecName: "client-ca") pod "2834d334-6df4-46d7-afc6-390cfdcfb22f" (UID: "2834d334-6df4-46d7-afc6-390cfdcfb22f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.152390 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2834d334-6df4-46d7-afc6-390cfdcfb22f-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "2834d334-6df4-46d7-afc6-390cfdcfb22f" (UID: "2834d334-6df4-46d7-afc6-390cfdcfb22f"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.152981 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2834d334-6df4-46d7-afc6-390cfdcfb22f-config" (OuterVolumeSpecName: "config") pod "2834d334-6df4-46d7-afc6-390cfdcfb22f" (UID: "2834d334-6df4-46d7-afc6-390cfdcfb22f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.159098 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2834d334-6df4-46d7-afc6-390cfdcfb22f-kube-api-access-xxsvw" (OuterVolumeSpecName: "kube-api-access-xxsvw") pod "2834d334-6df4-46d7-afc6-390cfdcfb22f" (UID: "2834d334-6df4-46d7-afc6-390cfdcfb22f"). InnerVolumeSpecName "kube-api-access-xxsvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.162241 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2834d334-6df4-46d7-afc6-390cfdcfb22f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2834d334-6df4-46d7-afc6-390cfdcfb22f" (UID: "2834d334-6df4-46d7-afc6-390cfdcfb22f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.202461 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kmjcv" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.252639 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd5d4606-2412-4538-8745-dbab7d52cde9-config\") pod \"bd5d4606-2412-4538-8745-dbab7d52cde9\" (UID: \"bd5d4606-2412-4538-8745-dbab7d52cde9\") " Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.252701 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5g7q8\" (UniqueName: \"kubernetes.io/projected/bd5d4606-2412-4538-8745-dbab7d52cde9-kube-api-access-5g7q8\") pod \"bd5d4606-2412-4538-8745-dbab7d52cde9\" (UID: \"bd5d4606-2412-4538-8745-dbab7d52cde9\") " Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.252768 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bd5d4606-2412-4538-8745-dbab7d52cde9-client-ca\") pod \"bd5d4606-2412-4538-8745-dbab7d52cde9\" (UID: \"bd5d4606-2412-4538-8745-dbab7d52cde9\") " Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.253649 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd5d4606-2412-4538-8745-dbab7d52cde9-config" (OuterVolumeSpecName: "config") pod "bd5d4606-2412-4538-8745-dbab7d52cde9" (UID: "bd5d4606-2412-4538-8745-dbab7d52cde9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.253669 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd5d4606-2412-4538-8745-dbab7d52cde9-client-ca" (OuterVolumeSpecName: "client-ca") pod "bd5d4606-2412-4538-8745-dbab7d52cde9" (UID: "bd5d4606-2412-4538-8745-dbab7d52cde9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.253737 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd5d4606-2412-4538-8745-dbab7d52cde9-serving-cert\") pod \"bd5d4606-2412-4538-8745-dbab7d52cde9\" (UID: \"bd5d4606-2412-4538-8745-dbab7d52cde9\") " Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.254159 5039 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2834d334-6df4-46d7-afc6-390cfdcfb22f-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.254181 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd5d4606-2412-4538-8745-dbab7d52cde9-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.254195 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxsvw\" (UniqueName: \"kubernetes.io/projected/2834d334-6df4-46d7-afc6-390cfdcfb22f-kube-api-access-xxsvw\") on node \"crc\" DevicePath \"\"" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.254208 5039 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2834d334-6df4-46d7-afc6-390cfdcfb22f-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.254220 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2834d334-6df4-46d7-afc6-390cfdcfb22f-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.254232 5039 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bd5d4606-2412-4538-8745-dbab7d52cde9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.254243 5039 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2834d334-6df4-46d7-afc6-390cfdcfb22f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.258539 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd5d4606-2412-4538-8745-dbab7d52cde9-kube-api-access-5g7q8" (OuterVolumeSpecName: "kube-api-access-5g7q8") pod "bd5d4606-2412-4538-8745-dbab7d52cde9" (UID: "bd5d4606-2412-4538-8745-dbab7d52cde9"). InnerVolumeSpecName "kube-api-access-5g7q8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.259160 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd5d4606-2412-4538-8745-dbab7d52cde9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bd5d4606-2412-4538-8745-dbab7d52cde9" (UID: "bd5d4606-2412-4538-8745-dbab7d52cde9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.356157 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5g7q8\" (UniqueName: \"kubernetes.io/projected/bd5d4606-2412-4538-8745-dbab7d52cde9-kube-api-access-5g7q8\") on node \"crc\" DevicePath \"\"" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.356190 5039 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd5d4606-2412-4538-8745-dbab7d52cde9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.669823 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-845c54c956-ns4g2"] Jan 30 13:09:33 crc kubenswrapper[5039]: E0130 13:09:33.670153 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd5d4606-2412-4538-8745-dbab7d52cde9" containerName="route-controller-manager" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.670170 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd5d4606-2412-4538-8745-dbab7d52cde9" containerName="route-controller-manager" Jan 30 13:09:33 crc kubenswrapper[5039]: E0130 13:09:33.670184 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2834d334-6df4-46d7-afc6-390cfdcfb22f" containerName="controller-manager" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.670192 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="2834d334-6df4-46d7-afc6-390cfdcfb22f" containerName="controller-manager" Jan 30 13:09:33 crc kubenswrapper[5039]: E0130 13:09:33.670203 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.670212 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 30 13:09:33 crc kubenswrapper[5039]: E0130 13:09:33.670223 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" containerName="installer" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.670230 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" containerName="installer" Jan 30 13:09:33 crc kubenswrapper[5039]: E0130 13:09:33.670242 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" containerName="extract-utilities" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.670250 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" containerName="extract-utilities" Jan 30 13:09:33 crc kubenswrapper[5039]: E0130 13:09:33.670259 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" containerName="registry-server" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.670268 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" containerName="registry-server" Jan 30 13:09:33 crc kubenswrapper[5039]: E0130 13:09:33.670283 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" containerName="extract-content" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.670292 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" 
containerName="extract-content" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.670447 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="2834d334-6df4-46d7-afc6-390cfdcfb22f" containerName="controller-manager" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.670465 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" containerName="installer" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.670474 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd5d4606-2412-4538-8745-dbab7d52cde9" containerName="route-controller-manager" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.670483 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.670492 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" containerName="registry-server" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.671053 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-845c54c956-ns4g2" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.681332 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-845c54c956-ns4g2"] Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.724905 5039 generic.go:334] "Generic (PLEG): container finished" podID="bd5d4606-2412-4538-8745-dbab7d52cde9" containerID="dc76f588451d4c44bb67a6ac894b0e8f836caed353d4c0c33eafa14a4dfa1328" exitCode=0 Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.724987 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kmjcv" event={"ID":"bd5d4606-2412-4538-8745-dbab7d52cde9","Type":"ContainerDied","Data":"dc76f588451d4c44bb67a6ac894b0e8f836caed353d4c0c33eafa14a4dfa1328"} Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.725047 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kmjcv" event={"ID":"bd5d4606-2412-4538-8745-dbab7d52cde9","Type":"ContainerDied","Data":"d60fc3b8d8ed24515335919a12303771c5bf7a63a5e1dd33ab85006cd1be0e0c"} Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.725073 5039 scope.go:117] "RemoveContainer" containerID="dc76f588451d4c44bb67a6ac894b0e8f836caed353d4c0c33eafa14a4dfa1328" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.725592 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kmjcv" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.735639 5039 generic.go:334] "Generic (PLEG): container finished" podID="2834d334-6df4-46d7-afc6-390cfdcfb22f" containerID="b564b8319425726b3799b26323853d2599c914d06f498bf9879ef2cf07e8324a" exitCode=0 Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.736250 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-cj57h" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.736298 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-cj57h" event={"ID":"2834d334-6df4-46d7-afc6-390cfdcfb22f","Type":"ContainerDied","Data":"b564b8319425726b3799b26323853d2599c914d06f498bf9879ef2cf07e8324a"} Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.739330 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-cj57h" event={"ID":"2834d334-6df4-46d7-afc6-390cfdcfb22f","Type":"ContainerDied","Data":"c1989ba7ea2f4b8b7a01d3ddedfb906d00ef966d8777591dbcf3cc6d99cf44c4"} Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.743964 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cb7544948-t9l7f"] Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.744841 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-t9l7f" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.746929 5039 scope.go:117] "RemoveContainer" containerID="dc76f588451d4c44bb67a6ac894b0e8f836caed353d4c0c33eafa14a4dfa1328" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.748743 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.749151 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.749315 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.749808 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.750091 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.750155 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 30 13:09:33 crc kubenswrapper[5039]: E0130 13:09:33.750642 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc76f588451d4c44bb67a6ac894b0e8f836caed353d4c0c33eafa14a4dfa1328\": container with ID starting with dc76f588451d4c44bb67a6ac894b0e8f836caed353d4c0c33eafa14a4dfa1328 not found: ID does not exist" containerID="dc76f588451d4c44bb67a6ac894b0e8f836caed353d4c0c33eafa14a4dfa1328" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.750688 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc76f588451d4c44bb67a6ac894b0e8f836caed353d4c0c33eafa14a4dfa1328"} err="failed to get container status \"dc76f588451d4c44bb67a6ac894b0e8f836caed353d4c0c33eafa14a4dfa1328\": rpc error: code = NotFound desc = could not find container \"dc76f588451d4c44bb67a6ac894b0e8f836caed353d4c0c33eafa14a4dfa1328\": container with ID starting with dc76f588451d4c44bb67a6ac894b0e8f836caed353d4c0c33eafa14a4dfa1328 not found: ID 
does not exist" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.750721 5039 scope.go:117] "RemoveContainer" containerID="b564b8319425726b3799b26323853d2599c914d06f498bf9879ef2cf07e8324a" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.750952 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cb7544948-t9l7f"] Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.769318 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kmjcv"] Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.769397 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kmjcv"] Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.769586 5039 scope.go:117] "RemoveContainer" containerID="b564b8319425726b3799b26323853d2599c914d06f498bf9879ef2cf07e8324a" Jan 30 13:09:33 crc kubenswrapper[5039]: E0130 13:09:33.772812 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b564b8319425726b3799b26323853d2599c914d06f498bf9879ef2cf07e8324a\": container with ID starting with b564b8319425726b3799b26323853d2599c914d06f498bf9879ef2cf07e8324a not found: ID does not exist" containerID="b564b8319425726b3799b26323853d2599c914d06f498bf9879ef2cf07e8324a" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.772869 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b564b8319425726b3799b26323853d2599c914d06f498bf9879ef2cf07e8324a"} err="failed to get container status \"b564b8319425726b3799b26323853d2599c914d06f498bf9879ef2cf07e8324a\": rpc error: code = NotFound desc = could not find container \"b564b8319425726b3799b26323853d2599c914d06f498bf9879ef2cf07e8324a\": container with ID starting with b564b8319425726b3799b26323853d2599c914d06f498bf9879ef2cf07e8324a not found: ID does not exist" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.775481 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/feb4feb2-44c2-4a0e-9f5c-33651b768526-client-ca\") pod \"controller-manager-845c54c956-ns4g2\" (UID: \"feb4feb2-44c2-4a0e-9f5c-33651b768526\") " pod="openshift-controller-manager/controller-manager-845c54c956-ns4g2" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.775529 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/928a1452-16a2-4200-ba20-b6afce87e2a9-config\") pod \"route-controller-manager-6cb7544948-t9l7f\" (UID: \"928a1452-16a2-4200-ba20-b6afce87e2a9\") " pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-t9l7f" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.775578 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/928a1452-16a2-4200-ba20-b6afce87e2a9-client-ca\") pod \"route-controller-manager-6cb7544948-t9l7f\" (UID: \"928a1452-16a2-4200-ba20-b6afce87e2a9\") " pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-t9l7f" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.775610 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/feb4feb2-44c2-4a0e-9f5c-33651b768526-proxy-ca-bundles\") pod \"controller-manager-845c54c956-ns4g2\" (UID: \"feb4feb2-44c2-4a0e-9f5c-33651b768526\") " pod="openshift-controller-manager/controller-manager-845c54c956-ns4g2" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.775635 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/feb4feb2-44c2-4a0e-9f5c-33651b768526-config\") pod \"controller-manager-845c54c956-ns4g2\" (UID: \"feb4feb2-44c2-4a0e-9f5c-33651b768526\") " pod="openshift-controller-manager/controller-manager-845c54c956-ns4g2" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.776088 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt9fb\" (UniqueName: \"kubernetes.io/projected/928a1452-16a2-4200-ba20-b6afce87e2a9-kube-api-access-jt9fb\") pod \"route-controller-manager-6cb7544948-t9l7f\" (UID: \"928a1452-16a2-4200-ba20-b6afce87e2a9\") " pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-t9l7f" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.776128 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/928a1452-16a2-4200-ba20-b6afce87e2a9-serving-cert\") pod \"route-controller-manager-6cb7544948-t9l7f\" (UID: \"928a1452-16a2-4200-ba20-b6afce87e2a9\") " pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-t9l7f" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.776190 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/feb4feb2-44c2-4a0e-9f5c-33651b768526-serving-cert\") pod \"controller-manager-845c54c956-ns4g2\" (UID: \"feb4feb2-44c2-4a0e-9f5c-33651b768526\") " pod="openshift-controller-manager/controller-manager-845c54c956-ns4g2" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.776315 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qg9x7\" (UniqueName: \"kubernetes.io/projected/feb4feb2-44c2-4a0e-9f5c-33651b768526-kube-api-access-qg9x7\") pod \"controller-manager-845c54c956-ns4g2\" (UID: \"feb4feb2-44c2-4a0e-9f5c-33651b768526\") " pod="openshift-controller-manager/controller-manager-845c54c956-ns4g2" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.792489 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cj57h"] Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.796614 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cj57h"] Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.877243 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qg9x7\" (UniqueName: \"kubernetes.io/projected/feb4feb2-44c2-4a0e-9f5c-33651b768526-kube-api-access-qg9x7\") pod \"controller-manager-845c54c956-ns4g2\" (UID: \"feb4feb2-44c2-4a0e-9f5c-33651b768526\") " pod="openshift-controller-manager/controller-manager-845c54c956-ns4g2" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.877316 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/feb4feb2-44c2-4a0e-9f5c-33651b768526-client-ca\") pod \"controller-manager-845c54c956-ns4g2\" (UID: \"feb4feb2-44c2-4a0e-9f5c-33651b768526\") " pod="openshift-controller-manager/controller-manager-845c54c956-ns4g2" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.877343 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/928a1452-16a2-4200-ba20-b6afce87e2a9-config\") pod \"route-controller-manager-6cb7544948-t9l7f\" (UID: \"928a1452-16a2-4200-ba20-b6afce87e2a9\") " pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-t9l7f" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.877367 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/928a1452-16a2-4200-ba20-b6afce87e2a9-client-ca\") pod \"route-controller-manager-6cb7544948-t9l7f\" (UID: \"928a1452-16a2-4200-ba20-b6afce87e2a9\") " pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-t9l7f" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.877393 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/feb4feb2-44c2-4a0e-9f5c-33651b768526-proxy-ca-bundles\") pod \"controller-manager-845c54c956-ns4g2\" (UID: \"feb4feb2-44c2-4a0e-9f5c-33651b768526\") " pod="openshift-controller-manager/controller-manager-845c54c956-ns4g2" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.877420 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/feb4feb2-44c2-4a0e-9f5c-33651b768526-config\") pod \"controller-manager-845c54c956-ns4g2\" (UID: \"feb4feb2-44c2-4a0e-9f5c-33651b768526\") " pod="openshift-controller-manager/controller-manager-845c54c956-ns4g2" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.877469 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jt9fb\" (UniqueName: \"kubernetes.io/projected/928a1452-16a2-4200-ba20-b6afce87e2a9-kube-api-access-jt9fb\") pod \"route-controller-manager-6cb7544948-t9l7f\" (UID: \"928a1452-16a2-4200-ba20-b6afce87e2a9\") " pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-t9l7f" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.877497 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/928a1452-16a2-4200-ba20-b6afce87e2a9-serving-cert\") pod \"route-controller-manager-6cb7544948-t9l7f\" (UID: \"928a1452-16a2-4200-ba20-b6afce87e2a9\") " pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-t9l7f" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.877527 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/feb4feb2-44c2-4a0e-9f5c-33651b768526-serving-cert\") pod \"controller-manager-845c54c956-ns4g2\" (UID: \"feb4feb2-44c2-4a0e-9f5c-33651b768526\") " pod="openshift-controller-manager/controller-manager-845c54c956-ns4g2" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.879244 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/928a1452-16a2-4200-ba20-b6afce87e2a9-client-ca\") pod \"route-controller-manager-6cb7544948-t9l7f\" (UID: 
\"928a1452-16a2-4200-ba20-b6afce87e2a9\") " pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-t9l7f" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.879825 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/feb4feb2-44c2-4a0e-9f5c-33651b768526-proxy-ca-bundles\") pod \"controller-manager-845c54c956-ns4g2\" (UID: \"feb4feb2-44c2-4a0e-9f5c-33651b768526\") " pod="openshift-controller-manager/controller-manager-845c54c956-ns4g2" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.879831 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/feb4feb2-44c2-4a0e-9f5c-33651b768526-config\") pod \"controller-manager-845c54c956-ns4g2\" (UID: \"feb4feb2-44c2-4a0e-9f5c-33651b768526\") " pod="openshift-controller-manager/controller-manager-845c54c956-ns4g2" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.879896 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/928a1452-16a2-4200-ba20-b6afce87e2a9-config\") pod \"route-controller-manager-6cb7544948-t9l7f\" (UID: \"928a1452-16a2-4200-ba20-b6afce87e2a9\") " pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-t9l7f" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.880756 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/feb4feb2-44c2-4a0e-9f5c-33651b768526-client-ca\") pod \"controller-manager-845c54c956-ns4g2\" (UID: \"feb4feb2-44c2-4a0e-9f5c-33651b768526\") " pod="openshift-controller-manager/controller-manager-845c54c956-ns4g2" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.881494 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/feb4feb2-44c2-4a0e-9f5c-33651b768526-serving-cert\") pod \"controller-manager-845c54c956-ns4g2\" (UID: \"feb4feb2-44c2-4a0e-9f5c-33651b768526\") " pod="openshift-controller-manager/controller-manager-845c54c956-ns4g2" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.881580 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/928a1452-16a2-4200-ba20-b6afce87e2a9-serving-cert\") pod \"route-controller-manager-6cb7544948-t9l7f\" (UID: \"928a1452-16a2-4200-ba20-b6afce87e2a9\") " pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-t9l7f" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.896773 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jt9fb\" (UniqueName: \"kubernetes.io/projected/928a1452-16a2-4200-ba20-b6afce87e2a9-kube-api-access-jt9fb\") pod \"route-controller-manager-6cb7544948-t9l7f\" (UID: \"928a1452-16a2-4200-ba20-b6afce87e2a9\") " pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-t9l7f" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.897240 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qg9x7\" (UniqueName: \"kubernetes.io/projected/feb4feb2-44c2-4a0e-9f5c-33651b768526-kube-api-access-qg9x7\") pod \"controller-manager-845c54c956-ns4g2\" (UID: \"feb4feb2-44c2-4a0e-9f5c-33651b768526\") " pod="openshift-controller-manager/controller-manager-845c54c956-ns4g2" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.986122 5039 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-845c54c956-ns4g2" Jan 30 13:09:34 crc kubenswrapper[5039]: I0130 13:09:34.077798 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-t9l7f" Jan 30 13:09:34 crc kubenswrapper[5039]: I0130 13:09:34.104945 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2834d334-6df4-46d7-afc6-390cfdcfb22f" path="/var/lib/kubelet/pods/2834d334-6df4-46d7-afc6-390cfdcfb22f/volumes" Jan 30 13:09:34 crc kubenswrapper[5039]: I0130 13:09:34.106107 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd5d4606-2412-4538-8745-dbab7d52cde9" path="/var/lib/kubelet/pods/bd5d4606-2412-4538-8745-dbab7d52cde9/volumes" Jan 30 13:09:34 crc kubenswrapper[5039]: I0130 13:09:34.243321 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-845c54c956-ns4g2"] Jan 30 13:09:34 crc kubenswrapper[5039]: I0130 13:09:34.327303 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cb7544948-t9l7f"] Jan 30 13:09:34 crc kubenswrapper[5039]: I0130 13:09:34.746919 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-t9l7f" event={"ID":"928a1452-16a2-4200-ba20-b6afce87e2a9","Type":"ContainerStarted","Data":"aeef844dc130e0ebabfe8ecf4f957d75fd93a1de3687d817ad3e8d6fdc589d9b"} Jan 30 13:09:34 crc kubenswrapper[5039]: I0130 13:09:34.746973 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-t9l7f" event={"ID":"928a1452-16a2-4200-ba20-b6afce87e2a9","Type":"ContainerStarted","Data":"37ea52937516bf02df5e8685f08fd90099b41853614ae4429422f4578350c55b"} Jan 30 13:09:34 crc kubenswrapper[5039]: I0130 13:09:34.747530 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-t9l7f" Jan 30 13:09:34 crc kubenswrapper[5039]: I0130 13:09:34.749063 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-845c54c956-ns4g2" event={"ID":"feb4feb2-44c2-4a0e-9f5c-33651b768526","Type":"ContainerStarted","Data":"fb5e0f8f6442a9a34cb50b54e988c2b642f0c0873e61fa9e26766b3ede71d046"} Jan 30 13:09:34 crc kubenswrapper[5039]: I0130 13:09:34.749107 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-845c54c956-ns4g2" event={"ID":"feb4feb2-44c2-4a0e-9f5c-33651b768526","Type":"ContainerStarted","Data":"6730721f46f7e97542699f8309a8a97b21e0b2488a6a9d0d0aa280f244db1ee7"} Jan 30 13:09:34 crc kubenswrapper[5039]: I0130 13:09:34.749326 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-845c54c956-ns4g2" Jan 30 13:09:34 crc kubenswrapper[5039]: I0130 13:09:34.754170 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-845c54c956-ns4g2" Jan 30 13:09:34 crc kubenswrapper[5039]: I0130 13:09:34.765899 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-t9l7f" podStartSLOduration=1.7658819669999999 podStartE2EDuration="1.765881967s" 
podCreationTimestamp="2026-01-30 13:09:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:09:34.764359561 +0000 UTC m=+339.425040818" watchObservedRunningTime="2026-01-30 13:09:34.765881967 +0000 UTC m=+339.426563194" Jan 30 13:09:34 crc kubenswrapper[5039]: I0130 13:09:34.781393 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-845c54c956-ns4g2" podStartSLOduration=1.781368342 podStartE2EDuration="1.781368342s" podCreationTimestamp="2026-01-30 13:09:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:09:34.779407423 +0000 UTC m=+339.440088690" watchObservedRunningTime="2026-01-30 13:09:34.781368342 +0000 UTC m=+339.442049579" Jan 30 13:09:34 crc kubenswrapper[5039]: I0130 13:09:34.978209 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 30 13:09:35 crc kubenswrapper[5039]: I0130 13:09:35.382085 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-t9l7f" Jan 30 13:09:36 crc kubenswrapper[5039]: I0130 13:09:36.414417 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 30 13:09:36 crc kubenswrapper[5039]: I0130 13:09:36.632574 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 30 13:09:40 crc kubenswrapper[5039]: I0130 13:09:40.409582 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cb7544948-t9l7f"] Jan 30 13:09:40 crc kubenswrapper[5039]: I0130 13:09:40.410327 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-t9l7f" podUID="928a1452-16a2-4200-ba20-b6afce87e2a9" containerName="route-controller-manager" containerID="cri-o://aeef844dc130e0ebabfe8ecf4f957d75fd93a1de3687d817ad3e8d6fdc589d9b" gracePeriod=30 Jan 30 13:09:40 crc kubenswrapper[5039]: I0130 13:09:40.906242 5039 generic.go:334] "Generic (PLEG): container finished" podID="928a1452-16a2-4200-ba20-b6afce87e2a9" containerID="aeef844dc130e0ebabfe8ecf4f957d75fd93a1de3687d817ad3e8d6fdc589d9b" exitCode=0 Jan 30 13:09:40 crc kubenswrapper[5039]: I0130 13:09:40.906365 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-t9l7f" event={"ID":"928a1452-16a2-4200-ba20-b6afce87e2a9","Type":"ContainerDied","Data":"aeef844dc130e0ebabfe8ecf4f957d75fd93a1de3687d817ad3e8d6fdc589d9b"} Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.344090 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-t9l7f" Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.437943 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.473892 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/928a1452-16a2-4200-ba20-b6afce87e2a9-client-ca\") pod \"928a1452-16a2-4200-ba20-b6afce87e2a9\" (UID: \"928a1452-16a2-4200-ba20-b6afce87e2a9\") " Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.474055 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/928a1452-16a2-4200-ba20-b6afce87e2a9-serving-cert\") pod \"928a1452-16a2-4200-ba20-b6afce87e2a9\" (UID: \"928a1452-16a2-4200-ba20-b6afce87e2a9\") " Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.474091 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jt9fb\" (UniqueName: \"kubernetes.io/projected/928a1452-16a2-4200-ba20-b6afce87e2a9-kube-api-access-jt9fb\") pod \"928a1452-16a2-4200-ba20-b6afce87e2a9\" (UID: \"928a1452-16a2-4200-ba20-b6afce87e2a9\") " Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.474149 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/928a1452-16a2-4200-ba20-b6afce87e2a9-config\") pod \"928a1452-16a2-4200-ba20-b6afce87e2a9\" (UID: \"928a1452-16a2-4200-ba20-b6afce87e2a9\") " Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.475181 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/928a1452-16a2-4200-ba20-b6afce87e2a9-config" (OuterVolumeSpecName: "config") pod "928a1452-16a2-4200-ba20-b6afce87e2a9" (UID: "928a1452-16a2-4200-ba20-b6afce87e2a9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.475920 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/928a1452-16a2-4200-ba20-b6afce87e2a9-client-ca" (OuterVolumeSpecName: "client-ca") pod "928a1452-16a2-4200-ba20-b6afce87e2a9" (UID: "928a1452-16a2-4200-ba20-b6afce87e2a9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.480118 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/928a1452-16a2-4200-ba20-b6afce87e2a9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "928a1452-16a2-4200-ba20-b6afce87e2a9" (UID: "928a1452-16a2-4200-ba20-b6afce87e2a9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.480356 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/928a1452-16a2-4200-ba20-b6afce87e2a9-kube-api-access-jt9fb" (OuterVolumeSpecName: "kube-api-access-jt9fb") pod "928a1452-16a2-4200-ba20-b6afce87e2a9" (UID: "928a1452-16a2-4200-ba20-b6afce87e2a9"). InnerVolumeSpecName "kube-api-access-jt9fb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.575986 5039 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/928a1452-16a2-4200-ba20-b6afce87e2a9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.576102 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jt9fb\" (UniqueName: \"kubernetes.io/projected/928a1452-16a2-4200-ba20-b6afce87e2a9-kube-api-access-jt9fb\") on node \"crc\" DevicePath \"\"" Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.576126 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/928a1452-16a2-4200-ba20-b6afce87e2a9-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.576137 5039 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/928a1452-16a2-4200-ba20-b6afce87e2a9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.623715 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c7d557f8d-z65gc"] Jan 30 13:09:41 crc kubenswrapper[5039]: E0130 13:09:41.623970 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="928a1452-16a2-4200-ba20-b6afce87e2a9" containerName="route-controller-manager" Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.623989 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="928a1452-16a2-4200-ba20-b6afce87e2a9" containerName="route-controller-manager" Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.624122 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="928a1452-16a2-4200-ba20-b6afce87e2a9" containerName="route-controller-manager" Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.624586 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-z65gc" Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.641560 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c7d557f8d-z65gc"] Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.780477 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84e29f6f-4480-4802-864e-9462d538a106-config\") pod \"route-controller-manager-7c7d557f8d-z65gc\" (UID: \"84e29f6f-4480-4802-864e-9462d538a106\") " pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-z65gc" Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.780546 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84e29f6f-4480-4802-864e-9462d538a106-serving-cert\") pod \"route-controller-manager-7c7d557f8d-z65gc\" (UID: \"84e29f6f-4480-4802-864e-9462d538a106\") " pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-z65gc" Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.780577 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84e29f6f-4480-4802-864e-9462d538a106-client-ca\") pod \"route-controller-manager-7c7d557f8d-z65gc\" (UID: \"84e29f6f-4480-4802-864e-9462d538a106\") " pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-z65gc" Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.780606 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsbsz\" (UniqueName: \"kubernetes.io/projected/84e29f6f-4480-4802-864e-9462d538a106-kube-api-access-vsbsz\") pod \"route-controller-manager-7c7d557f8d-z65gc\" (UID: \"84e29f6f-4480-4802-864e-9462d538a106\") " pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-z65gc" Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.881642 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84e29f6f-4480-4802-864e-9462d538a106-serving-cert\") pod \"route-controller-manager-7c7d557f8d-z65gc\" (UID: \"84e29f6f-4480-4802-864e-9462d538a106\") " pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-z65gc" Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.881719 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84e29f6f-4480-4802-864e-9462d538a106-client-ca\") pod \"route-controller-manager-7c7d557f8d-z65gc\" (UID: \"84e29f6f-4480-4802-864e-9462d538a106\") " pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-z65gc" Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.881765 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsbsz\" (UniqueName: \"kubernetes.io/projected/84e29f6f-4480-4802-864e-9462d538a106-kube-api-access-vsbsz\") pod \"route-controller-manager-7c7d557f8d-z65gc\" (UID: \"84e29f6f-4480-4802-864e-9462d538a106\") " pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-z65gc" Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.881840 5039 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84e29f6f-4480-4802-864e-9462d538a106-config\") pod \"route-controller-manager-7c7d557f8d-z65gc\" (UID: \"84e29f6f-4480-4802-864e-9462d538a106\") " pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-z65gc" Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.883060 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84e29f6f-4480-4802-864e-9462d538a106-client-ca\") pod \"route-controller-manager-7c7d557f8d-z65gc\" (UID: \"84e29f6f-4480-4802-864e-9462d538a106\") " pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-z65gc" Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.883143 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84e29f6f-4480-4802-864e-9462d538a106-config\") pod \"route-controller-manager-7c7d557f8d-z65gc\" (UID: \"84e29f6f-4480-4802-864e-9462d538a106\") " pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-z65gc" Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.884907 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84e29f6f-4480-4802-864e-9462d538a106-serving-cert\") pod \"route-controller-manager-7c7d557f8d-z65gc\" (UID: \"84e29f6f-4480-4802-864e-9462d538a106\") " pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-z65gc" Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.903039 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsbsz\" (UniqueName: \"kubernetes.io/projected/84e29f6f-4480-4802-864e-9462d538a106-kube-api-access-vsbsz\") pod \"route-controller-manager-7c7d557f8d-z65gc\" (UID: \"84e29f6f-4480-4802-864e-9462d538a106\") " pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-z65gc" Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.915679 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-t9l7f" event={"ID":"928a1452-16a2-4200-ba20-b6afce87e2a9","Type":"ContainerDied","Data":"37ea52937516bf02df5e8685f08fd90099b41853614ae4429422f4578350c55b"} Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.915739 5039 scope.go:117] "RemoveContainer" containerID="aeef844dc130e0ebabfe8ecf4f957d75fd93a1de3687d817ad3e8d6fdc589d9b" Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.915821 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-t9l7f" Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.954973 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-z65gc" Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.961693 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cb7544948-t9l7f"] Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.966356 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cb7544948-t9l7f"] Jan 30 13:09:42 crc kubenswrapper[5039]: I0130 13:09:42.105437 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="928a1452-16a2-4200-ba20-b6afce87e2a9" path="/var/lib/kubelet/pods/928a1452-16a2-4200-ba20-b6afce87e2a9/volumes" Jan 30 13:09:42 crc kubenswrapper[5039]: I0130 13:09:42.392927 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c7d557f8d-z65gc"] Jan 30 13:09:42 crc kubenswrapper[5039]: W0130 13:09:42.398198 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84e29f6f_4480_4802_864e_9462d538a106.slice/crio-1859b64e167dc46de78ae91d7c3ad0c1b491abe50c71cb5c1265851fe85c3023 WatchSource:0}: Error finding container 1859b64e167dc46de78ae91d7c3ad0c1b491abe50c71cb5c1265851fe85c3023: Status 404 returned error can't find the container with id 1859b64e167dc46de78ae91d7c3ad0c1b491abe50c71cb5c1265851fe85c3023 Jan 30 13:09:42 crc kubenswrapper[5039]: I0130 13:09:42.410903 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 30 13:09:42 crc kubenswrapper[5039]: I0130 13:09:42.922185 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-z65gc" event={"ID":"84e29f6f-4480-4802-864e-9462d538a106","Type":"ContainerStarted","Data":"8eac023537c8aecb613f7aba2e27f1849898aacb6bfc2bad54d34d2ca72a91ea"} Jan 30 13:09:42 crc kubenswrapper[5039]: I0130 13:09:42.922679 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-z65gc" event={"ID":"84e29f6f-4480-4802-864e-9462d538a106","Type":"ContainerStarted","Data":"1859b64e167dc46de78ae91d7c3ad0c1b491abe50c71cb5c1265851fe85c3023"} Jan 30 13:09:42 crc kubenswrapper[5039]: I0130 13:09:42.922713 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-z65gc" Jan 30 13:09:43 crc kubenswrapper[5039]: I0130 13:09:43.296463 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-z65gc" Jan 30 13:09:43 crc kubenswrapper[5039]: I0130 13:09:43.318769 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-z65gc" podStartSLOduration=3.318710488 podStartE2EDuration="3.318710488s" podCreationTimestamp="2026-01-30 13:09:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:09:42.943856647 +0000 UTC m=+347.604537874" watchObservedRunningTime="2026-01-30 13:09:43.318710488 +0000 UTC m=+347.979391725" Jan 30 13:09:58 crc kubenswrapper[5039]: I0130 13:09:58.426339 5039 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fmcqb"] Jan 30 13:10:07 crc kubenswrapper[5039]: I0130 13:10:07.742958 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:10:07 crc kubenswrapper[5039]: I0130 13:10:07.743714 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:10:12 crc kubenswrapper[5039]: I0130 13:10:12.616222 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c7d557f8d-z65gc"] Jan 30 13:10:12 crc kubenswrapper[5039]: I0130 13:10:12.616975 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-z65gc" podUID="84e29f6f-4480-4802-864e-9462d538a106" containerName="route-controller-manager" containerID="cri-o://8eac023537c8aecb613f7aba2e27f1849898aacb6bfc2bad54d34d2ca72a91ea" gracePeriod=30 Jan 30 13:10:12 crc kubenswrapper[5039]: I0130 13:10:12.990423 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-z65gc" Jan 30 13:10:13 crc kubenswrapper[5039]: I0130 13:10:13.152860 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84e29f6f-4480-4802-864e-9462d538a106-serving-cert\") pod \"84e29f6f-4480-4802-864e-9462d538a106\" (UID: \"84e29f6f-4480-4802-864e-9462d538a106\") " Jan 30 13:10:13 crc kubenswrapper[5039]: I0130 13:10:13.152963 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsbsz\" (UniqueName: \"kubernetes.io/projected/84e29f6f-4480-4802-864e-9462d538a106-kube-api-access-vsbsz\") pod \"84e29f6f-4480-4802-864e-9462d538a106\" (UID: \"84e29f6f-4480-4802-864e-9462d538a106\") " Jan 30 13:10:13 crc kubenswrapper[5039]: I0130 13:10:13.152986 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84e29f6f-4480-4802-864e-9462d538a106-client-ca\") pod \"84e29f6f-4480-4802-864e-9462d538a106\" (UID: \"84e29f6f-4480-4802-864e-9462d538a106\") " Jan 30 13:10:13 crc kubenswrapper[5039]: I0130 13:10:13.153036 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84e29f6f-4480-4802-864e-9462d538a106-config\") pod \"84e29f6f-4480-4802-864e-9462d538a106\" (UID: \"84e29f6f-4480-4802-864e-9462d538a106\") " Jan 30 13:10:13 crc kubenswrapper[5039]: I0130 13:10:13.153669 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84e29f6f-4480-4802-864e-9462d538a106-client-ca" (OuterVolumeSpecName: "client-ca") pod "84e29f6f-4480-4802-864e-9462d538a106" (UID: "84e29f6f-4480-4802-864e-9462d538a106"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:10:13 crc kubenswrapper[5039]: I0130 13:10:13.153692 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84e29f6f-4480-4802-864e-9462d538a106-config" (OuterVolumeSpecName: "config") pod "84e29f6f-4480-4802-864e-9462d538a106" (UID: "84e29f6f-4480-4802-864e-9462d538a106"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:10:13 crc kubenswrapper[5039]: I0130 13:10:13.158179 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84e29f6f-4480-4802-864e-9462d538a106-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "84e29f6f-4480-4802-864e-9462d538a106" (UID: "84e29f6f-4480-4802-864e-9462d538a106"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:10:13 crc kubenswrapper[5039]: I0130 13:10:13.159430 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84e29f6f-4480-4802-864e-9462d538a106-kube-api-access-vsbsz" (OuterVolumeSpecName: "kube-api-access-vsbsz") pod "84e29f6f-4480-4802-864e-9462d538a106" (UID: "84e29f6f-4480-4802-864e-9462d538a106"). InnerVolumeSpecName "kube-api-access-vsbsz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:10:13 crc kubenswrapper[5039]: I0130 13:10:13.218100 5039 generic.go:334] "Generic (PLEG): container finished" podID="84e29f6f-4480-4802-864e-9462d538a106" containerID="8eac023537c8aecb613f7aba2e27f1849898aacb6bfc2bad54d34d2ca72a91ea" exitCode=0 Jan 30 13:10:13 crc kubenswrapper[5039]: I0130 13:10:13.218146 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-z65gc" event={"ID":"84e29f6f-4480-4802-864e-9462d538a106","Type":"ContainerDied","Data":"8eac023537c8aecb613f7aba2e27f1849898aacb6bfc2bad54d34d2ca72a91ea"} Jan 30 13:10:13 crc kubenswrapper[5039]: I0130 13:10:13.218169 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-z65gc" event={"ID":"84e29f6f-4480-4802-864e-9462d538a106","Type":"ContainerDied","Data":"1859b64e167dc46de78ae91d7c3ad0c1b491abe50c71cb5c1265851fe85c3023"} Jan 30 13:10:13 crc kubenswrapper[5039]: I0130 13:10:13.218190 5039 scope.go:117] "RemoveContainer" containerID="8eac023537c8aecb613f7aba2e27f1849898aacb6bfc2bad54d34d2ca72a91ea" Jan 30 13:10:13 crc kubenswrapper[5039]: I0130 13:10:13.218235 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-z65gc" Jan 30 13:10:13 crc kubenswrapper[5039]: I0130 13:10:13.231160 5039 scope.go:117] "RemoveContainer" containerID="8eac023537c8aecb613f7aba2e27f1849898aacb6bfc2bad54d34d2ca72a91ea" Jan 30 13:10:13 crc kubenswrapper[5039]: E0130 13:10:13.231888 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8eac023537c8aecb613f7aba2e27f1849898aacb6bfc2bad54d34d2ca72a91ea\": container with ID starting with 8eac023537c8aecb613f7aba2e27f1849898aacb6bfc2bad54d34d2ca72a91ea not found: ID does not exist" containerID="8eac023537c8aecb613f7aba2e27f1849898aacb6bfc2bad54d34d2ca72a91ea" Jan 30 13:10:13 crc kubenswrapper[5039]: I0130 13:10:13.231941 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8eac023537c8aecb613f7aba2e27f1849898aacb6bfc2bad54d34d2ca72a91ea"} err="failed to get container status \"8eac023537c8aecb613f7aba2e27f1849898aacb6bfc2bad54d34d2ca72a91ea\": rpc error: code = NotFound desc = could not find container \"8eac023537c8aecb613f7aba2e27f1849898aacb6bfc2bad54d34d2ca72a91ea\": container with ID starting with 8eac023537c8aecb613f7aba2e27f1849898aacb6bfc2bad54d34d2ca72a91ea not found: ID does not exist" Jan 30 13:10:13 crc kubenswrapper[5039]: I0130 13:10:13.254592 5039 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84e29f6f-4480-4802-864e-9462d538a106-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:13 crc kubenswrapper[5039]: I0130 13:10:13.254670 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84e29f6f-4480-4802-864e-9462d538a106-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:13 crc kubenswrapper[5039]: I0130 13:10:13.254683 5039 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84e29f6f-4480-4802-864e-9462d538a106-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:13 crc kubenswrapper[5039]: I0130 13:10:13.254699 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsbsz\" (UniqueName: \"kubernetes.io/projected/84e29f6f-4480-4802-864e-9462d538a106-kube-api-access-vsbsz\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:13 crc kubenswrapper[5039]: I0130 13:10:13.255665 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c7d557f8d-z65gc"] Jan 30 13:10:13 crc kubenswrapper[5039]: I0130 13:10:13.258665 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c7d557f8d-z65gc"] Jan 30 13:10:14 crc kubenswrapper[5039]: I0130 13:10:14.108085 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84e29f6f-4480-4802-864e-9462d538a106" path="/var/lib/kubelet/pods/84e29f6f-4480-4802-864e-9462d538a106/volumes" Jan 30 13:10:14 crc kubenswrapper[5039]: I0130 13:10:14.646156 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cb7544948-b4gsb"] Jan 30 13:10:14 crc kubenswrapper[5039]: E0130 13:10:14.646385 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84e29f6f-4480-4802-864e-9462d538a106" containerName="route-controller-manager" Jan 30 13:10:14 crc kubenswrapper[5039]: I0130 13:10:14.646397 5039 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="84e29f6f-4480-4802-864e-9462d538a106" containerName="route-controller-manager" Jan 30 13:10:14 crc kubenswrapper[5039]: I0130 13:10:14.646486 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="84e29f6f-4480-4802-864e-9462d538a106" containerName="route-controller-manager" Jan 30 13:10:14 crc kubenswrapper[5039]: I0130 13:10:14.646828 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-b4gsb" Jan 30 13:10:14 crc kubenswrapper[5039]: I0130 13:10:14.649209 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 30 13:10:14 crc kubenswrapper[5039]: I0130 13:10:14.650355 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 30 13:10:14 crc kubenswrapper[5039]: I0130 13:10:14.650541 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 13:10:14 crc kubenswrapper[5039]: I0130 13:10:14.650708 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 30 13:10:14 crc kubenswrapper[5039]: I0130 13:10:14.650744 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 30 13:10:14 crc kubenswrapper[5039]: I0130 13:10:14.651050 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 30 13:10:14 crc kubenswrapper[5039]: I0130 13:10:14.664316 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cb7544948-b4gsb"] Jan 30 13:10:14 crc kubenswrapper[5039]: I0130 13:10:14.783625 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9152137-064d-446b-9398-e5c615d9132b-serving-cert\") pod \"route-controller-manager-6cb7544948-b4gsb\" (UID: \"c9152137-064d-446b-9398-e5c615d9132b\") " pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-b4gsb" Jan 30 13:10:14 crc kubenswrapper[5039]: I0130 13:10:14.783672 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxbvs\" (UniqueName: \"kubernetes.io/projected/c9152137-064d-446b-9398-e5c615d9132b-kube-api-access-hxbvs\") pod \"route-controller-manager-6cb7544948-b4gsb\" (UID: \"c9152137-064d-446b-9398-e5c615d9132b\") " pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-b4gsb" Jan 30 13:10:14 crc kubenswrapper[5039]: I0130 13:10:14.783736 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9152137-064d-446b-9398-e5c615d9132b-config\") pod \"route-controller-manager-6cb7544948-b4gsb\" (UID: \"c9152137-064d-446b-9398-e5c615d9132b\") " pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-b4gsb" Jan 30 13:10:14 crc kubenswrapper[5039]: I0130 13:10:14.783754 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c9152137-064d-446b-9398-e5c615d9132b-client-ca\") pod 
\"route-controller-manager-6cb7544948-b4gsb\" (UID: \"c9152137-064d-446b-9398-e5c615d9132b\") " pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-b4gsb" Jan 30 13:10:14 crc kubenswrapper[5039]: I0130 13:10:14.884748 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9152137-064d-446b-9398-e5c615d9132b-serving-cert\") pod \"route-controller-manager-6cb7544948-b4gsb\" (UID: \"c9152137-064d-446b-9398-e5c615d9132b\") " pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-b4gsb" Jan 30 13:10:14 crc kubenswrapper[5039]: I0130 13:10:14.885101 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxbvs\" (UniqueName: \"kubernetes.io/projected/c9152137-064d-446b-9398-e5c615d9132b-kube-api-access-hxbvs\") pod \"route-controller-manager-6cb7544948-b4gsb\" (UID: \"c9152137-064d-446b-9398-e5c615d9132b\") " pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-b4gsb" Jan 30 13:10:14 crc kubenswrapper[5039]: I0130 13:10:14.885273 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9152137-064d-446b-9398-e5c615d9132b-config\") pod \"route-controller-manager-6cb7544948-b4gsb\" (UID: \"c9152137-064d-446b-9398-e5c615d9132b\") " pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-b4gsb" Jan 30 13:10:14 crc kubenswrapper[5039]: I0130 13:10:14.885431 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c9152137-064d-446b-9398-e5c615d9132b-client-ca\") pod \"route-controller-manager-6cb7544948-b4gsb\" (UID: \"c9152137-064d-446b-9398-e5c615d9132b\") " pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-b4gsb" Jan 30 13:10:14 crc kubenswrapper[5039]: I0130 13:10:14.886406 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c9152137-064d-446b-9398-e5c615d9132b-client-ca\") pod \"route-controller-manager-6cb7544948-b4gsb\" (UID: \"c9152137-064d-446b-9398-e5c615d9132b\") " pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-b4gsb" Jan 30 13:10:14 crc kubenswrapper[5039]: I0130 13:10:14.886663 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9152137-064d-446b-9398-e5c615d9132b-config\") pod \"route-controller-manager-6cb7544948-b4gsb\" (UID: \"c9152137-064d-446b-9398-e5c615d9132b\") " pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-b4gsb" Jan 30 13:10:14 crc kubenswrapper[5039]: I0130 13:10:14.890509 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9152137-064d-446b-9398-e5c615d9132b-serving-cert\") pod \"route-controller-manager-6cb7544948-b4gsb\" (UID: \"c9152137-064d-446b-9398-e5c615d9132b\") " pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-b4gsb" Jan 30 13:10:14 crc kubenswrapper[5039]: I0130 13:10:14.911869 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxbvs\" (UniqueName: \"kubernetes.io/projected/c9152137-064d-446b-9398-e5c615d9132b-kube-api-access-hxbvs\") pod \"route-controller-manager-6cb7544948-b4gsb\" (UID: \"c9152137-064d-446b-9398-e5c615d9132b\") " 
pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-b4gsb" Jan 30 13:10:14 crc kubenswrapper[5039]: I0130 13:10:14.962569 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-b4gsb" Jan 30 13:10:15 crc kubenswrapper[5039]: I0130 13:10:15.367920 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cb7544948-b4gsb"] Jan 30 13:10:15 crc kubenswrapper[5039]: W0130 13:10:15.374288 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9152137_064d_446b_9398_e5c615d9132b.slice/crio-d7482be3f9b1fa259acb53601aeab42f01faf2754ac95cee52e6b6e002147b77 WatchSource:0}: Error finding container d7482be3f9b1fa259acb53601aeab42f01faf2754ac95cee52e6b6e002147b77: Status 404 returned error can't find the container with id d7482be3f9b1fa259acb53601aeab42f01faf2754ac95cee52e6b6e002147b77 Jan 30 13:10:16 crc kubenswrapper[5039]: I0130 13:10:16.238795 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-b4gsb" event={"ID":"c9152137-064d-446b-9398-e5c615d9132b","Type":"ContainerStarted","Data":"ba2ba85d0e147f57585c92a11659871e33fe5721cae32ba336cdf5c24939aeb0"} Jan 30 13:10:16 crc kubenswrapper[5039]: I0130 13:10:16.239189 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-b4gsb" event={"ID":"c9152137-064d-446b-9398-e5c615d9132b","Type":"ContainerStarted","Data":"d7482be3f9b1fa259acb53601aeab42f01faf2754ac95cee52e6b6e002147b77"} Jan 30 13:10:16 crc kubenswrapper[5039]: I0130 13:10:16.241694 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-b4gsb" Jan 30 13:10:16 crc kubenswrapper[5039]: I0130 13:10:16.259532 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-b4gsb" podStartSLOduration=4.259510464 podStartE2EDuration="4.259510464s" podCreationTimestamp="2026-01-30 13:10:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:10:16.258431651 +0000 UTC m=+380.919112908" watchObservedRunningTime="2026-01-30 13:10:16.259510464 +0000 UTC m=+380.920191711" Jan 30 13:10:16 crc kubenswrapper[5039]: I0130 13:10:16.427929 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-b4gsb" Jan 30 13:10:22 crc kubenswrapper[5039]: I0130 13:10:22.256352 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-prfhj"] Jan 30 13:10:22 crc kubenswrapper[5039]: I0130 13:10:22.257180 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-prfhj" podUID="52b110b9-c1bb-4f99-b0a1-56327188c912" containerName="registry-server" containerID="cri-o://e09e285ff2247de470bb21872e9f9dacc7f06a97919238817387eaf3927a6ea9" gracePeriod=2 Jan 30 13:10:22 crc kubenswrapper[5039]: I0130 13:10:22.636255 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-prfhj" Jan 30 13:10:22 crc kubenswrapper[5039]: I0130 13:10:22.683132 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8txw\" (UniqueName: \"kubernetes.io/projected/52b110b9-c1bb-4f99-b0a1-56327188c912-kube-api-access-r8txw\") pod \"52b110b9-c1bb-4f99-b0a1-56327188c912\" (UID: \"52b110b9-c1bb-4f99-b0a1-56327188c912\") " Jan 30 13:10:22 crc kubenswrapper[5039]: I0130 13:10:22.683189 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52b110b9-c1bb-4f99-b0a1-56327188c912-utilities\") pod \"52b110b9-c1bb-4f99-b0a1-56327188c912\" (UID: \"52b110b9-c1bb-4f99-b0a1-56327188c912\") " Jan 30 13:10:22 crc kubenswrapper[5039]: I0130 13:10:22.684156 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52b110b9-c1bb-4f99-b0a1-56327188c912-utilities" (OuterVolumeSpecName: "utilities") pod "52b110b9-c1bb-4f99-b0a1-56327188c912" (UID: "52b110b9-c1bb-4f99-b0a1-56327188c912"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:10:22 crc kubenswrapper[5039]: I0130 13:10:22.690214 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52b110b9-c1bb-4f99-b0a1-56327188c912-kube-api-access-r8txw" (OuterVolumeSpecName: "kube-api-access-r8txw") pod "52b110b9-c1bb-4f99-b0a1-56327188c912" (UID: "52b110b9-c1bb-4f99-b0a1-56327188c912"). InnerVolumeSpecName "kube-api-access-r8txw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:10:22 crc kubenswrapper[5039]: I0130 13:10:22.784497 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52b110b9-c1bb-4f99-b0a1-56327188c912-catalog-content\") pod \"52b110b9-c1bb-4f99-b0a1-56327188c912\" (UID: \"52b110b9-c1bb-4f99-b0a1-56327188c912\") " Jan 30 13:10:22 crc kubenswrapper[5039]: I0130 13:10:22.784692 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52b110b9-c1bb-4f99-b0a1-56327188c912-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:22 crc kubenswrapper[5039]: I0130 13:10:22.784705 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8txw\" (UniqueName: \"kubernetes.io/projected/52b110b9-c1bb-4f99-b0a1-56327188c912-kube-api-access-r8txw\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:22 crc kubenswrapper[5039]: I0130 13:10:22.832106 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52b110b9-c1bb-4f99-b0a1-56327188c912-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "52b110b9-c1bb-4f99-b0a1-56327188c912" (UID: "52b110b9-c1bb-4f99-b0a1-56327188c912"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:10:22 crc kubenswrapper[5039]: I0130 13:10:22.886250 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52b110b9-c1bb-4f99-b0a1-56327188c912-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:23 crc kubenswrapper[5039]: I0130 13:10:23.291382 5039 generic.go:334] "Generic (PLEG): container finished" podID="52b110b9-c1bb-4f99-b0a1-56327188c912" containerID="e09e285ff2247de470bb21872e9f9dacc7f06a97919238817387eaf3927a6ea9" exitCode=0 Jan 30 13:10:23 crc kubenswrapper[5039]: I0130 13:10:23.291471 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prfhj" event={"ID":"52b110b9-c1bb-4f99-b0a1-56327188c912","Type":"ContainerDied","Data":"e09e285ff2247de470bb21872e9f9dacc7f06a97919238817387eaf3927a6ea9"} Jan 30 13:10:23 crc kubenswrapper[5039]: I0130 13:10:23.291518 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prfhj" event={"ID":"52b110b9-c1bb-4f99-b0a1-56327188c912","Type":"ContainerDied","Data":"a99dc0fa20017d582143029df54b4ce3a2a13e3646da5203bf1ec4b40fd21d8f"} Jan 30 13:10:23 crc kubenswrapper[5039]: I0130 13:10:23.291548 5039 scope.go:117] "RemoveContainer" containerID="e09e285ff2247de470bb21872e9f9dacc7f06a97919238817387eaf3927a6ea9" Jan 30 13:10:23 crc kubenswrapper[5039]: I0130 13:10:23.292006 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-prfhj" Jan 30 13:10:23 crc kubenswrapper[5039]: I0130 13:10:23.309861 5039 scope.go:117] "RemoveContainer" containerID="9c679759e568016eac462a37564b74cd51d8a0793d513fe3afe6d93accae5ae5" Jan 30 13:10:23 crc kubenswrapper[5039]: I0130 13:10:23.326936 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-prfhj"] Jan 30 13:10:23 crc kubenswrapper[5039]: I0130 13:10:23.327391 5039 scope.go:117] "RemoveContainer" containerID="6deb1868933725c903e241c094f22977dd24c36c2ae7469289e056277a404396" Jan 30 13:10:23 crc kubenswrapper[5039]: I0130 13:10:23.331739 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-prfhj"] Jan 30 13:10:23 crc kubenswrapper[5039]: I0130 13:10:23.344621 5039 scope.go:117] "RemoveContainer" containerID="e09e285ff2247de470bb21872e9f9dacc7f06a97919238817387eaf3927a6ea9" Jan 30 13:10:23 crc kubenswrapper[5039]: E0130 13:10:23.345203 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e09e285ff2247de470bb21872e9f9dacc7f06a97919238817387eaf3927a6ea9\": container with ID starting with e09e285ff2247de470bb21872e9f9dacc7f06a97919238817387eaf3927a6ea9 not found: ID does not exist" containerID="e09e285ff2247de470bb21872e9f9dacc7f06a97919238817387eaf3927a6ea9" Jan 30 13:10:23 crc kubenswrapper[5039]: I0130 13:10:23.345238 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e09e285ff2247de470bb21872e9f9dacc7f06a97919238817387eaf3927a6ea9"} err="failed to get container status \"e09e285ff2247de470bb21872e9f9dacc7f06a97919238817387eaf3927a6ea9\": rpc error: code = NotFound desc = could not find container \"e09e285ff2247de470bb21872e9f9dacc7f06a97919238817387eaf3927a6ea9\": container with ID starting with e09e285ff2247de470bb21872e9f9dacc7f06a97919238817387eaf3927a6ea9 not found: ID does not exist" Jan 30 
13:10:23 crc kubenswrapper[5039]: I0130 13:10:23.345262 5039 scope.go:117] "RemoveContainer" containerID="9c679759e568016eac462a37564b74cd51d8a0793d513fe3afe6d93accae5ae5"
Jan 30 13:10:23 crc kubenswrapper[5039]: E0130 13:10:23.345682 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c679759e568016eac462a37564b74cd51d8a0793d513fe3afe6d93accae5ae5\": container with ID starting with 9c679759e568016eac462a37564b74cd51d8a0793d513fe3afe6d93accae5ae5 not found: ID does not exist" containerID="9c679759e568016eac462a37564b74cd51d8a0793d513fe3afe6d93accae5ae5"
Jan 30 13:10:23 crc kubenswrapper[5039]: I0130 13:10:23.345703 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c679759e568016eac462a37564b74cd51d8a0793d513fe3afe6d93accae5ae5"} err="failed to get container status \"9c679759e568016eac462a37564b74cd51d8a0793d513fe3afe6d93accae5ae5\": rpc error: code = NotFound desc = could not find container \"9c679759e568016eac462a37564b74cd51d8a0793d513fe3afe6d93accae5ae5\": container with ID starting with 9c679759e568016eac462a37564b74cd51d8a0793d513fe3afe6d93accae5ae5 not found: ID does not exist"
Jan 30 13:10:23 crc kubenswrapper[5039]: I0130 13:10:23.345714 5039 scope.go:117] "RemoveContainer" containerID="6deb1868933725c903e241c094f22977dd24c36c2ae7469289e056277a404396"
Jan 30 13:10:23 crc kubenswrapper[5039]: E0130 13:10:23.345968 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6deb1868933725c903e241c094f22977dd24c36c2ae7469289e056277a404396\": container with ID starting with 6deb1868933725c903e241c094f22977dd24c36c2ae7469289e056277a404396 not found: ID does not exist" containerID="6deb1868933725c903e241c094f22977dd24c36c2ae7469289e056277a404396"
Jan 30 13:10:23 crc kubenswrapper[5039]: I0130 13:10:23.346196 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6deb1868933725c903e241c094f22977dd24c36c2ae7469289e056277a404396"} err="failed to get container status \"6deb1868933725c903e241c094f22977dd24c36c2ae7469289e056277a404396\": rpc error: code = NotFound desc = could not find container \"6deb1868933725c903e241c094f22977dd24c36c2ae7469289e056277a404396\": container with ID starting with 6deb1868933725c903e241c094f22977dd24c36c2ae7469289e056277a404396 not found: ID does not exist"
Jan 30 13:10:23 crc kubenswrapper[5039]: I0130 13:10:23.472326 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" podUID="9716b1fb-f7e1-4fcc-87f5-3e75cb02804c" containerName="oauth-openshift" containerID="cri-o://c2cbd999b24ced511ffce32f502fc20383596cd8e550167b572fbdd97010f6ee" gracePeriod=15
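Note the different budgets on the "Killing container with a grace period" entries in this capture: 30s for route-controller-manager, 2s for the marketplace registry servers, and 15s for oauth-openshift just above. The underlying pattern is SIGTERM, wait up to the grace period, then SIGKILL; a stdlib sketch of that pattern only, with an illustrative child process and timing:

package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// killWithGrace sends SIGTERM, waits up to gracePeriod for the process to
// exit, and falls back to SIGKILL if it does not.
func killWithGrace(cmd *exec.Cmd, gracePeriod time.Duration) {
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	_ = cmd.Process.Signal(syscall.SIGTERM)
	select {
	case <-done:
		fmt.Println("exited within grace period")
	case <-time.After(gracePeriod):
		_ = cmd.Process.Kill()
		<-done
		fmt.Println("grace period expired; killed")
	}
}

func main() {
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	killWithGrace(cmd, 2*time.Second) // same budget as the registry servers
}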
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.000165 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwrxb\" (UniqueName: \"kubernetes.io/projected/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-kube-api-access-dwrxb\") pod \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.000202 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-service-ca\") pod \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.000229 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-cliconfig\") pod \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.000277 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-ocp-branding-template\") pod \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.000329 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-serving-cert\") pod \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.000363 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-audit-policies\") pod \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.000379 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-router-certs\") pod \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.000396 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-user-template-provider-selection\") pod \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.000426 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-trusted-ca-bundle\") pod \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\" (UID: 
\"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.000450 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-session\") pod \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.000485 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-user-template-login\") pod \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.000502 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-user-template-error\") pod \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.000533 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-user-idp-0-file-data\") pod \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.000558 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-audit-dir\") pod \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.000782 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "9716b1fb-f7e1-4fcc-87f5-3e75cb02804c" (UID: "9716b1fb-f7e1-4fcc-87f5-3e75cb02804c"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.001525 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "9716b1fb-f7e1-4fcc-87f5-3e75cb02804c" (UID: "9716b1fb-f7e1-4fcc-87f5-3e75cb02804c"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.002080 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "9716b1fb-f7e1-4fcc-87f5-3e75cb02804c" (UID: "9716b1fb-f7e1-4fcc-87f5-3e75cb02804c"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.002201 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "9716b1fb-f7e1-4fcc-87f5-3e75cb02804c" (UID: "9716b1fb-f7e1-4fcc-87f5-3e75cb02804c"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.003707 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "9716b1fb-f7e1-4fcc-87f5-3e75cb02804c" (UID: "9716b1fb-f7e1-4fcc-87f5-3e75cb02804c"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.005533 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "9716b1fb-f7e1-4fcc-87f5-3e75cb02804c" (UID: "9716b1fb-f7e1-4fcc-87f5-3e75cb02804c"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.006110 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "9716b1fb-f7e1-4fcc-87f5-3e75cb02804c" (UID: "9716b1fb-f7e1-4fcc-87f5-3e75cb02804c"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.007626 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-kube-api-access-dwrxb" (OuterVolumeSpecName: "kube-api-access-dwrxb") pod "9716b1fb-f7e1-4fcc-87f5-3e75cb02804c" (UID: "9716b1fb-f7e1-4fcc-87f5-3e75cb02804c"). InnerVolumeSpecName "kube-api-access-dwrxb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.011237 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "9716b1fb-f7e1-4fcc-87f5-3e75cb02804c" (UID: "9716b1fb-f7e1-4fcc-87f5-3e75cb02804c"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.011809 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "9716b1fb-f7e1-4fcc-87f5-3e75cb02804c" (UID: "9716b1fb-f7e1-4fcc-87f5-3e75cb02804c"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.013397 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "9716b1fb-f7e1-4fcc-87f5-3e75cb02804c" (UID: "9716b1fb-f7e1-4fcc-87f5-3e75cb02804c"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.014300 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "9716b1fb-f7e1-4fcc-87f5-3e75cb02804c" (UID: "9716b1fb-f7e1-4fcc-87f5-3e75cb02804c"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.014592 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "9716b1fb-f7e1-4fcc-87f5-3e75cb02804c" (UID: "9716b1fb-f7e1-4fcc-87f5-3e75cb02804c"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.014563 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "9716b1fb-f7e1-4fcc-87f5-3e75cb02804c" (UID: "9716b1fb-f7e1-4fcc-87f5-3e75cb02804c"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.054254 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gqxts"] Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.054513 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gqxts" podUID="63af1747-5ca2-4c06-89fa-dc040184452d" containerName="registry-server" containerID="cri-o://9d0dd436417343fb53625a183289a9062cac913e3a04651ac778a049490524e4" gracePeriod=2 Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.099239 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52b110b9-c1bb-4f99-b0a1-56327188c912" path="/var/lib/kubelet/pods/52b110b9-c1bb-4f99-b0a1-56327188c912/volumes" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.101733 5039 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.101758 5039 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.101768 5039 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.101778 5039 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.101788 5039 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.101798 5039 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.101808 5039 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.101817 5039 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.101826 5039 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" 
Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.101835 5039 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.101843 5039 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.101851 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwrxb\" (UniqueName: \"kubernetes.io/projected/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-kube-api-access-dwrxb\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.101860 5039 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.101868 5039 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.305173 5039 generic.go:334] "Generic (PLEG): container finished" podID="63af1747-5ca2-4c06-89fa-dc040184452d" containerID="9d0dd436417343fb53625a183289a9062cac913e3a04651ac778a049490524e4" exitCode=0 Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.305268 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gqxts" event={"ID":"63af1747-5ca2-4c06-89fa-dc040184452d","Type":"ContainerDied","Data":"9d0dd436417343fb53625a183289a9062cac913e3a04651ac778a049490524e4"} Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.307634 5039 generic.go:334] "Generic (PLEG): container finished" podID="9716b1fb-f7e1-4fcc-87f5-3e75cb02804c" containerID="c2cbd999b24ced511ffce32f502fc20383596cd8e550167b572fbdd97010f6ee" exitCode=0 Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.307670 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" event={"ID":"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c","Type":"ContainerDied","Data":"c2cbd999b24ced511ffce32f502fc20383596cd8e550167b572fbdd97010f6ee"} Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.307701 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" event={"ID":"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c","Type":"ContainerDied","Data":"e2afa0a2122744e43a1ab27f9f99ea5bdc1264cbcce5d645fcf461f726c8d4ff"} Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.307715 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.307720 5039 scope.go:117] "RemoveContainer" containerID="c2cbd999b24ced511ffce32f502fc20383596cd8e550167b572fbdd97010f6ee" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.331128 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fmcqb"] Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.337286 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fmcqb"] Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.344915 5039 scope.go:117] "RemoveContainer" containerID="c2cbd999b24ced511ffce32f502fc20383596cd8e550167b572fbdd97010f6ee" Jan 30 13:10:24 crc kubenswrapper[5039]: E0130 13:10:24.345355 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2cbd999b24ced511ffce32f502fc20383596cd8e550167b572fbdd97010f6ee\": container with ID starting with c2cbd999b24ced511ffce32f502fc20383596cd8e550167b572fbdd97010f6ee not found: ID does not exist" containerID="c2cbd999b24ced511ffce32f502fc20383596cd8e550167b572fbdd97010f6ee" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.345571 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2cbd999b24ced511ffce32f502fc20383596cd8e550167b572fbdd97010f6ee"} err="failed to get container status \"c2cbd999b24ced511ffce32f502fc20383596cd8e550167b572fbdd97010f6ee\": rpc error: code = NotFound desc = could not find container \"c2cbd999b24ced511ffce32f502fc20383596cd8e550167b572fbdd97010f6ee\": container with ID starting with c2cbd999b24ced511ffce32f502fc20383596cd8e550167b572fbdd97010f6ee not found: ID does not exist" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.482476 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gqxts" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.607902 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63af1747-5ca2-4c06-89fa-dc040184452d-catalog-content\") pod \"63af1747-5ca2-4c06-89fa-dc040184452d\" (UID: \"63af1747-5ca2-4c06-89fa-dc040184452d\") " Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.607977 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63af1747-5ca2-4c06-89fa-dc040184452d-utilities\") pod \"63af1747-5ca2-4c06-89fa-dc040184452d\" (UID: \"63af1747-5ca2-4c06-89fa-dc040184452d\") " Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.608040 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlntp\" (UniqueName: \"kubernetes.io/projected/63af1747-5ca2-4c06-89fa-dc040184452d-kube-api-access-nlntp\") pod \"63af1747-5ca2-4c06-89fa-dc040184452d\" (UID: \"63af1747-5ca2-4c06-89fa-dc040184452d\") " Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.609855 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63af1747-5ca2-4c06-89fa-dc040184452d-utilities" (OuterVolumeSpecName: "utilities") pod "63af1747-5ca2-4c06-89fa-dc040184452d" (UID: "63af1747-5ca2-4c06-89fa-dc040184452d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.613435 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63af1747-5ca2-4c06-89fa-dc040184452d-kube-api-access-nlntp" (OuterVolumeSpecName: "kube-api-access-nlntp") pod "63af1747-5ca2-4c06-89fa-dc040184452d" (UID: "63af1747-5ca2-4c06-89fa-dc040184452d"). InnerVolumeSpecName "kube-api-access-nlntp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.654031 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-759rj"] Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.654361 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-759rj" podUID="80cb63fe-71b1-42e7-ac04-a81c89920b46" containerName="registry-server" containerID="cri-o://67680d5ed17f8118a174f5d6e2c193a9b4df4a3b5d7a28b8daa35ba5b19fb9a4" gracePeriod=2 Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.667155 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63af1747-5ca2-4c06-89fa-dc040184452d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "63af1747-5ca2-4c06-89fa-dc040184452d" (UID: "63af1747-5ca2-4c06-89fa-dc040184452d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.709487 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63af1747-5ca2-4c06-89fa-dc040184452d-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.709525 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlntp\" (UniqueName: \"kubernetes.io/projected/63af1747-5ca2-4c06-89fa-dc040184452d-kube-api-access-nlntp\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.709537 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63af1747-5ca2-4c06-89fa-dc040184452d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:25 crc kubenswrapper[5039]: I0130 13:10:25.062742 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-759rj" Jan 30 13:10:25 crc kubenswrapper[5039]: I0130 13:10:25.129997 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2692s\" (UniqueName: \"kubernetes.io/projected/80cb63fe-71b1-42e7-ac04-a81c89920b46-kube-api-access-2692s\") pod \"80cb63fe-71b1-42e7-ac04-a81c89920b46\" (UID: \"80cb63fe-71b1-42e7-ac04-a81c89920b46\") " Jan 30 13:10:25 crc kubenswrapper[5039]: I0130 13:10:25.130343 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80cb63fe-71b1-42e7-ac04-a81c89920b46-catalog-content\") pod \"80cb63fe-71b1-42e7-ac04-a81c89920b46\" (UID: \"80cb63fe-71b1-42e7-ac04-a81c89920b46\") " Jan 30 13:10:25 crc kubenswrapper[5039]: I0130 13:10:25.130429 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80cb63fe-71b1-42e7-ac04-a81c89920b46-utilities\") pod \"80cb63fe-71b1-42e7-ac04-a81c89920b46\" (UID: \"80cb63fe-71b1-42e7-ac04-a81c89920b46\") " Jan 30 13:10:25 crc kubenswrapper[5039]: I0130 13:10:25.132492 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80cb63fe-71b1-42e7-ac04-a81c89920b46-utilities" (OuterVolumeSpecName: "utilities") pod "80cb63fe-71b1-42e7-ac04-a81c89920b46" (UID: "80cb63fe-71b1-42e7-ac04-a81c89920b46"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:10:25 crc kubenswrapper[5039]: I0130 13:10:25.132918 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80cb63fe-71b1-42e7-ac04-a81c89920b46-kube-api-access-2692s" (OuterVolumeSpecName: "kube-api-access-2692s") pod "80cb63fe-71b1-42e7-ac04-a81c89920b46" (UID: "80cb63fe-71b1-42e7-ac04-a81c89920b46"). InnerVolumeSpecName "kube-api-access-2692s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:10:25 crc kubenswrapper[5039]: I0130 13:10:25.177428 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80cb63fe-71b1-42e7-ac04-a81c89920b46-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "80cb63fe-71b1-42e7-ac04-a81c89920b46" (UID: "80cb63fe-71b1-42e7-ac04-a81c89920b46"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:10:25 crc kubenswrapper[5039]: I0130 13:10:25.231761 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80cb63fe-71b1-42e7-ac04-a81c89920b46-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:25 crc kubenswrapper[5039]: I0130 13:10:25.231832 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80cb63fe-71b1-42e7-ac04-a81c89920b46-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:25 crc kubenswrapper[5039]: I0130 13:10:25.231859 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2692s\" (UniqueName: \"kubernetes.io/projected/80cb63fe-71b1-42e7-ac04-a81c89920b46-kube-api-access-2692s\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:25 crc kubenswrapper[5039]: I0130 13:10:25.315432 5039 generic.go:334] "Generic (PLEG): container finished" podID="80cb63fe-71b1-42e7-ac04-a81c89920b46" containerID="67680d5ed17f8118a174f5d6e2c193a9b4df4a3b5d7a28b8daa35ba5b19fb9a4" exitCode=0 Jan 30 13:10:25 crc kubenswrapper[5039]: I0130 13:10:25.315495 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-759rj" event={"ID":"80cb63fe-71b1-42e7-ac04-a81c89920b46","Type":"ContainerDied","Data":"67680d5ed17f8118a174f5d6e2c193a9b4df4a3b5d7a28b8daa35ba5b19fb9a4"} Jan 30 13:10:25 crc kubenswrapper[5039]: I0130 13:10:25.315522 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-759rj" Jan 30 13:10:25 crc kubenswrapper[5039]: I0130 13:10:25.315563 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-759rj" event={"ID":"80cb63fe-71b1-42e7-ac04-a81c89920b46","Type":"ContainerDied","Data":"90c64b07023f646350f17195d3f4849d52b2111fa319dd68d741c4086232a39d"} Jan 30 13:10:25 crc kubenswrapper[5039]: I0130 13:10:25.315605 5039 scope.go:117] "RemoveContainer" containerID="67680d5ed17f8118a174f5d6e2c193a9b4df4a3b5d7a28b8daa35ba5b19fb9a4" Jan 30 13:10:25 crc kubenswrapper[5039]: I0130 13:10:25.320973 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gqxts" event={"ID":"63af1747-5ca2-4c06-89fa-dc040184452d","Type":"ContainerDied","Data":"be08fa685d76497eb315f3a8d2c5668e3a0f71216650a0d40499e797ce0c0201"} Jan 30 13:10:25 crc kubenswrapper[5039]: I0130 13:10:25.321081 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gqxts" Jan 30 13:10:25 crc kubenswrapper[5039]: I0130 13:10:25.346668 5039 scope.go:117] "RemoveContainer" containerID="71e967d6ddae04f5b96a882c080f0d743adabe6a944a00ee5d11ad19c57421fd" Jan 30 13:10:25 crc kubenswrapper[5039]: I0130 13:10:25.356059 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-759rj"] Jan 30 13:10:25 crc kubenswrapper[5039]: I0130 13:10:25.369422 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-759rj"] Jan 30 13:10:25 crc kubenswrapper[5039]: I0130 13:10:25.374279 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gqxts"] Jan 30 13:10:25 crc kubenswrapper[5039]: I0130 13:10:25.377257 5039 scope.go:117] "RemoveContainer" containerID="f1d45b76a5b67ccfa917a8b401f244e595e4b7f91f2fe244b19d4b28ec51ede2" Jan 30 13:10:25 crc kubenswrapper[5039]: I0130 13:10:25.377886 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gqxts"] Jan 30 13:10:25 crc kubenswrapper[5039]: I0130 13:10:25.405333 5039 scope.go:117] "RemoveContainer" containerID="67680d5ed17f8118a174f5d6e2c193a9b4df4a3b5d7a28b8daa35ba5b19fb9a4" Jan 30 13:10:25 crc kubenswrapper[5039]: E0130 13:10:25.406151 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67680d5ed17f8118a174f5d6e2c193a9b4df4a3b5d7a28b8daa35ba5b19fb9a4\": container with ID starting with 67680d5ed17f8118a174f5d6e2c193a9b4df4a3b5d7a28b8daa35ba5b19fb9a4 not found: ID does not exist" containerID="67680d5ed17f8118a174f5d6e2c193a9b4df4a3b5d7a28b8daa35ba5b19fb9a4" Jan 30 13:10:25 crc kubenswrapper[5039]: I0130 13:10:25.406219 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67680d5ed17f8118a174f5d6e2c193a9b4df4a3b5d7a28b8daa35ba5b19fb9a4"} err="failed to get container status \"67680d5ed17f8118a174f5d6e2c193a9b4df4a3b5d7a28b8daa35ba5b19fb9a4\": rpc error: code = NotFound desc = could not find container \"67680d5ed17f8118a174f5d6e2c193a9b4df4a3b5d7a28b8daa35ba5b19fb9a4\": container with ID starting with 67680d5ed17f8118a174f5d6e2c193a9b4df4a3b5d7a28b8daa35ba5b19fb9a4 not found: ID does not exist" Jan 30 13:10:25 crc kubenswrapper[5039]: I0130 13:10:25.406272 5039 scope.go:117] "RemoveContainer" containerID="71e967d6ddae04f5b96a882c080f0d743adabe6a944a00ee5d11ad19c57421fd" Jan 30 13:10:25 crc kubenswrapper[5039]: E0130 13:10:25.406659 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71e967d6ddae04f5b96a882c080f0d743adabe6a944a00ee5d11ad19c57421fd\": container with ID starting with 71e967d6ddae04f5b96a882c080f0d743adabe6a944a00ee5d11ad19c57421fd not found: ID does not exist" containerID="71e967d6ddae04f5b96a882c080f0d743adabe6a944a00ee5d11ad19c57421fd" Jan 30 13:10:25 crc kubenswrapper[5039]: I0130 13:10:25.406698 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71e967d6ddae04f5b96a882c080f0d743adabe6a944a00ee5d11ad19c57421fd"} err="failed to get container status \"71e967d6ddae04f5b96a882c080f0d743adabe6a944a00ee5d11ad19c57421fd\": rpc error: code = NotFound desc = could not find container \"71e967d6ddae04f5b96a882c080f0d743adabe6a944a00ee5d11ad19c57421fd\": container with ID starting with 
71e967d6ddae04f5b96a882c080f0d743adabe6a944a00ee5d11ad19c57421fd not found: ID does not exist" Jan 30 13:10:25 crc kubenswrapper[5039]: I0130 13:10:25.406724 5039 scope.go:117] "RemoveContainer" containerID="f1d45b76a5b67ccfa917a8b401f244e595e4b7f91f2fe244b19d4b28ec51ede2" Jan 30 13:10:25 crc kubenswrapper[5039]: E0130 13:10:25.407266 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1d45b76a5b67ccfa917a8b401f244e595e4b7f91f2fe244b19d4b28ec51ede2\": container with ID starting with f1d45b76a5b67ccfa917a8b401f244e595e4b7f91f2fe244b19d4b28ec51ede2 not found: ID does not exist" containerID="f1d45b76a5b67ccfa917a8b401f244e595e4b7f91f2fe244b19d4b28ec51ede2" Jan 30 13:10:25 crc kubenswrapper[5039]: I0130 13:10:25.407308 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1d45b76a5b67ccfa917a8b401f244e595e4b7f91f2fe244b19d4b28ec51ede2"} err="failed to get container status \"f1d45b76a5b67ccfa917a8b401f244e595e4b7f91f2fe244b19d4b28ec51ede2\": rpc error: code = NotFound desc = could not find container \"f1d45b76a5b67ccfa917a8b401f244e595e4b7f91f2fe244b19d4b28ec51ede2\": container with ID starting with f1d45b76a5b67ccfa917a8b401f244e595e4b7f91f2fe244b19d4b28ec51ede2 not found: ID does not exist" Jan 30 13:10:25 crc kubenswrapper[5039]: I0130 13:10:25.407328 5039 scope.go:117] "RemoveContainer" containerID="9d0dd436417343fb53625a183289a9062cac913e3a04651ac778a049490524e4" Jan 30 13:10:25 crc kubenswrapper[5039]: I0130 13:10:25.423043 5039 scope.go:117] "RemoveContainer" containerID="a20937b28e536e2a3471ddd615a7a6213398aaf944dd98ce3a21c2812cda94e5" Jan 30 13:10:25 crc kubenswrapper[5039]: I0130 13:10:25.438811 5039 scope.go:117] "RemoveContainer" containerID="4de2d19fcdb985976edce2b77ff1023b7408e7f584c35702381dc5a2d6ef1e6e" Jan 30 13:10:26 crc kubenswrapper[5039]: I0130 13:10:26.115549 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63af1747-5ca2-4c06-89fa-dc040184452d" path="/var/lib/kubelet/pods/63af1747-5ca2-4c06-89fa-dc040184452d/volumes" Jan 30 13:10:26 crc kubenswrapper[5039]: I0130 13:10:26.117190 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80cb63fe-71b1-42e7-ac04-a81c89920b46" path="/var/lib/kubelet/pods/80cb63fe-71b1-42e7-ac04-a81c89920b46/volumes" Jan 30 13:10:26 crc kubenswrapper[5039]: I0130 13:10:26.117859 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9716b1fb-f7e1-4fcc-87f5-3e75cb02804c" path="/var/lib/kubelet/pods/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c/volumes" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.666460 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6c6c768fc7-pptll"] Jan 30 13:10:27 crc kubenswrapper[5039]: E0130 13:10:27.667316 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52b110b9-c1bb-4f99-b0a1-56327188c912" containerName="registry-server" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.667352 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="52b110b9-c1bb-4f99-b0a1-56327188c912" containerName="registry-server" Jan 30 13:10:27 crc kubenswrapper[5039]: E0130 13:10:27.667376 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9716b1fb-f7e1-4fcc-87f5-3e75cb02804c" containerName="oauth-openshift" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.667393 5039 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="9716b1fb-f7e1-4fcc-87f5-3e75cb02804c" containerName="oauth-openshift" Jan 30 13:10:27 crc kubenswrapper[5039]: E0130 13:10:27.667418 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63af1747-5ca2-4c06-89fa-dc040184452d" containerName="registry-server" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.667436 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="63af1747-5ca2-4c06-89fa-dc040184452d" containerName="registry-server" Jan 30 13:10:27 crc kubenswrapper[5039]: E0130 13:10:27.667462 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63af1747-5ca2-4c06-89fa-dc040184452d" containerName="extract-utilities" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.667476 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="63af1747-5ca2-4c06-89fa-dc040184452d" containerName="extract-utilities" Jan 30 13:10:27 crc kubenswrapper[5039]: E0130 13:10:27.667495 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52b110b9-c1bb-4f99-b0a1-56327188c912" containerName="extract-utilities" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.667507 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="52b110b9-c1bb-4f99-b0a1-56327188c912" containerName="extract-utilities" Jan 30 13:10:27 crc kubenswrapper[5039]: E0130 13:10:27.667532 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80cb63fe-71b1-42e7-ac04-a81c89920b46" containerName="extract-utilities" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.667544 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="80cb63fe-71b1-42e7-ac04-a81c89920b46" containerName="extract-utilities" Jan 30 13:10:27 crc kubenswrapper[5039]: E0130 13:10:27.667567 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52b110b9-c1bb-4f99-b0a1-56327188c912" containerName="extract-content" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.667579 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="52b110b9-c1bb-4f99-b0a1-56327188c912" containerName="extract-content" Jan 30 13:10:27 crc kubenswrapper[5039]: E0130 13:10:27.667602 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80cb63fe-71b1-42e7-ac04-a81c89920b46" containerName="extract-content" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.667615 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="80cb63fe-71b1-42e7-ac04-a81c89920b46" containerName="extract-content" Jan 30 13:10:27 crc kubenswrapper[5039]: E0130 13:10:27.667637 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80cb63fe-71b1-42e7-ac04-a81c89920b46" containerName="registry-server" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.667652 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="80cb63fe-71b1-42e7-ac04-a81c89920b46" containerName="registry-server" Jan 30 13:10:27 crc kubenswrapper[5039]: E0130 13:10:27.667673 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63af1747-5ca2-4c06-89fa-dc040184452d" containerName="extract-content" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.667686 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="63af1747-5ca2-4c06-89fa-dc040184452d" containerName="extract-content" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.667862 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="52b110b9-c1bb-4f99-b0a1-56327188c912" containerName="registry-server" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.667891 5039 
memory_manager.go:354] "RemoveStaleState removing state" podUID="9716b1fb-f7e1-4fcc-87f5-3e75cb02804c" containerName="oauth-openshift" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.667918 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="63af1747-5ca2-4c06-89fa-dc040184452d" containerName="registry-server" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.667941 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="80cb63fe-71b1-42e7-ac04-a81c89920b46" containerName="registry-server" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.669062 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.671416 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.681576 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.681643 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.681687 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.681898 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.682069 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.682113 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6c6c768fc7-pptll"] Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.682365 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.682366 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.683796 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.684038 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.684175 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.687573 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.687911 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.693418 5039 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.707128 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.760977 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.761052 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-v4-0-config-user-template-error\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.761085 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-audit-policies\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.761129 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-v4-0-config-system-session\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.761174 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-audit-dir\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.761202 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8426j\" (UniqueName: \"kubernetes.io/projected/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-kube-api-access-8426j\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.761229 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.761263 5039 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-v4-0-config-user-template-login\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.761331 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.761376 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-v4-0-config-system-service-ca\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.761407 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.761436 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.761462 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.761486 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-v4-0-config-system-router-certs\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.862583 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-audit-dir\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " 
pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.862648 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.862683 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8426j\" (UniqueName: \"kubernetes.io/projected/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-kube-api-access-8426j\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.862706 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-audit-dir\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.862722 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-v4-0-config-user-template-login\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.862817 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.862896 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-v4-0-config-system-service-ca\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.862935 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.862951 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " 
pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.862979 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.862998 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-v4-0-config-system-router-certs\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.863082 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.863120 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-v4-0-config-user-template-error\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.863140 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-audit-policies\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.863194 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-v4-0-config-system-session\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.864695 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.864727 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-audit-policies\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: 
I0130 13:10:27.865620 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-v4-0-config-system-service-ca\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.865706 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.867659 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-v4-0-config-user-template-error\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.867777 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.868097 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-v4-0-config-user-template-login\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.868218 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-v4-0-config-system-router-certs\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.869207 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.869661 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 
13:10:27.869677 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-v4-0-config-system-session\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.871098 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.880588 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8426j\" (UniqueName: \"kubernetes.io/projected/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-kube-api-access-8426j\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:28 crc kubenswrapper[5039]: I0130 13:10:28.007781 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:28 crc kubenswrapper[5039]: I0130 13:10:28.422386 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6c6c768fc7-pptll"] Jan 30 13:10:29 crc kubenswrapper[5039]: I0130 13:10:29.349711 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" event={"ID":"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8","Type":"ContainerStarted","Data":"b6a29c912b0e8679bf68f92878cdf33075ebae67cf41677825be2cfbf768d829"} Jan 30 13:10:29 crc kubenswrapper[5039]: I0130 13:10:29.350124 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:29 crc kubenswrapper[5039]: I0130 13:10:29.350149 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" event={"ID":"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8","Type":"ContainerStarted","Data":"f042b2bf7554f09f52ce9329440ce62040aa97317fa4335f13bfab16f90c46f9"} Jan 30 13:10:29 crc kubenswrapper[5039]: I0130 13:10:29.358267 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:29 crc kubenswrapper[5039]: I0130 13:10:29.379189 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" podStartSLOduration=31.379129478 podStartE2EDuration="31.379129478s" podCreationTimestamp="2026-01-30 13:09:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:10:29.374171181 +0000 UTC m=+394.034852428" watchObservedRunningTime="2026-01-30 13:10:29.379129478 +0000 UTC m=+394.039810715" Jan 30 13:10:37 crc kubenswrapper[5039]: I0130 13:10:37.742917 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:10:37 crc kubenswrapper[5039]: I0130 13:10:37.743643 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:10:52 crc kubenswrapper[5039]: I0130 13:10:52.196371 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s5lrd"] Jan 30 13:10:52 crc kubenswrapper[5039]: I0130 13:10:52.198466 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-s5lrd" podUID="5613a050-2fc6-4554-bebe-a8afa71c3815" containerName="registry-server" containerID="cri-o://e73e09cc2f1843b84342b3f32649f363cde33cd5ff49fddd8214ccdf09009a1b" gracePeriod=30 Jan 30 13:10:52 crc kubenswrapper[5039]: I0130 13:10:52.204940 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wksws"] Jan 30 13:10:52 crc kubenswrapper[5039]: I0130 13:10:52.207801 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wksws" podUID="f64e1921-5488-46f8-bf3a-af141cd0c277" containerName="registry-server" containerID="cri-o://39abc4a636510ae2734a282ba54cf242c90facdaa073b423320aaedcef8f5771" gracePeriod=30 Jan 30 13:10:52 crc kubenswrapper[5039]: I0130 13:10:52.216824 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gp9qj"] Jan 30 13:10:52 crc kubenswrapper[5039]: I0130 13:10:52.217064 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-gp9qj" podUID="501d1ad0-71ea-4bef-8c89-8a68f523e6ec" containerName="marketplace-operator" containerID="cri-o://f9dafde4e921fdba2409668a3afa536a950b7ce53b96f55d6569f191b9b697ed" gracePeriod=30 Jan 30 13:10:52 crc kubenswrapper[5039]: I0130 13:10:52.232722 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ccjvb"] Jan 30 13:10:52 crc kubenswrapper[5039]: I0130 13:10:52.232976 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ccjvb" podUID="66476d2f-ef08-4051-97a8-c2edb46b7004" containerName="registry-server" containerID="cri-o://5ce6a578f8f1cdbcba7daff7b0d7d01a08062ea9ddeead9f73f5f06efc5ddbfe" gracePeriod=30 Jan 30 13:10:52 crc kubenswrapper[5039]: I0130 13:10:52.248671 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gx2hg"] Jan 30 13:10:52 crc kubenswrapper[5039]: I0130 13:10:52.248995 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gx2hg" podUID="c79ca838-03cc-4885-969d-5aad41173112" containerName="registry-server" containerID="cri-o://f15f3bb95694a0780aff11c21de0b08521ee9ef476a832532057da09f9c8ec4b" gracePeriod=30 Jan 30 13:10:52 crc kubenswrapper[5039]: I0130 13:10:52.252779 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jfw2h"] Jan 30 13:10:52 crc kubenswrapper[5039]: I0130 13:10:52.253575 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-jfw2h" Jan 30 13:10:52 crc kubenswrapper[5039]: I0130 13:10:52.258209 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jfw2h"] Jan 30 13:10:52 crc kubenswrapper[5039]: I0130 13:10:52.309904 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqlt4\" (UniqueName: \"kubernetes.io/projected/76c852b6-fbf0-493f-b157-06882e5f306f-kube-api-access-nqlt4\") pod \"marketplace-operator-79b997595-jfw2h\" (UID: \"76c852b6-fbf0-493f-b157-06882e5f306f\") " pod="openshift-marketplace/marketplace-operator-79b997595-jfw2h" Jan 30 13:10:52 crc kubenswrapper[5039]: I0130 13:10:52.309967 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/76c852b6-fbf0-493f-b157-06882e5f306f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-jfw2h\" (UID: \"76c852b6-fbf0-493f-b157-06882e5f306f\") " pod="openshift-marketplace/marketplace-operator-79b997595-jfw2h" Jan 30 13:10:52 crc kubenswrapper[5039]: I0130 13:10:52.310000 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/76c852b6-fbf0-493f-b157-06882e5f306f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-jfw2h\" (UID: \"76c852b6-fbf0-493f-b157-06882e5f306f\") " pod="openshift-marketplace/marketplace-operator-79b997595-jfw2h" Jan 30 13:10:52 crc kubenswrapper[5039]: I0130 13:10:52.410693 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/76c852b6-fbf0-493f-b157-06882e5f306f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-jfw2h\" (UID: \"76c852b6-fbf0-493f-b157-06882e5f306f\") " pod="openshift-marketplace/marketplace-operator-79b997595-jfw2h" Jan 30 13:10:52 crc kubenswrapper[5039]: I0130 13:10:52.410766 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/76c852b6-fbf0-493f-b157-06882e5f306f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-jfw2h\" (UID: \"76c852b6-fbf0-493f-b157-06882e5f306f\") " pod="openshift-marketplace/marketplace-operator-79b997595-jfw2h" Jan 30 13:10:52 crc kubenswrapper[5039]: I0130 13:10:52.410830 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqlt4\" (UniqueName: \"kubernetes.io/projected/76c852b6-fbf0-493f-b157-06882e5f306f-kube-api-access-nqlt4\") pod \"marketplace-operator-79b997595-jfw2h\" (UID: \"76c852b6-fbf0-493f-b157-06882e5f306f\") " pod="openshift-marketplace/marketplace-operator-79b997595-jfw2h" Jan 30 13:10:52 crc kubenswrapper[5039]: I0130 13:10:52.412410 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/76c852b6-fbf0-493f-b157-06882e5f306f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-jfw2h\" (UID: \"76c852b6-fbf0-493f-b157-06882e5f306f\") " pod="openshift-marketplace/marketplace-operator-79b997595-jfw2h" Jan 30 13:10:52 crc kubenswrapper[5039]: I0130 13:10:52.417566 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/76c852b6-fbf0-493f-b157-06882e5f306f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-jfw2h\" (UID: \"76c852b6-fbf0-493f-b157-06882e5f306f\") " pod="openshift-marketplace/marketplace-operator-79b997595-jfw2h" Jan 30 13:10:52 crc kubenswrapper[5039]: I0130 13:10:52.427371 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqlt4\" (UniqueName: \"kubernetes.io/projected/76c852b6-fbf0-493f-b157-06882e5f306f-kube-api-access-nqlt4\") pod \"marketplace-operator-79b997595-jfw2h\" (UID: \"76c852b6-fbf0-493f-b157-06882e5f306f\") " pod="openshift-marketplace/marketplace-operator-79b997595-jfw2h" Jan 30 13:10:52 crc kubenswrapper[5039]: I0130 13:10:52.710072 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-jfw2h" Jan 30 13:10:52 crc kubenswrapper[5039]: I0130 13:10:52.742303 5039 generic.go:334] "Generic (PLEG): container finished" podID="501d1ad0-71ea-4bef-8c89-8a68f523e6ec" containerID="f9dafde4e921fdba2409668a3afa536a950b7ce53b96f55d6569f191b9b697ed" exitCode=0 Jan 30 13:10:52 crc kubenswrapper[5039]: I0130 13:10:52.742396 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gp9qj" event={"ID":"501d1ad0-71ea-4bef-8c89-8a68f523e6ec","Type":"ContainerDied","Data":"f9dafde4e921fdba2409668a3afa536a950b7ce53b96f55d6569f191b9b697ed"} Jan 30 13:10:52 crc kubenswrapper[5039]: I0130 13:10:52.742662 5039 scope.go:117] "RemoveContainer" containerID="c5f8ce8c6ccde8cd3dd1fc817d67a48786ad0a9b3385ae6a7b6fef0349ef5d8c" Jan 30 13:10:52 crc kubenswrapper[5039]: I0130 13:10:52.745336 5039 generic.go:334] "Generic (PLEG): container finished" podID="5613a050-2fc6-4554-bebe-a8afa71c3815" containerID="e73e09cc2f1843b84342b3f32649f363cde33cd5ff49fddd8214ccdf09009a1b" exitCode=0 Jan 30 13:10:52 crc kubenswrapper[5039]: I0130 13:10:52.745420 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s5lrd" event={"ID":"5613a050-2fc6-4554-bebe-a8afa71c3815","Type":"ContainerDied","Data":"e73e09cc2f1843b84342b3f32649f363cde33cd5ff49fddd8214ccdf09009a1b"} Jan 30 13:10:52 crc kubenswrapper[5039]: I0130 13:10:52.747496 5039 generic.go:334] "Generic (PLEG): container finished" podID="66476d2f-ef08-4051-97a8-c2edb46b7004" containerID="5ce6a578f8f1cdbcba7daff7b0d7d01a08062ea9ddeead9f73f5f06efc5ddbfe" exitCode=0 Jan 30 13:10:52 crc kubenswrapper[5039]: I0130 13:10:52.747554 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ccjvb" event={"ID":"66476d2f-ef08-4051-97a8-c2edb46b7004","Type":"ContainerDied","Data":"5ce6a578f8f1cdbcba7daff7b0d7d01a08062ea9ddeead9f73f5f06efc5ddbfe"} Jan 30 13:10:52 crc kubenswrapper[5039]: I0130 13:10:52.749512 5039 generic.go:334] "Generic (PLEG): container finished" podID="f64e1921-5488-46f8-bf3a-af141cd0c277" containerID="39abc4a636510ae2734a282ba54cf242c90facdaa073b423320aaedcef8f5771" exitCode=0 Jan 30 13:10:52 crc kubenswrapper[5039]: I0130 13:10:52.749604 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wksws" event={"ID":"f64e1921-5488-46f8-bf3a-af141cd0c277","Type":"ContainerDied","Data":"39abc4a636510ae2734a282ba54cf242c90facdaa073b423320aaedcef8f5771"} Jan 30 13:10:52 crc kubenswrapper[5039]: I0130 13:10:52.751715 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gx2hg" 
event={"ID":"c79ca838-03cc-4885-969d-5aad41173112","Type":"ContainerDied","Data":"f15f3bb95694a0780aff11c21de0b08521ee9ef476a832532057da09f9c8ec4b"} Jan 30 13:10:52 crc kubenswrapper[5039]: I0130 13:10:52.751721 5039 generic.go:334] "Generic (PLEG): container finished" podID="c79ca838-03cc-4885-969d-5aad41173112" containerID="f15f3bb95694a0780aff11c21de0b08521ee9ef476a832532057da09f9c8ec4b" exitCode=0 Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.113434 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jfw2h"] Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.133694 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s5lrd" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.191291 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wksws" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.195732 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gp9qj" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.228734 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gx2hg" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.267564 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ccjvb" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.318038 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svlb7\" (UniqueName: \"kubernetes.io/projected/f64e1921-5488-46f8-bf3a-af141cd0c277-kube-api-access-svlb7\") pod \"f64e1921-5488-46f8-bf3a-af141cd0c277\" (UID: \"f64e1921-5488-46f8-bf3a-af141cd0c277\") " Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.318083 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzssd\" (UniqueName: \"kubernetes.io/projected/501d1ad0-71ea-4bef-8c89-8a68f523e6ec-kube-api-access-mzssd\") pod \"501d1ad0-71ea-4bef-8c89-8a68f523e6ec\" (UID: \"501d1ad0-71ea-4bef-8c89-8a68f523e6ec\") " Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.318139 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f64e1921-5488-46f8-bf3a-af141cd0c277-catalog-content\") pod \"f64e1921-5488-46f8-bf3a-af141cd0c277\" (UID: \"f64e1921-5488-46f8-bf3a-af141cd0c277\") " Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.318159 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7p26g\" (UniqueName: \"kubernetes.io/projected/5613a050-2fc6-4554-bebe-a8afa71c3815-kube-api-access-7p26g\") pod \"5613a050-2fc6-4554-bebe-a8afa71c3815\" (UID: \"5613a050-2fc6-4554-bebe-a8afa71c3815\") " Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.318193 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/501d1ad0-71ea-4bef-8c89-8a68f523e6ec-marketplace-operator-metrics\") pod \"501d1ad0-71ea-4bef-8c89-8a68f523e6ec\" (UID: \"501d1ad0-71ea-4bef-8c89-8a68f523e6ec\") " Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.318217 5039 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5613a050-2fc6-4554-bebe-a8afa71c3815-catalog-content\") pod \"5613a050-2fc6-4554-bebe-a8afa71c3815\" (UID: \"5613a050-2fc6-4554-bebe-a8afa71c3815\") " Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.318251 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f64e1921-5488-46f8-bf3a-af141cd0c277-utilities\") pod \"f64e1921-5488-46f8-bf3a-af141cd0c277\" (UID: \"f64e1921-5488-46f8-bf3a-af141cd0c277\") " Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.318280 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5613a050-2fc6-4554-bebe-a8afa71c3815-utilities\") pod \"5613a050-2fc6-4554-bebe-a8afa71c3815\" (UID: \"5613a050-2fc6-4554-bebe-a8afa71c3815\") " Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.318311 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/501d1ad0-71ea-4bef-8c89-8a68f523e6ec-marketplace-trusted-ca\") pod \"501d1ad0-71ea-4bef-8c89-8a68f523e6ec\" (UID: \"501d1ad0-71ea-4bef-8c89-8a68f523e6ec\") " Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.319038 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/501d1ad0-71ea-4bef-8c89-8a68f523e6ec-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "501d1ad0-71ea-4bef-8c89-8a68f523e6ec" (UID: "501d1ad0-71ea-4bef-8c89-8a68f523e6ec"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.324412 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f64e1921-5488-46f8-bf3a-af141cd0c277-kube-api-access-svlb7" (OuterVolumeSpecName: "kube-api-access-svlb7") pod "f64e1921-5488-46f8-bf3a-af141cd0c277" (UID: "f64e1921-5488-46f8-bf3a-af141cd0c277"). InnerVolumeSpecName "kube-api-access-svlb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.324826 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5613a050-2fc6-4554-bebe-a8afa71c3815-utilities" (OuterVolumeSpecName: "utilities") pod "5613a050-2fc6-4554-bebe-a8afa71c3815" (UID: "5613a050-2fc6-4554-bebe-a8afa71c3815"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.329164 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f64e1921-5488-46f8-bf3a-af141cd0c277-utilities" (OuterVolumeSpecName: "utilities") pod "f64e1921-5488-46f8-bf3a-af141cd0c277" (UID: "f64e1921-5488-46f8-bf3a-af141cd0c277"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.349293 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5613a050-2fc6-4554-bebe-a8afa71c3815-kube-api-access-7p26g" (OuterVolumeSpecName: "kube-api-access-7p26g") pod "5613a050-2fc6-4554-bebe-a8afa71c3815" (UID: "5613a050-2fc6-4554-bebe-a8afa71c3815"). InnerVolumeSpecName "kube-api-access-7p26g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.352368 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/501d1ad0-71ea-4bef-8c89-8a68f523e6ec-kube-api-access-mzssd" (OuterVolumeSpecName: "kube-api-access-mzssd") pod "501d1ad0-71ea-4bef-8c89-8a68f523e6ec" (UID: "501d1ad0-71ea-4bef-8c89-8a68f523e6ec"). InnerVolumeSpecName "kube-api-access-mzssd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.357467 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/501d1ad0-71ea-4bef-8c89-8a68f523e6ec-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "501d1ad0-71ea-4bef-8c89-8a68f523e6ec" (UID: "501d1ad0-71ea-4bef-8c89-8a68f523e6ec"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.399933 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f64e1921-5488-46f8-bf3a-af141cd0c277-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f64e1921-5488-46f8-bf3a-af141cd0c277" (UID: "f64e1921-5488-46f8-bf3a-af141cd0c277"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.402561 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5613a050-2fc6-4554-bebe-a8afa71c3815-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5613a050-2fc6-4554-bebe-a8afa71c3815" (UID: "5613a050-2fc6-4554-bebe-a8afa71c3815"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.419541 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mckmz\" (UniqueName: \"kubernetes.io/projected/c79ca838-03cc-4885-969d-5aad41173112-kube-api-access-mckmz\") pod \"c79ca838-03cc-4885-969d-5aad41173112\" (UID: \"c79ca838-03cc-4885-969d-5aad41173112\") " Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.419800 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66476d2f-ef08-4051-97a8-c2edb46b7004-utilities\") pod \"66476d2f-ef08-4051-97a8-c2edb46b7004\" (UID: \"66476d2f-ef08-4051-97a8-c2edb46b7004\") " Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.419947 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c79ca838-03cc-4885-969d-5aad41173112-catalog-content\") pod \"c79ca838-03cc-4885-969d-5aad41173112\" (UID: \"c79ca838-03cc-4885-969d-5aad41173112\") " Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.420068 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66476d2f-ef08-4051-97a8-c2edb46b7004-catalog-content\") pod \"66476d2f-ef08-4051-97a8-c2edb46b7004\" (UID: \"66476d2f-ef08-4051-97a8-c2edb46b7004\") " Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.420242 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5vr6\" (UniqueName: \"kubernetes.io/projected/66476d2f-ef08-4051-97a8-c2edb46b7004-kube-api-access-f5vr6\") pod \"66476d2f-ef08-4051-97a8-c2edb46b7004\" (UID: \"66476d2f-ef08-4051-97a8-c2edb46b7004\") " Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.420354 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c79ca838-03cc-4885-969d-5aad41173112-utilities\") pod \"c79ca838-03cc-4885-969d-5aad41173112\" (UID: \"c79ca838-03cc-4885-969d-5aad41173112\") " Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.420568 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66476d2f-ef08-4051-97a8-c2edb46b7004-utilities" (OuterVolumeSpecName: "utilities") pod "66476d2f-ef08-4051-97a8-c2edb46b7004" (UID: "66476d2f-ef08-4051-97a8-c2edb46b7004"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.420768 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svlb7\" (UniqueName: \"kubernetes.io/projected/f64e1921-5488-46f8-bf3a-af141cd0c277-kube-api-access-svlb7\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.420878 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzssd\" (UniqueName: \"kubernetes.io/projected/501d1ad0-71ea-4bef-8c89-8a68f523e6ec-kube-api-access-mzssd\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.420967 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f64e1921-5488-46f8-bf3a-af141cd0c277-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.421066 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7p26g\" (UniqueName: \"kubernetes.io/projected/5613a050-2fc6-4554-bebe-a8afa71c3815-kube-api-access-7p26g\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.421146 5039 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/501d1ad0-71ea-4bef-8c89-8a68f523e6ec-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.421242 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5613a050-2fc6-4554-bebe-a8afa71c3815-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.421328 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66476d2f-ef08-4051-97a8-c2edb46b7004-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.421412 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f64e1921-5488-46f8-bf3a-af141cd0c277-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.421494 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5613a050-2fc6-4554-bebe-a8afa71c3815-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.421579 5039 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/501d1ad0-71ea-4bef-8c89-8a68f523e6ec-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.421391 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c79ca838-03cc-4885-969d-5aad41173112-utilities" (OuterVolumeSpecName: "utilities") pod "c79ca838-03cc-4885-969d-5aad41173112" (UID: "c79ca838-03cc-4885-969d-5aad41173112"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.423907 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c79ca838-03cc-4885-969d-5aad41173112-kube-api-access-mckmz" (OuterVolumeSpecName: "kube-api-access-mckmz") pod "c79ca838-03cc-4885-969d-5aad41173112" (UID: "c79ca838-03cc-4885-969d-5aad41173112"). InnerVolumeSpecName "kube-api-access-mckmz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.424023 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66476d2f-ef08-4051-97a8-c2edb46b7004-kube-api-access-f5vr6" (OuterVolumeSpecName: "kube-api-access-f5vr6") pod "66476d2f-ef08-4051-97a8-c2edb46b7004" (UID: "66476d2f-ef08-4051-97a8-c2edb46b7004"). InnerVolumeSpecName "kube-api-access-f5vr6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.445617 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66476d2f-ef08-4051-97a8-c2edb46b7004-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "66476d2f-ef08-4051-97a8-c2edb46b7004" (UID: "66476d2f-ef08-4051-97a8-c2edb46b7004"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.523237 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mckmz\" (UniqueName: \"kubernetes.io/projected/c79ca838-03cc-4885-969d-5aad41173112-kube-api-access-mckmz\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.523273 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66476d2f-ef08-4051-97a8-c2edb46b7004-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.523284 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5vr6\" (UniqueName: \"kubernetes.io/projected/66476d2f-ef08-4051-97a8-c2edb46b7004-kube-api-access-f5vr6\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.523296 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c79ca838-03cc-4885-969d-5aad41173112-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.542342 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c79ca838-03cc-4885-969d-5aad41173112-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c79ca838-03cc-4885-969d-5aad41173112" (UID: "c79ca838-03cc-4885-969d-5aad41173112"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.623888 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c79ca838-03cc-4885-969d-5aad41173112-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.759174 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s5lrd" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.759501 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s5lrd" event={"ID":"5613a050-2fc6-4554-bebe-a8afa71c3815","Type":"ContainerDied","Data":"cbd7e75d20e256e4f099405468b97eec039052c798b34b5c78d34219ddaab285"} Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.759652 5039 scope.go:117] "RemoveContainer" containerID="e73e09cc2f1843b84342b3f32649f363cde33cd5ff49fddd8214ccdf09009a1b" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.761928 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ccjvb" event={"ID":"66476d2f-ef08-4051-97a8-c2edb46b7004","Type":"ContainerDied","Data":"6942da3d4b38decfd5526ee8da0e46fd670cef61a06d29db347b6ebcc1cc2bcd"} Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.762132 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ccjvb" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.767787 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wksws" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.767802 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wksws" event={"ID":"f64e1921-5488-46f8-bf3a-af141cd0c277","Type":"ContainerDied","Data":"75a8306c8bded401082c533b20ec90dbf13e7d641b9e64c4b70d8bcf9fbfedc1"} Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.772659 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gx2hg" event={"ID":"c79ca838-03cc-4885-969d-5aad41173112","Type":"ContainerDied","Data":"3097672ce88e5fa29b1caf55655914e66f0a17399e7f2f41db99c8032223a7a3"} Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.772711 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gx2hg" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.773512 5039 scope.go:117] "RemoveContainer" containerID="31a8df99c4e4455e61207edb146116c8775304223ec7f5f37937393f62718fa5" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.774403 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gp9qj" event={"ID":"501d1ad0-71ea-4bef-8c89-8a68f523e6ec","Type":"ContainerDied","Data":"0ea6819fb024f8850823104053709018d552f675cdc6fae43eae6c1c67a603b8"} Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.774433 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gp9qj" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.775947 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-jfw2h" event={"ID":"76c852b6-fbf0-493f-b157-06882e5f306f","Type":"ContainerStarted","Data":"1d6345a753a9879a4e8b1fbf1384a3803de3dfe7ac7eb1e799980d56859b1a4c"} Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.775979 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-jfw2h" event={"ID":"76c852b6-fbf0-493f-b157-06882e5f306f","Type":"ContainerStarted","Data":"dcb3438fc395ed8c60a8960720a2707b880653d8ae72fceccb9ecfd80acfa28b"} Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.776540 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-jfw2h" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.782991 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-jfw2h" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.790348 5039 scope.go:117] "RemoveContainer" containerID="8f35b8be69d6447e1162cf03b95a0a01066a7670bd9c95b668d6013b3a2a52cb" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.819936 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-jfw2h" podStartSLOduration=1.819917035 podStartE2EDuration="1.819917035s" podCreationTimestamp="2026-01-30 13:10:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:10:53.805604979 +0000 UTC m=+418.466286206" watchObservedRunningTime="2026-01-30 13:10:53.819917035 +0000 UTC m=+418.480598262" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.822798 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s5lrd"] Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.823158 5039 scope.go:117] "RemoveContainer" containerID="5ce6a578f8f1cdbcba7daff7b0d7d01a08062ea9ddeead9f73f5f06efc5ddbfe" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.828947 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-s5lrd"] Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.833787 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ccjvb"] Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.837996 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ccjvb"] Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.852148 5039 scope.go:117] "RemoveContainer" containerID="30847fe769bc8a13cc5cb68453925292f21a34365473385ee3c77773bf1c0afc" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.858820 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gp9qj"] Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.867869 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gp9qj"] Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.880812 5039 scope.go:117] "RemoveContainer" containerID="2e730d555d1abec3010a0b5ae6773493811345a6557fb62f81967e838646806d" Jan 30 13:10:53 crc 
kubenswrapper[5039]: I0130 13:10:53.884953 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wksws"] Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.889190 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wksws"] Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.894058 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gx2hg"] Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.897298 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gx2hg"] Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.904891 5039 scope.go:117] "RemoveContainer" containerID="39abc4a636510ae2734a282ba54cf242c90facdaa073b423320aaedcef8f5771" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.928081 5039 scope.go:117] "RemoveContainer" containerID="c86093ea909430c6d46a9c228d560b1685472081f9105500ca31bdfd00b072b7" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.945313 5039 scope.go:117] "RemoveContainer" containerID="00ac131a1a3467a5c551dafc671bb8dfbb993552f3d698af8e919774691425cc" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.963804 5039 scope.go:117] "RemoveContainer" containerID="f15f3bb95694a0780aff11c21de0b08521ee9ef476a832532057da09f9c8ec4b" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.977260 5039 scope.go:117] "RemoveContainer" containerID="447829a32e7581409f05ccc631f15a7a47837398e3a864e4a35279f1cda3e232" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.006271 5039 scope.go:117] "RemoveContainer" containerID="1ffdf1e37bf86690691aed60fdd25d24313eff63f2375efb66dc5939b4af438d" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.020517 5039 scope.go:117] "RemoveContainer" containerID="f9dafde4e921fdba2409668a3afa536a950b7ce53b96f55d6569f191b9b697ed" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.101168 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="501d1ad0-71ea-4bef-8c89-8a68f523e6ec" path="/var/lib/kubelet/pods/501d1ad0-71ea-4bef-8c89-8a68f523e6ec/volumes" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.101757 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5613a050-2fc6-4554-bebe-a8afa71c3815" path="/var/lib/kubelet/pods/5613a050-2fc6-4554-bebe-a8afa71c3815/volumes" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.102473 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66476d2f-ef08-4051-97a8-c2edb46b7004" path="/var/lib/kubelet/pods/66476d2f-ef08-4051-97a8-c2edb46b7004/volumes" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.103644 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c79ca838-03cc-4885-969d-5aad41173112" path="/var/lib/kubelet/pods/c79ca838-03cc-4885-969d-5aad41173112/volumes" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.104373 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f64e1921-5488-46f8-bf3a-af141cd0c277" path="/var/lib/kubelet/pods/f64e1921-5488-46f8-bf3a-af141cd0c277/volumes" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.413132 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-s4gcp"] Jan 30 13:10:54 crc kubenswrapper[5039]: E0130 13:10:54.414374 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c79ca838-03cc-4885-969d-5aad41173112" containerName="registry-server" Jan 30 
13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.414417 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="c79ca838-03cc-4885-969d-5aad41173112" containerName="registry-server" Jan 30 13:10:54 crc kubenswrapper[5039]: E0130 13:10:54.414433 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66476d2f-ef08-4051-97a8-c2edb46b7004" containerName="extract-content" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.414445 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="66476d2f-ef08-4051-97a8-c2edb46b7004" containerName="extract-content" Jan 30 13:10:54 crc kubenswrapper[5039]: E0130 13:10:54.414457 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f64e1921-5488-46f8-bf3a-af141cd0c277" containerName="registry-server" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.414468 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="f64e1921-5488-46f8-bf3a-af141cd0c277" containerName="registry-server" Jan 30 13:10:54 crc kubenswrapper[5039]: E0130 13:10:54.414480 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5613a050-2fc6-4554-bebe-a8afa71c3815" containerName="extract-content" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.414490 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="5613a050-2fc6-4554-bebe-a8afa71c3815" containerName="extract-content" Jan 30 13:10:54 crc kubenswrapper[5039]: E0130 13:10:54.414501 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5613a050-2fc6-4554-bebe-a8afa71c3815" containerName="registry-server" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.414510 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="5613a050-2fc6-4554-bebe-a8afa71c3815" containerName="registry-server" Jan 30 13:10:54 crc kubenswrapper[5039]: E0130 13:10:54.414521 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66476d2f-ef08-4051-97a8-c2edb46b7004" containerName="extract-utilities" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.414531 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="66476d2f-ef08-4051-97a8-c2edb46b7004" containerName="extract-utilities" Jan 30 13:10:54 crc kubenswrapper[5039]: E0130 13:10:54.414539 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f64e1921-5488-46f8-bf3a-af141cd0c277" containerName="extract-content" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.414547 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="f64e1921-5488-46f8-bf3a-af141cd0c277" containerName="extract-content" Jan 30 13:10:54 crc kubenswrapper[5039]: E0130 13:10:54.414557 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f64e1921-5488-46f8-bf3a-af141cd0c277" containerName="extract-utilities" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.414567 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="f64e1921-5488-46f8-bf3a-af141cd0c277" containerName="extract-utilities" Jan 30 13:10:54 crc kubenswrapper[5039]: E0130 13:10:54.414576 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="501d1ad0-71ea-4bef-8c89-8a68f523e6ec" containerName="marketplace-operator" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.414584 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="501d1ad0-71ea-4bef-8c89-8a68f523e6ec" containerName="marketplace-operator" Jan 30 13:10:54 crc kubenswrapper[5039]: E0130 13:10:54.414593 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5613a050-2fc6-4554-bebe-a8afa71c3815" 
containerName="extract-utilities" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.414611 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="5613a050-2fc6-4554-bebe-a8afa71c3815" containerName="extract-utilities" Jan 30 13:10:54 crc kubenswrapper[5039]: E0130 13:10:54.414624 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66476d2f-ef08-4051-97a8-c2edb46b7004" containerName="registry-server" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.414631 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="66476d2f-ef08-4051-97a8-c2edb46b7004" containerName="registry-server" Jan 30 13:10:54 crc kubenswrapper[5039]: E0130 13:10:54.414647 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="501d1ad0-71ea-4bef-8c89-8a68f523e6ec" containerName="marketplace-operator" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.414655 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="501d1ad0-71ea-4bef-8c89-8a68f523e6ec" containerName="marketplace-operator" Jan 30 13:10:54 crc kubenswrapper[5039]: E0130 13:10:54.414667 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c79ca838-03cc-4885-969d-5aad41173112" containerName="extract-utilities" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.414676 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="c79ca838-03cc-4885-969d-5aad41173112" containerName="extract-utilities" Jan 30 13:10:54 crc kubenswrapper[5039]: E0130 13:10:54.414685 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c79ca838-03cc-4885-969d-5aad41173112" containerName="extract-content" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.414693 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="c79ca838-03cc-4885-969d-5aad41173112" containerName="extract-content" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.414808 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="501d1ad0-71ea-4bef-8c89-8a68f523e6ec" containerName="marketplace-operator" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.414821 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="5613a050-2fc6-4554-bebe-a8afa71c3815" containerName="registry-server" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.414831 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="f64e1921-5488-46f8-bf3a-af141cd0c277" containerName="registry-server" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.414840 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="66476d2f-ef08-4051-97a8-c2edb46b7004" containerName="registry-server" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.414851 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="501d1ad0-71ea-4bef-8c89-8a68f523e6ec" containerName="marketplace-operator" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.414863 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="c79ca838-03cc-4885-969d-5aad41173112" containerName="registry-server" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.415788 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s4gcp" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.420919 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.422764 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s4gcp"] Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.533666 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50a6fe8f-91d2-44d3-83c2-57f292eeaa38-catalog-content\") pod \"redhat-marketplace-s4gcp\" (UID: \"50a6fe8f-91d2-44d3-83c2-57f292eeaa38\") " pod="openshift-marketplace/redhat-marketplace-s4gcp" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.533843 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh26p\" (UniqueName: \"kubernetes.io/projected/50a6fe8f-91d2-44d3-83c2-57f292eeaa38-kube-api-access-sh26p\") pod \"redhat-marketplace-s4gcp\" (UID: \"50a6fe8f-91d2-44d3-83c2-57f292eeaa38\") " pod="openshift-marketplace/redhat-marketplace-s4gcp" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.534099 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50a6fe8f-91d2-44d3-83c2-57f292eeaa38-utilities\") pod \"redhat-marketplace-s4gcp\" (UID: \"50a6fe8f-91d2-44d3-83c2-57f292eeaa38\") " pod="openshift-marketplace/redhat-marketplace-s4gcp" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.613505 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-n4bnc"] Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.614999 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-n4bnc" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.617024 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.635433 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sh26p\" (UniqueName: \"kubernetes.io/projected/50a6fe8f-91d2-44d3-83c2-57f292eeaa38-kube-api-access-sh26p\") pod \"redhat-marketplace-s4gcp\" (UID: \"50a6fe8f-91d2-44d3-83c2-57f292eeaa38\") " pod="openshift-marketplace/redhat-marketplace-s4gcp" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.635501 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50a6fe8f-91d2-44d3-83c2-57f292eeaa38-utilities\") pod \"redhat-marketplace-s4gcp\" (UID: \"50a6fe8f-91d2-44d3-83c2-57f292eeaa38\") " pod="openshift-marketplace/redhat-marketplace-s4gcp" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.635915 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50a6fe8f-91d2-44d3-83c2-57f292eeaa38-utilities\") pod \"redhat-marketplace-s4gcp\" (UID: \"50a6fe8f-91d2-44d3-83c2-57f292eeaa38\") " pod="openshift-marketplace/redhat-marketplace-s4gcp" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.636259 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50a6fe8f-91d2-44d3-83c2-57f292eeaa38-catalog-content\") pod \"redhat-marketplace-s4gcp\" (UID: \"50a6fe8f-91d2-44d3-83c2-57f292eeaa38\") " pod="openshift-marketplace/redhat-marketplace-s4gcp" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.636348 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50a6fe8f-91d2-44d3-83c2-57f292eeaa38-catalog-content\") pod \"redhat-marketplace-s4gcp\" (UID: \"50a6fe8f-91d2-44d3-83c2-57f292eeaa38\") " pod="openshift-marketplace/redhat-marketplace-s4gcp" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.636456 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n4bnc"] Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.652109 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sh26p\" (UniqueName: \"kubernetes.io/projected/50a6fe8f-91d2-44d3-83c2-57f292eeaa38-kube-api-access-sh26p\") pod \"redhat-marketplace-s4gcp\" (UID: \"50a6fe8f-91d2-44d3-83c2-57f292eeaa38\") " pod="openshift-marketplace/redhat-marketplace-s4gcp" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.737590 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abd8b28f-4df7-479c-9c89-80afd3be6ed3-utilities\") pod \"certified-operators-n4bnc\" (UID: \"abd8b28f-4df7-479c-9c89-80afd3be6ed3\") " pod="openshift-marketplace/certified-operators-n4bnc" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.737695 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abd8b28f-4df7-479c-9c89-80afd3be6ed3-catalog-content\") pod \"certified-operators-n4bnc\" (UID: \"abd8b28f-4df7-479c-9c89-80afd3be6ed3\") 
" pod="openshift-marketplace/certified-operators-n4bnc" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.737862 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkqh2\" (UniqueName: \"kubernetes.io/projected/abd8b28f-4df7-479c-9c89-80afd3be6ed3-kube-api-access-zkqh2\") pod \"certified-operators-n4bnc\" (UID: \"abd8b28f-4df7-479c-9c89-80afd3be6ed3\") " pod="openshift-marketplace/certified-operators-n4bnc" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.758850 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s4gcp" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.838864 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkqh2\" (UniqueName: \"kubernetes.io/projected/abd8b28f-4df7-479c-9c89-80afd3be6ed3-kube-api-access-zkqh2\") pod \"certified-operators-n4bnc\" (UID: \"abd8b28f-4df7-479c-9c89-80afd3be6ed3\") " pod="openshift-marketplace/certified-operators-n4bnc" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.839304 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abd8b28f-4df7-479c-9c89-80afd3be6ed3-utilities\") pod \"certified-operators-n4bnc\" (UID: \"abd8b28f-4df7-479c-9c89-80afd3be6ed3\") " pod="openshift-marketplace/certified-operators-n4bnc" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.839365 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abd8b28f-4df7-479c-9c89-80afd3be6ed3-catalog-content\") pod \"certified-operators-n4bnc\" (UID: \"abd8b28f-4df7-479c-9c89-80afd3be6ed3\") " pod="openshift-marketplace/certified-operators-n4bnc" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.839835 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abd8b28f-4df7-479c-9c89-80afd3be6ed3-catalog-content\") pod \"certified-operators-n4bnc\" (UID: \"abd8b28f-4df7-479c-9c89-80afd3be6ed3\") " pod="openshift-marketplace/certified-operators-n4bnc" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.839899 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abd8b28f-4df7-479c-9c89-80afd3be6ed3-utilities\") pod \"certified-operators-n4bnc\" (UID: \"abd8b28f-4df7-479c-9c89-80afd3be6ed3\") " pod="openshift-marketplace/certified-operators-n4bnc" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.865389 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkqh2\" (UniqueName: \"kubernetes.io/projected/abd8b28f-4df7-479c-9c89-80afd3be6ed3-kube-api-access-zkqh2\") pod \"certified-operators-n4bnc\" (UID: \"abd8b28f-4df7-479c-9c89-80afd3be6ed3\") " pod="openshift-marketplace/certified-operators-n4bnc" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.939608 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-n4bnc" Jan 30 13:10:55 crc kubenswrapper[5039]: I0130 13:10:55.158717 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s4gcp"] Jan 30 13:10:55 crc kubenswrapper[5039]: W0130 13:10:55.165282 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50a6fe8f_91d2_44d3_83c2_57f292eeaa38.slice/crio-21aa1fffcf60325b6481854c08e98b9600c6c06a7acbe98f478a510a631ac31f WatchSource:0}: Error finding container 21aa1fffcf60325b6481854c08e98b9600c6c06a7acbe98f478a510a631ac31f: Status 404 returned error can't find the container with id 21aa1fffcf60325b6481854c08e98b9600c6c06a7acbe98f478a510a631ac31f Jan 30 13:10:55 crc kubenswrapper[5039]: I0130 13:10:55.304620 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n4bnc"] Jan 30 13:10:55 crc kubenswrapper[5039]: W0130 13:10:55.319428 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabd8b28f_4df7_479c_9c89_80afd3be6ed3.slice/crio-b6501706ed037ef15e2df42e3419e66548903db6e335b804729937b91185e4f1 WatchSource:0}: Error finding container b6501706ed037ef15e2df42e3419e66548903db6e335b804729937b91185e4f1: Status 404 returned error can't find the container with id b6501706ed037ef15e2df42e3419e66548903db6e335b804729937b91185e4f1 Jan 30 13:10:55 crc kubenswrapper[5039]: I0130 13:10:55.805413 5039 generic.go:334] "Generic (PLEG): container finished" podID="abd8b28f-4df7-479c-9c89-80afd3be6ed3" containerID="94d236491d39fe5556c262c176890e2a1ce8a8c84c89f0abe73161e4d23fc761" exitCode=0 Jan 30 13:10:55 crc kubenswrapper[5039]: I0130 13:10:55.805515 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n4bnc" event={"ID":"abd8b28f-4df7-479c-9c89-80afd3be6ed3","Type":"ContainerDied","Data":"94d236491d39fe5556c262c176890e2a1ce8a8c84c89f0abe73161e4d23fc761"} Jan 30 13:10:55 crc kubenswrapper[5039]: I0130 13:10:55.805553 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n4bnc" event={"ID":"abd8b28f-4df7-479c-9c89-80afd3be6ed3","Type":"ContainerStarted","Data":"b6501706ed037ef15e2df42e3419e66548903db6e335b804729937b91185e4f1"} Jan 30 13:10:55 crc kubenswrapper[5039]: I0130 13:10:55.810944 5039 generic.go:334] "Generic (PLEG): container finished" podID="50a6fe8f-91d2-44d3-83c2-57f292eeaa38" containerID="63dbee9b675585ea9681bbab25d4bafd0bfcdbe9dcd7f4793e5de2cbf905b1e0" exitCode=0 Jan 30 13:10:55 crc kubenswrapper[5039]: I0130 13:10:55.811067 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s4gcp" event={"ID":"50a6fe8f-91d2-44d3-83c2-57f292eeaa38","Type":"ContainerDied","Data":"63dbee9b675585ea9681bbab25d4bafd0bfcdbe9dcd7f4793e5de2cbf905b1e0"} Jan 30 13:10:55 crc kubenswrapper[5039]: I0130 13:10:55.811115 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s4gcp" event={"ID":"50a6fe8f-91d2-44d3-83c2-57f292eeaa38","Type":"ContainerStarted","Data":"21aa1fffcf60325b6481854c08e98b9600c6c06a7acbe98f478a510a631ac31f"} Jan 30 13:10:56 crc kubenswrapper[5039]: I0130 13:10:56.815109 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-szn5d"] Jan 30 13:10:56 crc kubenswrapper[5039]: I0130 13:10:56.820273 5039 util.go:30] "No 
Jan 30 13:10:56 crc kubenswrapper[5039]: I0130 13:10:56.820273 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-szn5d"
Jan 30 13:10:56 crc kubenswrapper[5039]: I0130 13:10:56.822405 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 30 13:10:56 crc kubenswrapper[5039]: I0130 13:10:56.824280 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-szn5d"]
Jan 30 13:10:56 crc kubenswrapper[5039]: I0130 13:10:56.967243 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw472\" (UniqueName: \"kubernetes.io/projected/9bdd3549-b206-404b-80e0-dad7eccbea2a-kube-api-access-kw472\") pod \"redhat-operators-szn5d\" (UID: \"9bdd3549-b206-404b-80e0-dad7eccbea2a\") " pod="openshift-marketplace/redhat-operators-szn5d"
Jan 30 13:10:56 crc kubenswrapper[5039]: I0130 13:10:56.967350 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bdd3549-b206-404b-80e0-dad7eccbea2a-catalog-content\") pod \"redhat-operators-szn5d\" (UID: \"9bdd3549-b206-404b-80e0-dad7eccbea2a\") " pod="openshift-marketplace/redhat-operators-szn5d"
Jan 30 13:10:56 crc kubenswrapper[5039]: I0130 13:10:56.967417 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bdd3549-b206-404b-80e0-dad7eccbea2a-utilities\") pod \"redhat-operators-szn5d\" (UID: \"9bdd3549-b206-404b-80e0-dad7eccbea2a\") " pod="openshift-marketplace/redhat-operators-szn5d"
Jan 30 13:10:57 crc kubenswrapper[5039]: I0130 13:10:57.015722 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dskxq"]
Jan 30 13:10:57 crc kubenswrapper[5039]: I0130 13:10:57.016741 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dskxq"
Jan 30 13:10:57 crc kubenswrapper[5039]: I0130 13:10:57.019452 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 30 13:10:57 crc kubenswrapper[5039]: I0130 13:10:57.031834 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dskxq"]
Jan 30 13:10:57 crc kubenswrapper[5039]: I0130 13:10:57.069138 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bdd3549-b206-404b-80e0-dad7eccbea2a-catalog-content\") pod \"redhat-operators-szn5d\" (UID: \"9bdd3549-b206-404b-80e0-dad7eccbea2a\") " pod="openshift-marketplace/redhat-operators-szn5d"
Jan 30 13:10:57 crc kubenswrapper[5039]: I0130 13:10:57.069184 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bdd3549-b206-404b-80e0-dad7eccbea2a-utilities\") pod \"redhat-operators-szn5d\" (UID: \"9bdd3549-b206-404b-80e0-dad7eccbea2a\") " pod="openshift-marketplace/redhat-operators-szn5d"
Jan 30 13:10:57 crc kubenswrapper[5039]: I0130 13:10:57.069271 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kw472\" (UniqueName: \"kubernetes.io/projected/9bdd3549-b206-404b-80e0-dad7eccbea2a-kube-api-access-kw472\") pod \"redhat-operators-szn5d\" (UID: \"9bdd3549-b206-404b-80e0-dad7eccbea2a\") " pod="openshift-marketplace/redhat-operators-szn5d"
Jan 30 13:10:57 crc kubenswrapper[5039]: I0130 13:10:57.069703 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bdd3549-b206-404b-80e0-dad7eccbea2a-catalog-content\") pod \"redhat-operators-szn5d\" (UID: \"9bdd3549-b206-404b-80e0-dad7eccbea2a\") " pod="openshift-marketplace/redhat-operators-szn5d"
Jan 30 13:10:57 crc kubenswrapper[5039]: I0130 13:10:57.070373 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bdd3549-b206-404b-80e0-dad7eccbea2a-utilities\") pod \"redhat-operators-szn5d\" (UID: \"9bdd3549-b206-404b-80e0-dad7eccbea2a\") " pod="openshift-marketplace/redhat-operators-szn5d"
Jan 30 13:10:57 crc kubenswrapper[5039]: I0130 13:10:57.089376 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kw472\" (UniqueName: \"kubernetes.io/projected/9bdd3549-b206-404b-80e0-dad7eccbea2a-kube-api-access-kw472\") pod \"redhat-operators-szn5d\" (UID: \"9bdd3549-b206-404b-80e0-dad7eccbea2a\") " pod="openshift-marketplace/redhat-operators-szn5d"
Jan 30 13:10:57 crc kubenswrapper[5039]: I0130 13:10:57.142834 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-szn5d"
Jan 30 13:10:57 crc kubenswrapper[5039]: I0130 13:10:57.170568 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e68432d-e4f4-4e67-94e4-7e5f89144655-catalog-content\") pod \"community-operators-dskxq\" (UID: \"9e68432d-e4f4-4e67-94e4-7e5f89144655\") " pod="openshift-marketplace/community-operators-dskxq"
Jan 30 13:10:57 crc kubenswrapper[5039]: I0130 13:10:57.170899 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wr778\" (UniqueName: \"kubernetes.io/projected/9e68432d-e4f4-4e67-94e4-7e5f89144655-kube-api-access-wr778\") pod \"community-operators-dskxq\" (UID: \"9e68432d-e4f4-4e67-94e4-7e5f89144655\") " pod="openshift-marketplace/community-operators-dskxq"
Jan 30 13:10:57 crc kubenswrapper[5039]: I0130 13:10:57.170925 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e68432d-e4f4-4e67-94e4-7e5f89144655-utilities\") pod \"community-operators-dskxq\" (UID: \"9e68432d-e4f4-4e67-94e4-7e5f89144655\") " pod="openshift-marketplace/community-operators-dskxq"
Jan 30 13:10:57 crc kubenswrapper[5039]: I0130 13:10:57.271905 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wr778\" (UniqueName: \"kubernetes.io/projected/9e68432d-e4f4-4e67-94e4-7e5f89144655-kube-api-access-wr778\") pod \"community-operators-dskxq\" (UID: \"9e68432d-e4f4-4e67-94e4-7e5f89144655\") " pod="openshift-marketplace/community-operators-dskxq"
Jan 30 13:10:57 crc kubenswrapper[5039]: I0130 13:10:57.271950 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e68432d-e4f4-4e67-94e4-7e5f89144655-utilities\") pod \"community-operators-dskxq\" (UID: \"9e68432d-e4f4-4e67-94e4-7e5f89144655\") " pod="openshift-marketplace/community-operators-dskxq"
Jan 30 13:10:57 crc kubenswrapper[5039]: I0130 13:10:57.272034 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e68432d-e4f4-4e67-94e4-7e5f89144655-catalog-content\") pod \"community-operators-dskxq\" (UID: \"9e68432d-e4f4-4e67-94e4-7e5f89144655\") " pod="openshift-marketplace/community-operators-dskxq"
Jan 30 13:10:57 crc kubenswrapper[5039]: I0130 13:10:57.272660 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e68432d-e4f4-4e67-94e4-7e5f89144655-catalog-content\") pod \"community-operators-dskxq\" (UID: \"9e68432d-e4f4-4e67-94e4-7e5f89144655\") " pod="openshift-marketplace/community-operators-dskxq"
Jan 30 13:10:57 crc kubenswrapper[5039]: I0130 13:10:57.273034 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e68432d-e4f4-4e67-94e4-7e5f89144655-utilities\") pod \"community-operators-dskxq\" (UID: \"9e68432d-e4f4-4e67-94e4-7e5f89144655\") " pod="openshift-marketplace/community-operators-dskxq"
Jan 30 13:10:57 crc kubenswrapper[5039]: I0130 13:10:57.294622 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wr778\" (UniqueName: \"kubernetes.io/projected/9e68432d-e4f4-4e67-94e4-7e5f89144655-kube-api-access-wr778\") pod \"community-operators-dskxq\" (UID: \"9e68432d-e4f4-4e67-94e4-7e5f89144655\") " pod="openshift-marketplace/community-operators-dskxq"
Jan 30 13:10:57 crc kubenswrapper[5039]: I0130 13:10:57.381593 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dskxq"
Jan 30 13:10:57 crc kubenswrapper[5039]: I0130 13:10:57.521259 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-szn5d"]
Jan 30 13:10:57 crc kubenswrapper[5039]: W0130 13:10:57.526397 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9bdd3549_b206_404b_80e0_dad7eccbea2a.slice/crio-564cca062a8ebfa4e33c6aa6cc25460a1c88f459af567e41a2860920a7a61a08 WatchSource:0}: Error finding container 564cca062a8ebfa4e33c6aa6cc25460a1c88f459af567e41a2860920a7a61a08: Status 404 returned error can't find the container with id 564cca062a8ebfa4e33c6aa6cc25460a1c88f459af567e41a2860920a7a61a08
Jan 30 13:10:57 crc kubenswrapper[5039]: I0130 13:10:57.573937 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dskxq"]
Jan 30 13:10:57 crc kubenswrapper[5039]: W0130 13:10:57.582697 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e68432d_e4f4_4e67_94e4_7e5f89144655.slice/crio-6e19d8ece4f74a337b24646f2bdc2d2f70541d3ca8715b4b093ec106f2b43cce WatchSource:0}: Error finding container 6e19d8ece4f74a337b24646f2bdc2d2f70541d3ca8715b4b093ec106f2b43cce: Status 404 returned error can't find the container with id 6e19d8ece4f74a337b24646f2bdc2d2f70541d3ca8715b4b093ec106f2b43cce
Jan 30 13:10:57 crc kubenswrapper[5039]: I0130 13:10:57.823433 5039 generic.go:334] "Generic (PLEG): container finished" podID="abd8b28f-4df7-479c-9c89-80afd3be6ed3" containerID="0ff7ab831bed252b83b5812f3bafb91780bf19176029d24b90bda1c382ae72b2" exitCode=0
Jan 30 13:10:57 crc kubenswrapper[5039]: I0130 13:10:57.823508 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n4bnc" event={"ID":"abd8b28f-4df7-479c-9c89-80afd3be6ed3","Type":"ContainerDied","Data":"0ff7ab831bed252b83b5812f3bafb91780bf19176029d24b90bda1c382ae72b2"}
Jan 30 13:10:57 crc kubenswrapper[5039]: I0130 13:10:57.826186 5039 generic.go:334] "Generic (PLEG): container finished" podID="50a6fe8f-91d2-44d3-83c2-57f292eeaa38" containerID="351fb8b9d71c4d95a99a921faf536797fc4a004d87df63499d4650ea7cc4e30f" exitCode=0
Jan 30 13:10:57 crc kubenswrapper[5039]: I0130 13:10:57.826254 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s4gcp" event={"ID":"50a6fe8f-91d2-44d3-83c2-57f292eeaa38","Type":"ContainerDied","Data":"351fb8b9d71c4d95a99a921faf536797fc4a004d87df63499d4650ea7cc4e30f"}
Jan 30 13:10:57 crc kubenswrapper[5039]: I0130 13:10:57.829185 5039 generic.go:334] "Generic (PLEG): container finished" podID="9bdd3549-b206-404b-80e0-dad7eccbea2a" containerID="a11769a04e55afa0f9125bd1316954a82c7fabbfad352b8f66fb96257274534a" exitCode=0
Jan 30 13:10:57 crc kubenswrapper[5039]: I0130 13:10:57.829263 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-szn5d" event={"ID":"9bdd3549-b206-404b-80e0-dad7eccbea2a","Type":"ContainerDied","Data":"a11769a04e55afa0f9125bd1316954a82c7fabbfad352b8f66fb96257274534a"}
Jan 30 13:10:57 crc kubenswrapper[5039]: I0130 13:10:57.829299 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-szn5d" event={"ID":"9bdd3549-b206-404b-80e0-dad7eccbea2a","Type":"ContainerStarted","Data":"564cca062a8ebfa4e33c6aa6cc25460a1c88f459af567e41a2860920a7a61a08"}
Jan 30 13:10:57 crc kubenswrapper[5039]: I0130 13:10:57.843419 5039 generic.go:334] "Generic (PLEG): container finished" podID="9e68432d-e4f4-4e67-94e4-7e5f89144655" containerID="bdca1b4beff14f3d10796b97fd356aa7d23a5832987c799ce9a2f384eec54705" exitCode=0
Jan 30 13:10:57 crc kubenswrapper[5039]: I0130 13:10:57.843469 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dskxq" event={"ID":"9e68432d-e4f4-4e67-94e4-7e5f89144655","Type":"ContainerDied","Data":"bdca1b4beff14f3d10796b97fd356aa7d23a5832987c799ce9a2f384eec54705"}
Jan 30 13:10:57 crc kubenswrapper[5039]: I0130 13:10:57.843497 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dskxq" event={"ID":"9e68432d-e4f4-4e67-94e4-7e5f89144655","Type":"ContainerStarted","Data":"6e19d8ece4f74a337b24646f2bdc2d2f70541d3ca8715b4b093ec106f2b43cce"}
Jan 30 13:10:58 crc kubenswrapper[5039]: I0130 13:10:58.849889 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s4gcp" event={"ID":"50a6fe8f-91d2-44d3-83c2-57f292eeaa38","Type":"ContainerStarted","Data":"335b2de7300ecde097cd2eb7ab8b69cfbf451dbe03364934ea816e7125fd3d61"}
Jan 30 13:10:58 crc kubenswrapper[5039]: I0130 13:10:58.852333 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n4bnc" event={"ID":"abd8b28f-4df7-479c-9c89-80afd3be6ed3","Type":"ContainerStarted","Data":"999ce39179da05bd620acc2940452f76eba4e9fc0141c85bb9791a7f2b6514b2"}
Jan 30 13:10:58 crc kubenswrapper[5039]: I0130 13:10:58.868308 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-s4gcp" podStartSLOduration=2.309264662 podStartE2EDuration="4.868285968s" podCreationTimestamp="2026-01-30 13:10:54 +0000 UTC" firstStartedPulling="2026-01-30 13:10:55.812303502 +0000 UTC m=+420.472984739" lastFinishedPulling="2026-01-30 13:10:58.371324818 +0000 UTC m=+423.032006045" observedRunningTime="2026-01-30 13:10:58.86508682 +0000 UTC m=+423.525768057" watchObservedRunningTime="2026-01-30 13:10:58.868285968 +0000 UTC m=+423.528967205"
Jan 30 13:10:58 crc kubenswrapper[5039]: I0130 13:10:58.886416 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-n4bnc" podStartSLOduration=2.077090668 podStartE2EDuration="4.88640027s" podCreationTimestamp="2026-01-30 13:10:54 +0000 UTC" firstStartedPulling="2026-01-30 13:10:55.807295523 +0000 UTC m=+420.467976750" lastFinishedPulling="2026-01-30 13:10:58.616605125 +0000 UTC m=+423.277286352" observedRunningTime="2026-01-30 13:10:58.884447436 +0000 UTC m=+423.545128673" watchObservedRunningTime="2026-01-30 13:10:58.88640027 +0000 UTC m=+423.547081497"
Jan 30 13:10:59 crc kubenswrapper[5039]: I0130 13:10:59.859402 5039 generic.go:334] "Generic (PLEG): container finished" podID="9e68432d-e4f4-4e67-94e4-7e5f89144655" containerID="e9790e7b4c1919f30a93dfe29660179f0a7f4adee76c47da766ae7e174e7bd43" exitCode=0
Jan 30 13:10:59 crc kubenswrapper[5039]: I0130 13:10:59.859608 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dskxq" event={"ID":"9e68432d-e4f4-4e67-94e4-7e5f89144655","Type":"ContainerDied","Data":"e9790e7b4c1919f30a93dfe29660179f0a7f4adee76c47da766ae7e174e7bd43"}
Jan 30 13:10:59 crc kubenswrapper[5039]: I0130 13:10:59.862943 5039 generic.go:334] "Generic (PLEG): container finished" podID="9bdd3549-b206-404b-80e0-dad7eccbea2a" containerID="c5d825c1ee040576344e66c66e7677404c1ad30ea5708753a405a8dc62d3da05" exitCode=0
Jan 30 13:10:59 crc kubenswrapper[5039]: I0130 13:10:59.864158 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-szn5d" event={"ID":"9bdd3549-b206-404b-80e0-dad7eccbea2a","Type":"ContainerDied","Data":"c5d825c1ee040576344e66c66e7677404c1ad30ea5708753a405a8dc62d3da05"}
Jan 30 13:11:00 crc kubenswrapper[5039]: I0130 13:11:00.869670 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-szn5d" event={"ID":"9bdd3549-b206-404b-80e0-dad7eccbea2a","Type":"ContainerStarted","Data":"2232902d3f9b84258d3a876622381e460b3a81bf6c4c9a3ed033b9457bdcf70c"}
Jan 30 13:11:00 crc kubenswrapper[5039]: I0130 13:11:00.886898 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-szn5d" podStartSLOduration=2.182969958 podStartE2EDuration="4.8868803s" podCreationTimestamp="2026-01-30 13:10:56 +0000 UTC" firstStartedPulling="2026-01-30 13:10:57.838048722 +0000 UTC m=+422.498729959" lastFinishedPulling="2026-01-30 13:11:00.541959034 +0000 UTC m=+425.202640301" observedRunningTime="2026-01-30 13:11:00.883660901 +0000 UTC m=+425.544342148" watchObservedRunningTime="2026-01-30 13:11:00.8868803 +0000 UTC m=+425.547561527"
Jan 30 13:11:01 crc kubenswrapper[5039]: I0130 13:11:01.876379 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dskxq" event={"ID":"9e68432d-e4f4-4e67-94e4-7e5f89144655","Type":"ContainerStarted","Data":"dbfa596825add056fa27e6df15b23fa61d818477db539290a38d75ad0aed2cc9"}
Jan 30 13:11:01 crc kubenswrapper[5039]: I0130 13:11:01.901303 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dskxq" podStartSLOduration=2.9442974939999997 podStartE2EDuration="5.901278717s" podCreationTimestamp="2026-01-30 13:10:56 +0000 UTC" firstStartedPulling="2026-01-30 13:10:57.845803527 +0000 UTC m=+422.506484754" lastFinishedPulling="2026-01-30 13:11:00.80278475 +0000 UTC m=+425.463465977" observedRunningTime="2026-01-30 13:11:01.897349139 +0000 UTC m=+426.558030366" watchObservedRunningTime="2026-01-30 13:11:01.901278717 +0000 UTC m=+426.561959984"
Jan 30 13:11:04 crc kubenswrapper[5039]: I0130 13:11:04.759279 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-s4gcp"
Jan 30 13:11:04 crc kubenswrapper[5039]: I0130 13:11:04.759843 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-s4gcp"
Jan 30 13:11:04 crc kubenswrapper[5039]: I0130 13:11:04.813744 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-s4gcp"
Jan 30 13:11:04 crc kubenswrapper[5039]: I0130 13:11:04.940828 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-n4bnc"
pod="openshift-marketplace/certified-operators-n4bnc" Jan 30 13:11:05 crc kubenswrapper[5039]: I0130 13:11:05.124083 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-s4gcp" Jan 30 13:11:05 crc kubenswrapper[5039]: I0130 13:11:05.154252 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-n4bnc" Jan 30 13:11:05 crc kubenswrapper[5039]: I0130 13:11:05.952329 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-n4bnc" Jan 30 13:11:07 crc kubenswrapper[5039]: I0130 13:11:07.143792 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-szn5d" Jan 30 13:11:07 crc kubenswrapper[5039]: I0130 13:11:07.143842 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-szn5d" Jan 30 13:11:07 crc kubenswrapper[5039]: I0130 13:11:07.191697 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-szn5d" Jan 30 13:11:07 crc kubenswrapper[5039]: I0130 13:11:07.382397 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dskxq" Jan 30 13:11:07 crc kubenswrapper[5039]: I0130 13:11:07.382464 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dskxq" Jan 30 13:11:07 crc kubenswrapper[5039]: I0130 13:11:07.419490 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dskxq" Jan 30 13:11:07 crc kubenswrapper[5039]: I0130 13:11:07.742320 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:11:07 crc kubenswrapper[5039]: I0130 13:11:07.742582 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:11:07 crc kubenswrapper[5039]: I0130 13:11:07.742640 5039 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" Jan 30 13:11:07 crc kubenswrapper[5039]: I0130 13:11:07.743443 5039 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0547d064d7c4b7297a756320ff8227bd0d0a0f4e9eca68fc753c08aa07c16fca"} pod="openshift-machine-config-operator/machine-config-daemon-t2btn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 13:11:07 crc kubenswrapper[5039]: I0130 13:11:07.743540 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" containerID="cri-o://0547d064d7c4b7297a756320ff8227bd0d0a0f4e9eca68fc753c08aa07c16fca" gracePeriod=600 Jan 30 13:11:07 
crc kubenswrapper[5039]: I0130 13:11:07.908421 5039 generic.go:334] "Generic (PLEG): container finished" podID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerID="0547d064d7c4b7297a756320ff8227bd0d0a0f4e9eca68fc753c08aa07c16fca" exitCode=0 Jan 30 13:11:07 crc kubenswrapper[5039]: I0130 13:11:07.908529 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerDied","Data":"0547d064d7c4b7297a756320ff8227bd0d0a0f4e9eca68fc753c08aa07c16fca"} Jan 30 13:11:07 crc kubenswrapper[5039]: I0130 13:11:07.908589 5039 scope.go:117] "RemoveContainer" containerID="008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90" Jan 30 13:11:07 crc kubenswrapper[5039]: I0130 13:11:07.960658 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-szn5d" Jan 30 13:11:07 crc kubenswrapper[5039]: I0130 13:11:07.962583 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dskxq" Jan 30 13:11:08 crc kubenswrapper[5039]: I0130 13:11:08.915466 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerStarted","Data":"560662c6d7483c88aebafefdba92626eb1886b5341dc13222aa008d4b7d631c7"} Jan 30 13:13:37 crc kubenswrapper[5039]: I0130 13:13:37.742058 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:13:37 crc kubenswrapper[5039]: I0130 13:13:37.743798 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:13:51 crc kubenswrapper[5039]: I0130 13:13:51.824233 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-gd9h2"] Jan 30 13:13:51 crc kubenswrapper[5039]: I0130 13:13:51.826513 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-gd9h2" Jan 30 13:13:51 crc kubenswrapper[5039]: I0130 13:13:51.834768 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-gd9h2"] Jan 30 13:13:52 crc kubenswrapper[5039]: I0130 13:13:52.006365 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5c0d0319-7c0b-4418-98dc-41bfc1159e9f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-gd9h2\" (UID: \"5c0d0319-7c0b-4418-98dc-41bfc1159e9f\") " pod="openshift-image-registry/image-registry-66df7c8f76-gd9h2" Jan 30 13:13:52 crc kubenswrapper[5039]: I0130 13:13:52.006413 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5c0d0319-7c0b-4418-98dc-41bfc1159e9f-registry-tls\") pod \"image-registry-66df7c8f76-gd9h2\" (UID: \"5c0d0319-7c0b-4418-98dc-41bfc1159e9f\") " pod="openshift-image-registry/image-registry-66df7c8f76-gd9h2" Jan 30 13:13:52 crc kubenswrapper[5039]: I0130 13:13:52.006436 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5c0d0319-7c0b-4418-98dc-41bfc1159e9f-registry-certificates\") pod \"image-registry-66df7c8f76-gd9h2\" (UID: \"5c0d0319-7c0b-4418-98dc-41bfc1159e9f\") " pod="openshift-image-registry/image-registry-66df7c8f76-gd9h2" Jan 30 13:13:52 crc kubenswrapper[5039]: I0130 13:13:52.006462 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-gd9h2\" (UID: \"5c0d0319-7c0b-4418-98dc-41bfc1159e9f\") " pod="openshift-image-registry/image-registry-66df7c8f76-gd9h2" Jan 30 13:13:52 crc kubenswrapper[5039]: I0130 13:13:52.006484 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5c0d0319-7c0b-4418-98dc-41bfc1159e9f-bound-sa-token\") pod \"image-registry-66df7c8f76-gd9h2\" (UID: \"5c0d0319-7c0b-4418-98dc-41bfc1159e9f\") " pod="openshift-image-registry/image-registry-66df7c8f76-gd9h2" Jan 30 13:13:52 crc kubenswrapper[5039]: I0130 13:13:52.006506 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn54q\" (UniqueName: \"kubernetes.io/projected/5c0d0319-7c0b-4418-98dc-41bfc1159e9f-kube-api-access-fn54q\") pod \"image-registry-66df7c8f76-gd9h2\" (UID: \"5c0d0319-7c0b-4418-98dc-41bfc1159e9f\") " pod="openshift-image-registry/image-registry-66df7c8f76-gd9h2" Jan 30 13:13:52 crc kubenswrapper[5039]: I0130 13:13:52.006529 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5c0d0319-7c0b-4418-98dc-41bfc1159e9f-trusted-ca\") pod \"image-registry-66df7c8f76-gd9h2\" (UID: \"5c0d0319-7c0b-4418-98dc-41bfc1159e9f\") " pod="openshift-image-registry/image-registry-66df7c8f76-gd9h2" Jan 30 13:13:52 crc kubenswrapper[5039]: I0130 13:13:52.006552 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/5c0d0319-7c0b-4418-98dc-41bfc1159e9f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-gd9h2\" (UID: \"5c0d0319-7c0b-4418-98dc-41bfc1159e9f\") " pod="openshift-image-registry/image-registry-66df7c8f76-gd9h2" Jan 30 13:13:52 crc kubenswrapper[5039]: I0130 13:13:52.028501 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-gd9h2\" (UID: \"5c0d0319-7c0b-4418-98dc-41bfc1159e9f\") " pod="openshift-image-registry/image-registry-66df7c8f76-gd9h2" Jan 30 13:13:52 crc kubenswrapper[5039]: I0130 13:13:52.107620 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn54q\" (UniqueName: \"kubernetes.io/projected/5c0d0319-7c0b-4418-98dc-41bfc1159e9f-kube-api-access-fn54q\") pod \"image-registry-66df7c8f76-gd9h2\" (UID: \"5c0d0319-7c0b-4418-98dc-41bfc1159e9f\") " pod="openshift-image-registry/image-registry-66df7c8f76-gd9h2" Jan 30 13:13:52 crc kubenswrapper[5039]: I0130 13:13:52.108140 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5c0d0319-7c0b-4418-98dc-41bfc1159e9f-trusted-ca\") pod \"image-registry-66df7c8f76-gd9h2\" (UID: \"5c0d0319-7c0b-4418-98dc-41bfc1159e9f\") " pod="openshift-image-registry/image-registry-66df7c8f76-gd9h2" Jan 30 13:13:52 crc kubenswrapper[5039]: I0130 13:13:52.108187 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5c0d0319-7c0b-4418-98dc-41bfc1159e9f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-gd9h2\" (UID: \"5c0d0319-7c0b-4418-98dc-41bfc1159e9f\") " pod="openshift-image-registry/image-registry-66df7c8f76-gd9h2" Jan 30 13:13:52 crc kubenswrapper[5039]: I0130 13:13:52.108231 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5c0d0319-7c0b-4418-98dc-41bfc1159e9f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-gd9h2\" (UID: \"5c0d0319-7c0b-4418-98dc-41bfc1159e9f\") " pod="openshift-image-registry/image-registry-66df7c8f76-gd9h2" Jan 30 13:13:52 crc kubenswrapper[5039]: I0130 13:13:52.108261 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5c0d0319-7c0b-4418-98dc-41bfc1159e9f-registry-tls\") pod \"image-registry-66df7c8f76-gd9h2\" (UID: \"5c0d0319-7c0b-4418-98dc-41bfc1159e9f\") " pod="openshift-image-registry/image-registry-66df7c8f76-gd9h2" Jan 30 13:13:52 crc kubenswrapper[5039]: I0130 13:13:52.108293 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5c0d0319-7c0b-4418-98dc-41bfc1159e9f-registry-certificates\") pod \"image-registry-66df7c8f76-gd9h2\" (UID: \"5c0d0319-7c0b-4418-98dc-41bfc1159e9f\") " pod="openshift-image-registry/image-registry-66df7c8f76-gd9h2" Jan 30 13:13:52 crc kubenswrapper[5039]: I0130 13:13:52.108340 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5c0d0319-7c0b-4418-98dc-41bfc1159e9f-bound-sa-token\") pod \"image-registry-66df7c8f76-gd9h2\" (UID: \"5c0d0319-7c0b-4418-98dc-41bfc1159e9f\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-gd9h2" Jan 30 13:13:52 crc kubenswrapper[5039]: I0130 13:13:52.108758 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5c0d0319-7c0b-4418-98dc-41bfc1159e9f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-gd9h2\" (UID: \"5c0d0319-7c0b-4418-98dc-41bfc1159e9f\") " pod="openshift-image-registry/image-registry-66df7c8f76-gd9h2" Jan 30 13:13:52 crc kubenswrapper[5039]: I0130 13:13:52.109347 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5c0d0319-7c0b-4418-98dc-41bfc1159e9f-trusted-ca\") pod \"image-registry-66df7c8f76-gd9h2\" (UID: \"5c0d0319-7c0b-4418-98dc-41bfc1159e9f\") " pod="openshift-image-registry/image-registry-66df7c8f76-gd9h2" Jan 30 13:13:52 crc kubenswrapper[5039]: I0130 13:13:52.109508 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5c0d0319-7c0b-4418-98dc-41bfc1159e9f-registry-certificates\") pod \"image-registry-66df7c8f76-gd9h2\" (UID: \"5c0d0319-7c0b-4418-98dc-41bfc1159e9f\") " pod="openshift-image-registry/image-registry-66df7c8f76-gd9h2" Jan 30 13:13:52 crc kubenswrapper[5039]: I0130 13:13:52.115423 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5c0d0319-7c0b-4418-98dc-41bfc1159e9f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-gd9h2\" (UID: \"5c0d0319-7c0b-4418-98dc-41bfc1159e9f\") " pod="openshift-image-registry/image-registry-66df7c8f76-gd9h2" Jan 30 13:13:52 crc kubenswrapper[5039]: I0130 13:13:52.117824 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5c0d0319-7c0b-4418-98dc-41bfc1159e9f-registry-tls\") pod \"image-registry-66df7c8f76-gd9h2\" (UID: \"5c0d0319-7c0b-4418-98dc-41bfc1159e9f\") " pod="openshift-image-registry/image-registry-66df7c8f76-gd9h2" Jan 30 13:13:52 crc kubenswrapper[5039]: I0130 13:13:52.129212 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fn54q\" (UniqueName: \"kubernetes.io/projected/5c0d0319-7c0b-4418-98dc-41bfc1159e9f-kube-api-access-fn54q\") pod \"image-registry-66df7c8f76-gd9h2\" (UID: \"5c0d0319-7c0b-4418-98dc-41bfc1159e9f\") " pod="openshift-image-registry/image-registry-66df7c8f76-gd9h2" Jan 30 13:13:52 crc kubenswrapper[5039]: I0130 13:13:52.131367 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5c0d0319-7c0b-4418-98dc-41bfc1159e9f-bound-sa-token\") pod \"image-registry-66df7c8f76-gd9h2\" (UID: \"5c0d0319-7c0b-4418-98dc-41bfc1159e9f\") " pod="openshift-image-registry/image-registry-66df7c8f76-gd9h2" Jan 30 13:13:52 crc kubenswrapper[5039]: I0130 13:13:52.164163 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-gd9h2" Jan 30 13:13:52 crc kubenswrapper[5039]: I0130 13:13:52.355444 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-gd9h2"] Jan 30 13:13:52 crc kubenswrapper[5039]: I0130 13:13:52.849863 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-gd9h2" event={"ID":"5c0d0319-7c0b-4418-98dc-41bfc1159e9f","Type":"ContainerStarted","Data":"f401bc96aba790d6b95f406a14efdbf32c6d822ebe6cdf965ef877ad0ab9d856"} Jan 30 13:13:52 crc kubenswrapper[5039]: I0130 13:13:52.849929 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-gd9h2" event={"ID":"5c0d0319-7c0b-4418-98dc-41bfc1159e9f","Type":"ContainerStarted","Data":"e4ca0e46fe9e52d9a2530b1e06aff2d7169160b630b654d00bf2b8f33a1ff82f"} Jan 30 13:13:52 crc kubenswrapper[5039]: I0130 13:13:52.850264 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-gd9h2" Jan 30 13:13:52 crc kubenswrapper[5039]: I0130 13:13:52.869950 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-gd9h2" podStartSLOduration=1.86993152 podStartE2EDuration="1.86993152s" podCreationTimestamp="2026-01-30 13:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:13:52.869628212 +0000 UTC m=+597.530309479" watchObservedRunningTime="2026-01-30 13:13:52.86993152 +0000 UTC m=+597.530612777" Jan 30 13:14:07 crc kubenswrapper[5039]: I0130 13:14:07.742108 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:14:07 crc kubenswrapper[5039]: I0130 13:14:07.742787 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:14:12 crc kubenswrapper[5039]: I0130 13:14:12.173641 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-gd9h2" Jan 30 13:14:12 crc kubenswrapper[5039]: I0130 13:14:12.239852 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-v2vm5"] Jan 30 13:14:37 crc kubenswrapper[5039]: I0130 13:14:37.281502 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" podUID="0185664b-147e-4a84-9dc0-31ea880e9db4" containerName="registry" containerID="cri-o://e1d40021d5a013a692a76080e08f2b03f89b6ae92605572c547e16383cb57a9b" gracePeriod=30 Jan 30 13:14:37 crc kubenswrapper[5039]: I0130 13:14:37.592271 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:14:37 crc kubenswrapper[5039]: I0130 13:14:37.606955 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0185664b-147e-4a84-9dc0-31ea880e9db4-registry-certificates\") pod \"0185664b-147e-4a84-9dc0-31ea880e9db4\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " Jan 30 13:14:37 crc kubenswrapper[5039]: I0130 13:14:37.606995 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8lmj\" (UniqueName: \"kubernetes.io/projected/0185664b-147e-4a84-9dc0-31ea880e9db4-kube-api-access-r8lmj\") pod \"0185664b-147e-4a84-9dc0-31ea880e9db4\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " Jan 30 13:14:37 crc kubenswrapper[5039]: I0130 13:14:37.607078 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0185664b-147e-4a84-9dc0-31ea880e9db4-registry-tls\") pod \"0185664b-147e-4a84-9dc0-31ea880e9db4\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " Jan 30 13:14:37 crc kubenswrapper[5039]: I0130 13:14:37.607109 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0185664b-147e-4a84-9dc0-31ea880e9db4-bound-sa-token\") pod \"0185664b-147e-4a84-9dc0-31ea880e9db4\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " Jan 30 13:14:37 crc kubenswrapper[5039]: I0130 13:14:37.607184 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0185664b-147e-4a84-9dc0-31ea880e9db4-installation-pull-secrets\") pod \"0185664b-147e-4a84-9dc0-31ea880e9db4\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " Jan 30 13:14:37 crc kubenswrapper[5039]: I0130 13:14:37.607219 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0185664b-147e-4a84-9dc0-31ea880e9db4-ca-trust-extracted\") pod \"0185664b-147e-4a84-9dc0-31ea880e9db4\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " Jan 30 13:14:37 crc kubenswrapper[5039]: I0130 13:14:37.607991 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0185664b-147e-4a84-9dc0-31ea880e9db4-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "0185664b-147e-4a84-9dc0-31ea880e9db4" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:14:37 crc kubenswrapper[5039]: I0130 13:14:37.608401 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"0185664b-147e-4a84-9dc0-31ea880e9db4\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " Jan 30 13:14:37 crc kubenswrapper[5039]: I0130 13:14:37.608433 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0185664b-147e-4a84-9dc0-31ea880e9db4-trusted-ca\") pod \"0185664b-147e-4a84-9dc0-31ea880e9db4\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " Jan 30 13:14:37 crc kubenswrapper[5039]: I0130 13:14:37.608719 5039 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0185664b-147e-4a84-9dc0-31ea880e9db4-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 30 13:14:37 crc kubenswrapper[5039]: I0130 13:14:37.609042 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0185664b-147e-4a84-9dc0-31ea880e9db4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "0185664b-147e-4a84-9dc0-31ea880e9db4" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:14:37 crc kubenswrapper[5039]: I0130 13:14:37.613205 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0185664b-147e-4a84-9dc0-31ea880e9db4-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "0185664b-147e-4a84-9dc0-31ea880e9db4" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:14:37 crc kubenswrapper[5039]: I0130 13:14:37.618578 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0185664b-147e-4a84-9dc0-31ea880e9db4-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "0185664b-147e-4a84-9dc0-31ea880e9db4" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:14:37 crc kubenswrapper[5039]: I0130 13:14:37.623490 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0185664b-147e-4a84-9dc0-31ea880e9db4-kube-api-access-r8lmj" (OuterVolumeSpecName: "kube-api-access-r8lmj") pod "0185664b-147e-4a84-9dc0-31ea880e9db4" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4"). InnerVolumeSpecName "kube-api-access-r8lmj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:14:37 crc kubenswrapper[5039]: I0130 13:14:37.623808 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0185664b-147e-4a84-9dc0-31ea880e9db4-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "0185664b-147e-4a84-9dc0-31ea880e9db4" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:14:37 crc kubenswrapper[5039]: I0130 13:14:37.624218 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0185664b-147e-4a84-9dc0-31ea880e9db4-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "0185664b-147e-4a84-9dc0-31ea880e9db4" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:14:37 crc kubenswrapper[5039]: I0130 13:14:37.637918 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "0185664b-147e-4a84-9dc0-31ea880e9db4" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 30 13:14:37 crc kubenswrapper[5039]: I0130 13:14:37.709847 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8lmj\" (UniqueName: \"kubernetes.io/projected/0185664b-147e-4a84-9dc0-31ea880e9db4-kube-api-access-r8lmj\") on node \"crc\" DevicePath \"\"" Jan 30 13:14:37 crc kubenswrapper[5039]: I0130 13:14:37.709888 5039 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0185664b-147e-4a84-9dc0-31ea880e9db4-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 30 13:14:37 crc kubenswrapper[5039]: I0130 13:14:37.709899 5039 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0185664b-147e-4a84-9dc0-31ea880e9db4-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 13:14:37 crc kubenswrapper[5039]: I0130 13:14:37.709907 5039 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0185664b-147e-4a84-9dc0-31ea880e9db4-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 30 13:14:37 crc kubenswrapper[5039]: I0130 13:14:37.709916 5039 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0185664b-147e-4a84-9dc0-31ea880e9db4-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 30 13:14:37 crc kubenswrapper[5039]: I0130 13:14:37.709924 5039 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0185664b-147e-4a84-9dc0-31ea880e9db4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:14:37 crc kubenswrapper[5039]: I0130 13:14:37.743132 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:14:37 crc kubenswrapper[5039]: I0130 13:14:37.743245 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:14:37 crc kubenswrapper[5039]: I0130 13:14:37.743862 5039 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-t2btn" Jan 30 13:14:37 crc kubenswrapper[5039]: I0130 13:14:37.745135 5039 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"560662c6d7483c88aebafefdba92626eb1886b5341dc13222aa008d4b7d631c7"} pod="openshift-machine-config-operator/machine-config-daemon-t2btn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 13:14:37 crc kubenswrapper[5039]: I0130 13:14:37.745243 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" containerID="cri-o://560662c6d7483c88aebafefdba92626eb1886b5341dc13222aa008d4b7d631c7" gracePeriod=600 Jan 30 13:14:38 crc kubenswrapper[5039]: I0130 13:14:38.144185 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerDied","Data":"560662c6d7483c88aebafefdba92626eb1886b5341dc13222aa008d4b7d631c7"} Jan 30 13:14:38 crc kubenswrapper[5039]: I0130 13:14:38.144548 5039 scope.go:117] "RemoveContainer" containerID="0547d064d7c4b7297a756320ff8227bd0d0a0f4e9eca68fc753c08aa07c16fca" Jan 30 13:14:38 crc kubenswrapper[5039]: I0130 13:14:38.144190 5039 generic.go:334] "Generic (PLEG): container finished" podID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerID="560662c6d7483c88aebafefdba92626eb1886b5341dc13222aa008d4b7d631c7" exitCode=0 Jan 30 13:14:38 crc kubenswrapper[5039]: I0130 13:14:38.144667 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerStarted","Data":"dedbd81127092d3084480626ab10e6f0037d218190f1d21a46aaffac18d8903c"} Jan 30 13:14:38 crc kubenswrapper[5039]: I0130 13:14:38.147352 5039 generic.go:334] "Generic (PLEG): container finished" podID="0185664b-147e-4a84-9dc0-31ea880e9db4" containerID="e1d40021d5a013a692a76080e08f2b03f89b6ae92605572c547e16383cb57a9b" exitCode=0 Jan 30 13:14:38 crc kubenswrapper[5039]: I0130 13:14:38.147384 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" event={"ID":"0185664b-147e-4a84-9dc0-31ea880e9db4","Type":"ContainerDied","Data":"e1d40021d5a013a692a76080e08f2b03f89b6ae92605572c547e16383cb57a9b"} Jan 30 13:14:38 crc kubenswrapper[5039]: I0130 13:14:38.147404 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" event={"ID":"0185664b-147e-4a84-9dc0-31ea880e9db4","Type":"ContainerDied","Data":"14ef90e3cdef13211956d89d4a3d153760b6e2bccefbbfcedfc9f509521480bd"} Jan 30 13:14:38 crc kubenswrapper[5039]: I0130 13:14:38.147439 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:14:38 crc kubenswrapper[5039]: I0130 13:14:38.164209 5039 scope.go:117] "RemoveContainer" containerID="e1d40021d5a013a692a76080e08f2b03f89b6ae92605572c547e16383cb57a9b" Jan 30 13:14:38 crc kubenswrapper[5039]: I0130 13:14:38.178643 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-v2vm5"] Jan 30 13:14:38 crc kubenswrapper[5039]: I0130 13:14:38.181978 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-v2vm5"] Jan 30 13:14:38 crc kubenswrapper[5039]: I0130 13:14:38.184214 5039 scope.go:117] "RemoveContainer" containerID="e1d40021d5a013a692a76080e08f2b03f89b6ae92605572c547e16383cb57a9b" Jan 30 13:14:38 crc kubenswrapper[5039]: E0130 13:14:38.184636 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1d40021d5a013a692a76080e08f2b03f89b6ae92605572c547e16383cb57a9b\": container with ID starting with e1d40021d5a013a692a76080e08f2b03f89b6ae92605572c547e16383cb57a9b not found: ID does not exist" containerID="e1d40021d5a013a692a76080e08f2b03f89b6ae92605572c547e16383cb57a9b" Jan 30 13:14:38 crc kubenswrapper[5039]: I0130 13:14:38.184682 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1d40021d5a013a692a76080e08f2b03f89b6ae92605572c547e16383cb57a9b"} err="failed to get container status \"e1d40021d5a013a692a76080e08f2b03f89b6ae92605572c547e16383cb57a9b\": rpc error: code = NotFound desc = could not find container \"e1d40021d5a013a692a76080e08f2b03f89b6ae92605572c547e16383cb57a9b\": container with ID starting with e1d40021d5a013a692a76080e08f2b03f89b6ae92605572c547e16383cb57a9b not found: ID does not exist" Jan 30 13:14:40 crc kubenswrapper[5039]: I0130 13:14:40.102879 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0185664b-147e-4a84-9dc0-31ea880e9db4" path="/var/lib/kubelet/pods/0185664b-147e-4a84-9dc0-31ea880e9db4/volumes" Jan 30 13:15:00 crc kubenswrapper[5039]: I0130 13:15:00.207365 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496315-dxgkx"] Jan 30 13:15:00 crc kubenswrapper[5039]: E0130 13:15:00.209591 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0185664b-147e-4a84-9dc0-31ea880e9db4" containerName="registry" Jan 30 13:15:00 crc kubenswrapper[5039]: I0130 13:15:00.209610 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="0185664b-147e-4a84-9dc0-31ea880e9db4" containerName="registry" Jan 30 13:15:00 crc kubenswrapper[5039]: I0130 13:15:00.209781 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="0185664b-147e-4a84-9dc0-31ea880e9db4" containerName="registry" Jan 30 13:15:00 crc kubenswrapper[5039]: I0130 13:15:00.210244 5039 util.go:30] "No sandbox for pod can be found. 
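The RemoveContainer exchange above shows a race the kubelet tolerates: by the time it asks cri-o for the status of e1d40021..., the container is already gone, so the runtime answers NotFound and the kubelet logs "DeleteContainer returned error" while the pod removal proceeds anyway. A sketch of that idempotent-delete pattern; errNotFound and the helpers are invented stand-ins for the runtime's gRPC error:

package main

import (
	"errors"
	"fmt"
)

// errNotFound stands in for the gRPC NotFound the runtime returned above.
var errNotFound = errors.New("container not found: ID does not exist")

// containerStatus simulates a status query for an already-removed container.
func containerStatus(id string) error { return errNotFound }

// removeContainer treats NotFound as "already done" rather than a failure,
// so cleanup stays idempotent even if the runtime raced ahead of us.
func removeContainer(id string) error {
	if err := containerStatus(id); err != nil {
		if errors.Is(err, errNotFound) {
			fmt.Printf("container %s already removed; nothing to do\n", id)
			return nil
		}
		return err // a real transport or runtime failure
	}
	// ... the actual RemoveContainer call would go here ...
	return nil
}

func main() {
	_ = removeContainer("e1d40021d5a0")
}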
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496315-dxgkx" Jan 30 13:15:00 crc kubenswrapper[5039]: I0130 13:15:00.215901 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqfq4\" (UniqueName: \"kubernetes.io/projected/3f9e6068-8847-4733-a7c3-5c448d66b617-kube-api-access-hqfq4\") pod \"collect-profiles-29496315-dxgkx\" (UID: \"3f9e6068-8847-4733-a7c3-5c448d66b617\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496315-dxgkx" Jan 30 13:15:00 crc kubenswrapper[5039]: I0130 13:15:00.215963 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f9e6068-8847-4733-a7c3-5c448d66b617-config-volume\") pod \"collect-profiles-29496315-dxgkx\" (UID: \"3f9e6068-8847-4733-a7c3-5c448d66b617\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496315-dxgkx" Jan 30 13:15:00 crc kubenswrapper[5039]: I0130 13:15:00.216044 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f9e6068-8847-4733-a7c3-5c448d66b617-secret-volume\") pod \"collect-profiles-29496315-dxgkx\" (UID: \"3f9e6068-8847-4733-a7c3-5c448d66b617\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496315-dxgkx" Jan 30 13:15:00 crc kubenswrapper[5039]: I0130 13:15:00.216572 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 13:15:00 crc kubenswrapper[5039]: I0130 13:15:00.216593 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 13:15:00 crc kubenswrapper[5039]: I0130 13:15:00.217788 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496315-dxgkx"] Jan 30 13:15:00 crc kubenswrapper[5039]: I0130 13:15:00.317061 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f9e6068-8847-4733-a7c3-5c448d66b617-secret-volume\") pod \"collect-profiles-29496315-dxgkx\" (UID: \"3f9e6068-8847-4733-a7c3-5c448d66b617\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496315-dxgkx" Jan 30 13:15:00 crc kubenswrapper[5039]: I0130 13:15:00.317134 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqfq4\" (UniqueName: \"kubernetes.io/projected/3f9e6068-8847-4733-a7c3-5c448d66b617-kube-api-access-hqfq4\") pod \"collect-profiles-29496315-dxgkx\" (UID: \"3f9e6068-8847-4733-a7c3-5c448d66b617\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496315-dxgkx" Jan 30 13:15:00 crc kubenswrapper[5039]: I0130 13:15:00.317157 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f9e6068-8847-4733-a7c3-5c448d66b617-config-volume\") pod \"collect-profiles-29496315-dxgkx\" (UID: \"3f9e6068-8847-4733-a7c3-5c448d66b617\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496315-dxgkx" Jan 30 13:15:00 crc kubenswrapper[5039]: I0130 13:15:00.318096 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f9e6068-8847-4733-a7c3-5c448d66b617-config-volume\") pod 
\"collect-profiles-29496315-dxgkx\" (UID: \"3f9e6068-8847-4733-a7c3-5c448d66b617\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496315-dxgkx" Jan 30 13:15:00 crc kubenswrapper[5039]: I0130 13:15:00.325081 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f9e6068-8847-4733-a7c3-5c448d66b617-secret-volume\") pod \"collect-profiles-29496315-dxgkx\" (UID: \"3f9e6068-8847-4733-a7c3-5c448d66b617\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496315-dxgkx" Jan 30 13:15:00 crc kubenswrapper[5039]: I0130 13:15:00.337526 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqfq4\" (UniqueName: \"kubernetes.io/projected/3f9e6068-8847-4733-a7c3-5c448d66b617-kube-api-access-hqfq4\") pod \"collect-profiles-29496315-dxgkx\" (UID: \"3f9e6068-8847-4733-a7c3-5c448d66b617\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496315-dxgkx" Jan 30 13:15:00 crc kubenswrapper[5039]: I0130 13:15:00.527680 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496315-dxgkx" Jan 30 13:15:00 crc kubenswrapper[5039]: I0130 13:15:00.938780 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496315-dxgkx"] Jan 30 13:15:01 crc kubenswrapper[5039]: I0130 13:15:01.292342 5039 generic.go:334] "Generic (PLEG): container finished" podID="3f9e6068-8847-4733-a7c3-5c448d66b617" containerID="10d1ac2c646075e76b4174576c1433c77115b49e44dfe3193ecacbb1149b525d" exitCode=0 Jan 30 13:15:01 crc kubenswrapper[5039]: I0130 13:15:01.292379 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496315-dxgkx" event={"ID":"3f9e6068-8847-4733-a7c3-5c448d66b617","Type":"ContainerDied","Data":"10d1ac2c646075e76b4174576c1433c77115b49e44dfe3193ecacbb1149b525d"} Jan 30 13:15:01 crc kubenswrapper[5039]: I0130 13:15:01.292400 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496315-dxgkx" event={"ID":"3f9e6068-8847-4733-a7c3-5c448d66b617","Type":"ContainerStarted","Data":"077bc525586a0408e53418a82d2639d82101a0a0ca9757df4e6919b97c87cde9"} Jan 30 13:15:02 crc kubenswrapper[5039]: I0130 13:15:02.502165 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496315-dxgkx" Jan 30 13:15:02 crc kubenswrapper[5039]: I0130 13:15:02.649649 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f9e6068-8847-4733-a7c3-5c448d66b617-secret-volume\") pod \"3f9e6068-8847-4733-a7c3-5c448d66b617\" (UID: \"3f9e6068-8847-4733-a7c3-5c448d66b617\") " Jan 30 13:15:02 crc kubenswrapper[5039]: I0130 13:15:02.649825 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqfq4\" (UniqueName: \"kubernetes.io/projected/3f9e6068-8847-4733-a7c3-5c448d66b617-kube-api-access-hqfq4\") pod \"3f9e6068-8847-4733-a7c3-5c448d66b617\" (UID: \"3f9e6068-8847-4733-a7c3-5c448d66b617\") " Jan 30 13:15:02 crc kubenswrapper[5039]: I0130 13:15:02.649933 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f9e6068-8847-4733-a7c3-5c448d66b617-config-volume\") pod \"3f9e6068-8847-4733-a7c3-5c448d66b617\" (UID: \"3f9e6068-8847-4733-a7c3-5c448d66b617\") " Jan 30 13:15:02 crc kubenswrapper[5039]: I0130 13:15:02.650904 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f9e6068-8847-4733-a7c3-5c448d66b617-config-volume" (OuterVolumeSpecName: "config-volume") pod "3f9e6068-8847-4733-a7c3-5c448d66b617" (UID: "3f9e6068-8847-4733-a7c3-5c448d66b617"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:15:02 crc kubenswrapper[5039]: I0130 13:15:02.651489 5039 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f9e6068-8847-4733-a7c3-5c448d66b617-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 13:15:02 crc kubenswrapper[5039]: I0130 13:15:02.656052 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f9e6068-8847-4733-a7c3-5c448d66b617-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3f9e6068-8847-4733-a7c3-5c448d66b617" (UID: "3f9e6068-8847-4733-a7c3-5c448d66b617"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:15:02 crc kubenswrapper[5039]: I0130 13:15:02.656138 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f9e6068-8847-4733-a7c3-5c448d66b617-kube-api-access-hqfq4" (OuterVolumeSpecName: "kube-api-access-hqfq4") pod "3f9e6068-8847-4733-a7c3-5c448d66b617" (UID: "3f9e6068-8847-4733-a7c3-5c448d66b617"). InnerVolumeSpecName "kube-api-access-hqfq4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:15:02 crc kubenswrapper[5039]: I0130 13:15:02.752787 5039 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f9e6068-8847-4733-a7c3-5c448d66b617-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 13:15:02 crc kubenswrapper[5039]: I0130 13:15:02.752982 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqfq4\" (UniqueName: \"kubernetes.io/projected/3f9e6068-8847-4733-a7c3-5c448d66b617-kube-api-access-hqfq4\") on node \"crc\" DevicePath \"\"" Jan 30 13:15:03 crc kubenswrapper[5039]: I0130 13:15:03.309758 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496315-dxgkx" event={"ID":"3f9e6068-8847-4733-a7c3-5c448d66b617","Type":"ContainerDied","Data":"077bc525586a0408e53418a82d2639d82101a0a0ca9757df4e6919b97c87cde9"} Jan 30 13:15:03 crc kubenswrapper[5039]: I0130 13:15:03.309835 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="077bc525586a0408e53418a82d2639d82101a0a0ca9757df4e6919b97c87cde9" Jan 30 13:15:03 crc kubenswrapper[5039]: I0130 13:15:03.309899 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496315-dxgkx" Jan 30 13:16:33 crc kubenswrapper[5039]: I0130 13:16:33.766705 5039 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.431876 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-87gqd"] Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.433825 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="nbdb" containerID="cri-o://abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7" gracePeriod=30 Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.433927 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="sbdb" containerID="cri-o://d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430" gracePeriod=30 Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.433993 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="kube-rbac-proxy-node" containerID="cri-o://afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e" gracePeriod=30 Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.433994 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99" gracePeriod=30 Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.434068 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="ovn-acl-logging" 
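Every "Killing container with a grace period" line above carries a gracePeriod (600s for the machine-config-daemon restarts, 30s for each ovnkube-node container): the runtime signals the container to stop, waits up to that long for a clean exit, then force-kills. A sketch of that wait-or-force pattern; the channel stands in for the runtime's stop notification and is not a kubelet API.

package main

import (
	"fmt"
	"time"
)

// killWithGracePeriod signals a container to stop, waits up to grace for it
// to exit, and force-kills if the deadline passes.
func killWithGracePeriod(name string, exited <-chan struct{}, grace time.Duration) {
	fmt.Printf("Killing container %q with a grace period gracePeriod=%v\n", name, grace)
	select {
	case <-exited:
		// The common case in this log: containers exit with exitCode=0
		// well inside the grace period.
		fmt.Printf("%q exited within the grace period\n", name)
	case <-time.After(grace):
		fmt.Printf("%q did not exit in time; force killing\n", name)
	}
}

func main() {
	exited := make(chan struct{})
	go func() { close(exited) }() // pretend the container handles the stop signal promptly
	killWithGracePeriod("ovn-controller", exited, 30*time.Second)
}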
containerID="cri-o://7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e" gracePeriod=30 Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.434027 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="northd" containerID="cri-o://5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2" gracePeriod=30 Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.434151 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="ovn-controller" containerID="cri-o://82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f" gracePeriod=30 Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.483937 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="ovnkube-controller" containerID="cri-o://88b7472f1a788fcddd3204796a9e0b186a8bcfd3b1b88542ec91b052803068c2" gracePeriod=30 Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.774095 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-87gqd_4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f/ovnkube-controller/3.log" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.777627 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-87gqd_4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f/ovn-acl-logging/0.log" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.778180 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-87gqd_4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f/ovn-controller/0.log" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.778810 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.843577 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jqpfs"] Jan 30 13:17:00 crc kubenswrapper[5039]: E0130 13:17:00.843804 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="ovnkube-controller" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.843819 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="ovnkube-controller" Jan 30 13:17:00 crc kubenswrapper[5039]: E0130 13:17:00.843829 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="ovn-controller" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.843838 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="ovn-controller" Jan 30 13:17:00 crc kubenswrapper[5039]: E0130 13:17:00.843850 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="kube-rbac-proxy-ovn-metrics" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.843858 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="kube-rbac-proxy-ovn-metrics" Jan 30 13:17:00 crc kubenswrapper[5039]: E0130 13:17:00.843866 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f9e6068-8847-4733-a7c3-5c448d66b617" containerName="collect-profiles" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.843874 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f9e6068-8847-4733-a7c3-5c448d66b617" containerName="collect-profiles" Jan 30 13:17:00 crc kubenswrapper[5039]: E0130 13:17:00.843882 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="northd" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.843890 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="northd" Jan 30 13:17:00 crc kubenswrapper[5039]: E0130 13:17:00.843902 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="nbdb" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.843909 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="nbdb" Jan 30 13:17:00 crc kubenswrapper[5039]: E0130 13:17:00.843921 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="sbdb" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.843928 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="sbdb" Jan 30 13:17:00 crc kubenswrapper[5039]: E0130 13:17:00.843936 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="ovnkube-controller" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.843943 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="ovnkube-controller" Jan 30 13:17:00 crc kubenswrapper[5039]: E0130 13:17:00.843953 5039 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="ovnkube-controller" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.844139 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="ovnkube-controller" Jan 30 13:17:00 crc kubenswrapper[5039]: E0130 13:17:00.844158 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="kube-rbac-proxy-node" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.844166 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="kube-rbac-proxy-node" Jan 30 13:17:00 crc kubenswrapper[5039]: E0130 13:17:00.844179 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="kubecfg-setup" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.844186 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="kubecfg-setup" Jan 30 13:17:00 crc kubenswrapper[5039]: E0130 13:17:00.844196 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="ovn-acl-logging" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.844203 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="ovn-acl-logging" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.844311 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="ovnkube-controller" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.844323 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="ovnkube-controller" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.844331 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="nbdb" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.844342 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="ovn-controller" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.844354 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f9e6068-8847-4733-a7c3-5c448d66b617" containerName="collect-profiles" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.844364 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="ovnkube-controller" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.844374 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="kube-rbac-proxy-node" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.844387 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="northd" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.844398 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="ovn-acl-logging" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.844409 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="kube-rbac-proxy-ovn-metrics" Jan 30 13:17:00 crc kubenswrapper[5039]: 
I0130 13:17:00.844416 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="sbdb" Jan 30 13:17:00 crc kubenswrapper[5039]: E0130 13:17:00.844520 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="ovnkube-controller" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.844530 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="ovnkube-controller" Jan 30 13:17:00 crc kubenswrapper[5039]: E0130 13:17:00.844543 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="ovnkube-controller" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.844550 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="ovnkube-controller" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.844666 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="ovnkube-controller" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.844888 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="ovnkube-controller" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.846487 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.932921 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-etc-openvswitch\") pod \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.933006 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-run-openvswitch\") pod \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.933043 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" (UID: "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.933103 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-run-netns\") pod \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.933112 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" (UID: "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.933159 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-var-lib-openvswitch\") pod \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.933193 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-kubelet\") pod \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.933204 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" (UID: "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.933234 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" (UID: "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.933235 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8ztz\" (UniqueName: \"kubernetes.io/projected/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-kube-api-access-x8ztz\") pod \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.933294 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-run-ovn-kubernetes\") pod \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.933320 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-node-log\") pod \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.933349 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-cni-bin\") pod \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.933373 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-env-overrides\") pod \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.933403 5039 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-log-socket\") pod \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.933428 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-ovnkube-script-lib\") pod \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.933448 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.933477 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-systemd-units\") pod \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.933505 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-cni-netd\") pod \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.933533 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-ovnkube-config\") pod \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.933550 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-run-systemd\") pod \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.933603 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" (UID: "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.933613 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-ovn-node-metrics-cert\") pod \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.933677 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-slash\") pod \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.933629 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-log-socket" (OuterVolumeSpecName: "log-socket") pod "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" (UID: "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.933710 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-run-ovn\") pod \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.933651 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" (UID: "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.933671 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-node-log" (OuterVolumeSpecName: "node-log") pod "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" (UID: "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.933689 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" (UID: "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.933758 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-slash" (OuterVolumeSpecName: "host-slash") pod "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" (UID: "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.933808 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" (UID: "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.933851 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" (UID: "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.933883 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" (UID: "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.933908 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" (UID: "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.934199 5039 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.934218 5039 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.934222 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" (UID: "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.934231 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" (UID: "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.934229 5039 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.934268 5039 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.934291 5039 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.934304 5039 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.934315 5039 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.934326 5039 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-node-log\") on node \"crc\" DevicePath \"\"" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.934337 5039 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.934349 5039 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-log-socket\") on node \"crc\" DevicePath \"\"" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.934360 5039 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.934372 5039 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.934254 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" (UID: "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.934383 5039 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.934409 5039 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-slash\") on node \"crc\" DevicePath \"\"" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.939555 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" (UID: "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.940507 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-kube-api-access-x8ztz" (OuterVolumeSpecName: "kube-api-access-x8ztz") pod "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" (UID: "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f"). InnerVolumeSpecName "kube-api-access-x8ztz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.947212 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" (UID: "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.035740 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-run-systemd\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.035781 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-run-openvswitch\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.035802 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/415da7b1-40a2-4d99-8945-8d4bb54ca33e-env-overrides\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.035824 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-node-log\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.035841 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/415da7b1-40a2-4d99-8945-8d4bb54ca33e-ovnkube-script-lib\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.035911 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/415da7b1-40a2-4d99-8945-8d4bb54ca33e-ovn-node-metrics-cert\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.035957 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-host-slash\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.035979 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/415da7b1-40a2-4d99-8945-8d4bb54ca33e-ovnkube-config\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.036007 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-host-cni-bin\") pod 
\"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.036332 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-etc-openvswitch\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.036420 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.036478 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-log-socket\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.036517 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-host-cni-netd\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.036659 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-host-kubelet\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.036718 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-systemd-units\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.036809 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94l8h\" (UniqueName: \"kubernetes.io/projected/415da7b1-40a2-4d99-8945-8d4bb54ca33e-kube-api-access-94l8h\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.036848 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-host-run-ovn-kubernetes\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.036878 5039 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-host-run-netns\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.036901 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-var-lib-openvswitch\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.036951 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-run-ovn\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.037129 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8ztz\" (UniqueName: \"kubernetes.io/projected/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-kube-api-access-x8ztz\") on node \"crc\" DevicePath \"\"" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.037158 5039 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.037172 5039 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.037185 5039 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.037196 5039 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.037205 5039 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.138754 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/415da7b1-40a2-4d99-8945-8d4bb54ca33e-env-overrides\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.138842 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-node-log\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 
13:17:01.138892 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/415da7b1-40a2-4d99-8945-8d4bb54ca33e-ovnkube-script-lib\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.138939 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/415da7b1-40a2-4d99-8945-8d4bb54ca33e-ovn-node-metrics-cert\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.138952 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-node-log\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.138990 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-host-slash\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.139067 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/415da7b1-40a2-4d99-8945-8d4bb54ca33e-ovnkube-config\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.139123 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-host-cni-bin\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.139167 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-etc-openvswitch\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.139226 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.139248 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-host-slash\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.139305 5039 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.139278 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-log-socket\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.139300 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-etc-openvswitch\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.139270 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-host-cni-bin\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.139340 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-log-socket\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.139463 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/415da7b1-40a2-4d99-8945-8d4bb54ca33e-env-overrides\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.139566 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/415da7b1-40a2-4d99-8945-8d4bb54ca33e-ovnkube-script-lib\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.139761 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-host-cni-netd\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.139818 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-host-kubelet\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.139834 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-host-cni-netd\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.139840 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-systemd-units\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.139920 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-systemd-units\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.139927 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-host-kubelet\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.139998 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94l8h\" (UniqueName: \"kubernetes.io/projected/415da7b1-40a2-4d99-8945-8d4bb54ca33e-kube-api-access-94l8h\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.140082 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-host-run-ovn-kubernetes\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.140107 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-host-run-netns\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.140134 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-host-run-netns\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.140162 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-host-run-ovn-kubernetes\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.140172 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-var-lib-openvswitch\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.140214 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-run-ovn\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.140242 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-run-systemd\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.140265 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-run-openvswitch\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.140310 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-run-systemd\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.140324 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-run-openvswitch\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.140331 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-run-ovn\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.140358 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/415da7b1-40a2-4d99-8945-8d4bb54ca33e-ovnkube-config\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.140406 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-var-lib-openvswitch\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.144127 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/415da7b1-40a2-4d99-8945-8d4bb54ca33e-ovn-node-metrics-cert\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") 
" pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.158863 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94l8h\" (UniqueName: \"kubernetes.io/projected/415da7b1-40a2-4d99-8945-8d4bb54ca33e-kube-api-access-94l8h\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.161313 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: W0130 13:17:01.196964 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod415da7b1_40a2_4d99_8945_8d4bb54ca33e.slice/crio-35b2e0b2349f0738bd985f53d9c391f9ea66041f7bbe427cec7a36aacf7d0b5b WatchSource:0}: Error finding container 35b2e0b2349f0738bd985f53d9c391f9ea66041f7bbe427cec7a36aacf7d0b5b: Status 404 returned error can't find the container with id 35b2e0b2349f0738bd985f53d9c391f9ea66041f7bbe427cec7a36aacf7d0b5b Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.233911 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rmqgh_81e001d6-9163-47f7-b2b0-b21b2979b869/kube-multus/2.log" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.234681 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rmqgh_81e001d6-9163-47f7-b2b0-b21b2979b869/kube-multus/1.log" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.234804 5039 generic.go:334] "Generic (PLEG): container finished" podID="81e001d6-9163-47f7-b2b0-b21b2979b869" containerID="8a5be779fcfa0c537fbca9096a93ca1979214ab806f591962a6347d5333a9af5" exitCode=2 Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.234978 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rmqgh" event={"ID":"81e001d6-9163-47f7-b2b0-b21b2979b869","Type":"ContainerDied","Data":"8a5be779fcfa0c537fbca9096a93ca1979214ab806f591962a6347d5333a9af5"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.235092 5039 scope.go:117] "RemoveContainer" containerID="c3173dc179804ca55df951c63acc29e7179a356b48e7e77276931f44678c8f94" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.235840 5039 scope.go:117] "RemoveContainer" containerID="8a5be779fcfa0c537fbca9096a93ca1979214ab806f591962a6347d5333a9af5" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.238868 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-87gqd_4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f/ovnkube-controller/3.log" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.242252 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-87gqd_4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f/ovn-acl-logging/0.log" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.243496 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-87gqd_4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f/ovn-controller/0.log" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.243846 5039 generic.go:334] "Generic (PLEG): container finished" podID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerID="88b7472f1a788fcddd3204796a9e0b186a8bcfd3b1b88542ec91b052803068c2" exitCode=0 Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.243882 5039 generic.go:334] 
"Generic (PLEG): container finished" podID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerID="d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430" exitCode=0 Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.243896 5039 generic.go:334] "Generic (PLEG): container finished" podID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerID="abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7" exitCode=0 Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.243906 5039 generic.go:334] "Generic (PLEG): container finished" podID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerID="5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2" exitCode=0 Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.243917 5039 generic.go:334] "Generic (PLEG): container finished" podID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerID="28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99" exitCode=0 Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.243926 5039 generic.go:334] "Generic (PLEG): container finished" podID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerID="afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e" exitCode=0 Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.243934 5039 generic.go:334] "Generic (PLEG): container finished" podID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerID="7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e" exitCode=143 Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.243943 5039 generic.go:334] "Generic (PLEG): container finished" podID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerID="82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f" exitCode=143 Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.243999 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" event={"ID":"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f","Type":"ContainerDied","Data":"88b7472f1a788fcddd3204796a9e0b186a8bcfd3b1b88542ec91b052803068c2"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244073 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" event={"ID":"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f","Type":"ContainerDied","Data":"d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244091 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" event={"ID":"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f","Type":"ContainerDied","Data":"abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244105 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" event={"ID":"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f","Type":"ContainerDied","Data":"5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244117 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" event={"ID":"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f","Type":"ContainerDied","Data":"28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244131 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" 
event={"ID":"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f","Type":"ContainerDied","Data":"afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244144 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"88b7472f1a788fcddd3204796a9e0b186a8bcfd3b1b88542ec91b052803068c2"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244156 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244164 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244171 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244180 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244187 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244193 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244200 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244207 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244213 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244223 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" event={"ID":"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f","Type":"ContainerDied","Data":"7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244234 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"88b7472f1a788fcddd3204796a9e0b186a8bcfd3b1b88542ec91b052803068c2"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244242 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244249 5039 pod_container_deletor.go:114] 
"Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244256 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244265 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244272 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244279 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244287 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244294 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244300 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244310 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" event={"ID":"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f","Type":"ContainerDied","Data":"82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244320 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"88b7472f1a788fcddd3204796a9e0b186a8bcfd3b1b88542ec91b052803068c2"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244328 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244336 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244343 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244350 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244356 5039 pod_container_deletor.go:114] 
"Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244363 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244370 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244376 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244385 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244394 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" event={"ID":"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f","Type":"ContainerDied","Data":"f53a831ea6aba64393f200f4f37b459c3392f070edda416f102077934db13cfd"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244406 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"88b7472f1a788fcddd3204796a9e0b186a8bcfd3b1b88542ec91b052803068c2"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244416 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244424 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244432 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244441 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244447 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244454 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244461 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244467 5039 pod_container_deletor.go:114] 
"Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244475 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244577 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.249260 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" event={"ID":"415da7b1-40a2-4d99-8945-8d4bb54ca33e","Type":"ContainerStarted","Data":"35b2e0b2349f0738bd985f53d9c391f9ea66041f7bbe427cec7a36aacf7d0b5b"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.309686 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-87gqd"] Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.314941 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-87gqd"] Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.318051 5039 scope.go:117] "RemoveContainer" containerID="88b7472f1a788fcddd3204796a9e0b186a8bcfd3b1b88542ec91b052803068c2" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.334186 5039 scope.go:117] "RemoveContainer" containerID="c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.352660 5039 scope.go:117] "RemoveContainer" containerID="d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.368803 5039 scope.go:117] "RemoveContainer" containerID="abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.390387 5039 scope.go:117] "RemoveContainer" containerID="5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.466687 5039 scope.go:117] "RemoveContainer" containerID="28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.481684 5039 scope.go:117] "RemoveContainer" containerID="afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.502371 5039 scope.go:117] "RemoveContainer" containerID="7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.516067 5039 scope.go:117] "RemoveContainer" containerID="82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.531486 5039 scope.go:117] "RemoveContainer" containerID="6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.552708 5039 scope.go:117] "RemoveContainer" containerID="88b7472f1a788fcddd3204796a9e0b186a8bcfd3b1b88542ec91b052803068c2" Jan 30 13:17:01 crc kubenswrapper[5039]: E0130 13:17:01.553253 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88b7472f1a788fcddd3204796a9e0b186a8bcfd3b1b88542ec91b052803068c2\": container with ID starting with 
88b7472f1a788fcddd3204796a9e0b186a8bcfd3b1b88542ec91b052803068c2 not found: ID does not exist" containerID="88b7472f1a788fcddd3204796a9e0b186a8bcfd3b1b88542ec91b052803068c2" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.553291 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88b7472f1a788fcddd3204796a9e0b186a8bcfd3b1b88542ec91b052803068c2"} err="failed to get container status \"88b7472f1a788fcddd3204796a9e0b186a8bcfd3b1b88542ec91b052803068c2\": rpc error: code = NotFound desc = could not find container \"88b7472f1a788fcddd3204796a9e0b186a8bcfd3b1b88542ec91b052803068c2\": container with ID starting with 88b7472f1a788fcddd3204796a9e0b186a8bcfd3b1b88542ec91b052803068c2 not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.553316 5039 scope.go:117] "RemoveContainer" containerID="c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977" Jan 30 13:17:01 crc kubenswrapper[5039]: E0130 13:17:01.553647 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977\": container with ID starting with c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977 not found: ID does not exist" containerID="c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.553672 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977"} err="failed to get container status \"c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977\": rpc error: code = NotFound desc = could not find container \"c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977\": container with ID starting with c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977 not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.553687 5039 scope.go:117] "RemoveContainer" containerID="d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430" Jan 30 13:17:01 crc kubenswrapper[5039]: E0130 13:17:01.553966 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\": container with ID starting with d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430 not found: ID does not exist" containerID="d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.554003 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430"} err="failed to get container status \"d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\": rpc error: code = NotFound desc = could not find container \"d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\": container with ID starting with d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430 not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.554045 5039 scope.go:117] "RemoveContainer" containerID="abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7" Jan 30 13:17:01 crc kubenswrapper[5039]: E0130 13:17:01.554331 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\": container with ID starting with abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7 not found: ID does not exist" containerID="abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.554369 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7"} err="failed to get container status \"abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\": rpc error: code = NotFound desc = could not find container \"abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\": container with ID starting with abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7 not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.554388 5039 scope.go:117] "RemoveContainer" containerID="5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2" Jan 30 13:17:01 crc kubenswrapper[5039]: E0130 13:17:01.554674 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\": container with ID starting with 5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2 not found: ID does not exist" containerID="5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.554708 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2"} err="failed to get container status \"5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\": rpc error: code = NotFound desc = could not find container \"5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\": container with ID starting with 5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2 not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.554730 5039 scope.go:117] "RemoveContainer" containerID="28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99" Jan 30 13:17:01 crc kubenswrapper[5039]: E0130 13:17:01.554966 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\": container with ID starting with 28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99 not found: ID does not exist" containerID="28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.554997 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99"} err="failed to get container status \"28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\": rpc error: code = NotFound desc = could not find container \"28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\": container with ID starting with 28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99 not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.555040 5039 scope.go:117] "RemoveContainer" 
containerID="afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e" Jan 30 13:17:01 crc kubenswrapper[5039]: E0130 13:17:01.555246 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\": container with ID starting with afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e not found: ID does not exist" containerID="afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.555277 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e"} err="failed to get container status \"afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\": rpc error: code = NotFound desc = could not find container \"afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\": container with ID starting with afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.555300 5039 scope.go:117] "RemoveContainer" containerID="7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e" Jan 30 13:17:01 crc kubenswrapper[5039]: E0130 13:17:01.555508 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\": container with ID starting with 7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e not found: ID does not exist" containerID="7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.555540 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e"} err="failed to get container status \"7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\": rpc error: code = NotFound desc = could not find container \"7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\": container with ID starting with 7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.555594 5039 scope.go:117] "RemoveContainer" containerID="82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f" Jan 30 13:17:01 crc kubenswrapper[5039]: E0130 13:17:01.555815 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\": container with ID starting with 82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f not found: ID does not exist" containerID="82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.555848 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f"} err="failed to get container status \"82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\": rpc error: code = NotFound desc = could not find container \"82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\": container with ID starting with 
82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.555868 5039 scope.go:117] "RemoveContainer" containerID="6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705" Jan 30 13:17:01 crc kubenswrapper[5039]: E0130 13:17:01.556162 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\": container with ID starting with 6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705 not found: ID does not exist" containerID="6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.556217 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705"} err="failed to get container status \"6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\": rpc error: code = NotFound desc = could not find container \"6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\": container with ID starting with 6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705 not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.556299 5039 scope.go:117] "RemoveContainer" containerID="88b7472f1a788fcddd3204796a9e0b186a8bcfd3b1b88542ec91b052803068c2" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.556597 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88b7472f1a788fcddd3204796a9e0b186a8bcfd3b1b88542ec91b052803068c2"} err="failed to get container status \"88b7472f1a788fcddd3204796a9e0b186a8bcfd3b1b88542ec91b052803068c2\": rpc error: code = NotFound desc = could not find container \"88b7472f1a788fcddd3204796a9e0b186a8bcfd3b1b88542ec91b052803068c2\": container with ID starting with 88b7472f1a788fcddd3204796a9e0b186a8bcfd3b1b88542ec91b052803068c2 not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.556624 5039 scope.go:117] "RemoveContainer" containerID="c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.556883 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977"} err="failed to get container status \"c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977\": rpc error: code = NotFound desc = could not find container \"c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977\": container with ID starting with c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977 not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.556905 5039 scope.go:117] "RemoveContainer" containerID="d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.557166 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430"} err="failed to get container status \"d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\": rpc error: code = NotFound desc = could not find container \"d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\": container with ID starting with 
d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430 not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.557190 5039 scope.go:117] "RemoveContainer" containerID="abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.557402 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7"} err="failed to get container status \"abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\": rpc error: code = NotFound desc = could not find container \"abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\": container with ID starting with abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7 not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.557431 5039 scope.go:117] "RemoveContainer" containerID="5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.557680 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2"} err="failed to get container status \"5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\": rpc error: code = NotFound desc = could not find container \"5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\": container with ID starting with 5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2 not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.557710 5039 scope.go:117] "RemoveContainer" containerID="28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.558297 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99"} err="failed to get container status \"28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\": rpc error: code = NotFound desc = could not find container \"28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\": container with ID starting with 28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99 not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.558334 5039 scope.go:117] "RemoveContainer" containerID="afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.558583 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e"} err="failed to get container status \"afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\": rpc error: code = NotFound desc = could not find container \"afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\": container with ID starting with afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.558612 5039 scope.go:117] "RemoveContainer" containerID="7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.558865 5039 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e"} err="failed to get container status \"7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\": rpc error: code = NotFound desc = could not find container \"7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\": container with ID starting with 7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.558895 5039 scope.go:117] "RemoveContainer" containerID="82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.559222 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f"} err="failed to get container status \"82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\": rpc error: code = NotFound desc = could not find container \"82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\": container with ID starting with 82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.559252 5039 scope.go:117] "RemoveContainer" containerID="6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.559846 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705"} err="failed to get container status \"6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\": rpc error: code = NotFound desc = could not find container \"6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\": container with ID starting with 6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705 not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.559877 5039 scope.go:117] "RemoveContainer" containerID="88b7472f1a788fcddd3204796a9e0b186a8bcfd3b1b88542ec91b052803068c2" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.560379 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88b7472f1a788fcddd3204796a9e0b186a8bcfd3b1b88542ec91b052803068c2"} err="failed to get container status \"88b7472f1a788fcddd3204796a9e0b186a8bcfd3b1b88542ec91b052803068c2\": rpc error: code = NotFound desc = could not find container \"88b7472f1a788fcddd3204796a9e0b186a8bcfd3b1b88542ec91b052803068c2\": container with ID starting with 88b7472f1a788fcddd3204796a9e0b186a8bcfd3b1b88542ec91b052803068c2 not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.560411 5039 scope.go:117] "RemoveContainer" containerID="c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.560652 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977"} err="failed to get container status \"c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977\": rpc error: code = NotFound desc = could not find container \"c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977\": container with ID starting with c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977 not found: ID does not exist" Jan 
30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.560684 5039 scope.go:117] "RemoveContainer" containerID="d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.560931 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430"} err="failed to get container status \"d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\": rpc error: code = NotFound desc = could not find container \"d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\": container with ID starting with d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430 not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.560962 5039 scope.go:117] "RemoveContainer" containerID="abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.561412 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7"} err="failed to get container status \"abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\": rpc error: code = NotFound desc = could not find container \"abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\": container with ID starting with abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7 not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.561437 5039 scope.go:117] "RemoveContainer" containerID="5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.561949 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2"} err="failed to get container status \"5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\": rpc error: code = NotFound desc = could not find container \"5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\": container with ID starting with 5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2 not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.561984 5039 scope.go:117] "RemoveContainer" containerID="28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.562262 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99"} err="failed to get container status \"28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\": rpc error: code = NotFound desc = could not find container \"28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\": container with ID starting with 28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99 not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.562292 5039 scope.go:117] "RemoveContainer" containerID="afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.562582 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e"} err="failed to get container status 
\"afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\": rpc error: code = NotFound desc = could not find container \"afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\": container with ID starting with afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.562612 5039 scope.go:117] "RemoveContainer" containerID="7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.562902 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e"} err="failed to get container status \"7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\": rpc error: code = NotFound desc = could not find container \"7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\": container with ID starting with 7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.562932 5039 scope.go:117] "RemoveContainer" containerID="82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.563291 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f"} err="failed to get container status \"82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\": rpc error: code = NotFound desc = could not find container \"82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\": container with ID starting with 82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.563323 5039 scope.go:117] "RemoveContainer" containerID="6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.563571 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705"} err="failed to get container status \"6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\": rpc error: code = NotFound desc = could not find container \"6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\": container with ID starting with 6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705 not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.563600 5039 scope.go:117] "RemoveContainer" containerID="88b7472f1a788fcddd3204796a9e0b186a8bcfd3b1b88542ec91b052803068c2" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.563843 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88b7472f1a788fcddd3204796a9e0b186a8bcfd3b1b88542ec91b052803068c2"} err="failed to get container status \"88b7472f1a788fcddd3204796a9e0b186a8bcfd3b1b88542ec91b052803068c2\": rpc error: code = NotFound desc = could not find container \"88b7472f1a788fcddd3204796a9e0b186a8bcfd3b1b88542ec91b052803068c2\": container with ID starting with 88b7472f1a788fcddd3204796a9e0b186a8bcfd3b1b88542ec91b052803068c2 not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.563873 5039 scope.go:117] "RemoveContainer" 
containerID="c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.564167 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977"} err="failed to get container status \"c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977\": rpc error: code = NotFound desc = could not find container \"c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977\": container with ID starting with c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977 not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.564202 5039 scope.go:117] "RemoveContainer" containerID="d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.564507 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430"} err="failed to get container status \"d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\": rpc error: code = NotFound desc = could not find container \"d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\": container with ID starting with d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430 not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.564540 5039 scope.go:117] "RemoveContainer" containerID="abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.564828 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7"} err="failed to get container status \"abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\": rpc error: code = NotFound desc = could not find container \"abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\": container with ID starting with abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7 not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.564859 5039 scope.go:117] "RemoveContainer" containerID="5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.565134 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2"} err="failed to get container status \"5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\": rpc error: code = NotFound desc = could not find container \"5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\": container with ID starting with 5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2 not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.565165 5039 scope.go:117] "RemoveContainer" containerID="28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.565376 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99"} err="failed to get container status \"28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\": rpc error: code = NotFound desc = could not find 
container \"28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\": container with ID starting with 28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99 not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.565403 5039 scope.go:117] "RemoveContainer" containerID="afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.565629 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e"} err="failed to get container status \"afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\": rpc error: code = NotFound desc = could not find container \"afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\": container with ID starting with afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.565656 5039 scope.go:117] "RemoveContainer" containerID="7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.565856 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e"} err="failed to get container status \"7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\": rpc error: code = NotFound desc = could not find container \"7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\": container with ID starting with 7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.565884 5039 scope.go:117] "RemoveContainer" containerID="82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.566132 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f"} err="failed to get container status \"82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\": rpc error: code = NotFound desc = could not find container \"82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\": container with ID starting with 82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.566161 5039 scope.go:117] "RemoveContainer" containerID="6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.566409 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705"} err="failed to get container status \"6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\": rpc error: code = NotFound desc = could not find container \"6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\": container with ID starting with 6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705 not found: ID does not exist" Jan 30 13:17:02 crc kubenswrapper[5039]: I0130 13:17:02.115100 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" path="/var/lib/kubelet/pods/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f/volumes" Jan 30 
13:17:02 crc kubenswrapper[5039]: I0130 13:17:02.259783 5039 generic.go:334] "Generic (PLEG): container finished" podID="415da7b1-40a2-4d99-8945-8d4bb54ca33e" containerID="97ea175cbdc2d82a0bba6de6539afbd3aaafa41cdf9f066d677c146a1f0b18df" exitCode=0 Jan 30 13:17:02 crc kubenswrapper[5039]: I0130 13:17:02.259921 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" event={"ID":"415da7b1-40a2-4d99-8945-8d4bb54ca33e","Type":"ContainerDied","Data":"97ea175cbdc2d82a0bba6de6539afbd3aaafa41cdf9f066d677c146a1f0b18df"} Jan 30 13:17:02 crc kubenswrapper[5039]: I0130 13:17:02.263751 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rmqgh_81e001d6-9163-47f7-b2b0-b21b2979b869/kube-multus/2.log" Jan 30 13:17:02 crc kubenswrapper[5039]: I0130 13:17:02.263969 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rmqgh" event={"ID":"81e001d6-9163-47f7-b2b0-b21b2979b869","Type":"ContainerStarted","Data":"e7d798c535c5881040086e11187aeac8638bab3a1e2f173d36ad73d081fd0b26"} Jan 30 13:17:03 crc kubenswrapper[5039]: I0130 13:17:03.275589 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" event={"ID":"415da7b1-40a2-4d99-8945-8d4bb54ca33e","Type":"ContainerStarted","Data":"3b90c7e0ac495369ff4b85b16dff9b5f99449b4f6153cb1987d3ff736c5f78c2"} Jan 30 13:17:03 crc kubenswrapper[5039]: I0130 13:17:03.276001 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" event={"ID":"415da7b1-40a2-4d99-8945-8d4bb54ca33e","Type":"ContainerStarted","Data":"cc6aaf4dbaecccfb5789551c4e60491fec3ec2f2dd21caaa78b76ae5d057bbc2"} Jan 30 13:17:03 crc kubenswrapper[5039]: I0130 13:17:03.276053 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" event={"ID":"415da7b1-40a2-4d99-8945-8d4bb54ca33e","Type":"ContainerStarted","Data":"2ed92ff7f26630f97c790fc2afda7ee54b6cfb8167ac68bd5430a8228ed03a87"} Jan 30 13:17:03 crc kubenswrapper[5039]: I0130 13:17:03.276086 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" event={"ID":"415da7b1-40a2-4d99-8945-8d4bb54ca33e","Type":"ContainerStarted","Data":"e18c0fb871664d71be7b9bd5f099b8de097f170a29d04b11b8477bf013318935"} Jan 30 13:17:03 crc kubenswrapper[5039]: I0130 13:17:03.276100 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" event={"ID":"415da7b1-40a2-4d99-8945-8d4bb54ca33e","Type":"ContainerStarted","Data":"1c2d9cab50f93e979d9b36905d91db64ae42c7c4b77fdd5d39734495424e1967"} Jan 30 13:17:03 crc kubenswrapper[5039]: I0130 13:17:03.276111 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" event={"ID":"415da7b1-40a2-4d99-8945-8d4bb54ca33e","Type":"ContainerStarted","Data":"662d20e31e8fc18e48bd35ca7cb5d8a8929f3429b39564bf800d52e78617ba94"} Jan 30 13:17:05 crc kubenswrapper[5039]: I0130 13:17:05.292634 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" event={"ID":"415da7b1-40a2-4d99-8945-8d4bb54ca33e","Type":"ContainerStarted","Data":"cefb5db037c0b9d0bf4998649ed5df0101caa722fd0a28a951a33bbcf3b93815"} Jan 30 13:17:05 crc kubenswrapper[5039]: I0130 13:17:05.383817 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-8p9ft"] Jan 30 13:17:05 crc kubenswrapper[5039]: I0130 13:17:05.384843 5039 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-8p9ft" Jan 30 13:17:05 crc kubenswrapper[5039]: I0130 13:17:05.386693 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Jan 30 13:17:05 crc kubenswrapper[5039]: I0130 13:17:05.386720 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Jan 30 13:17:05 crc kubenswrapper[5039]: I0130 13:17:05.387545 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Jan 30 13:17:05 crc kubenswrapper[5039]: I0130 13:17:05.387896 5039 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-2tf92" Jan 30 13:17:05 crc kubenswrapper[5039]: I0130 13:17:05.392492 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbg8c\" (UniqueName: \"kubernetes.io/projected/4a676a4d-a7f1-4312-9c94-3a548ecf60fe-kube-api-access-mbg8c\") pod \"crc-storage-crc-8p9ft\" (UID: \"4a676a4d-a7f1-4312-9c94-3a548ecf60fe\") " pod="crc-storage/crc-storage-crc-8p9ft" Jan 30 13:17:05 crc kubenswrapper[5039]: I0130 13:17:05.392526 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/4a676a4d-a7f1-4312-9c94-3a548ecf60fe-node-mnt\") pod \"crc-storage-crc-8p9ft\" (UID: \"4a676a4d-a7f1-4312-9c94-3a548ecf60fe\") " pod="crc-storage/crc-storage-crc-8p9ft" Jan 30 13:17:05 crc kubenswrapper[5039]: I0130 13:17:05.392624 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/4a676a4d-a7f1-4312-9c94-3a548ecf60fe-crc-storage\") pod \"crc-storage-crc-8p9ft\" (UID: \"4a676a4d-a7f1-4312-9c94-3a548ecf60fe\") " pod="crc-storage/crc-storage-crc-8p9ft" Jan 30 13:17:05 crc kubenswrapper[5039]: I0130 13:17:05.494201 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbg8c\" (UniqueName: \"kubernetes.io/projected/4a676a4d-a7f1-4312-9c94-3a548ecf60fe-kube-api-access-mbg8c\") pod \"crc-storage-crc-8p9ft\" (UID: \"4a676a4d-a7f1-4312-9c94-3a548ecf60fe\") " pod="crc-storage/crc-storage-crc-8p9ft" Jan 30 13:17:05 crc kubenswrapper[5039]: I0130 13:17:05.494256 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/4a676a4d-a7f1-4312-9c94-3a548ecf60fe-node-mnt\") pod \"crc-storage-crc-8p9ft\" (UID: \"4a676a4d-a7f1-4312-9c94-3a548ecf60fe\") " pod="crc-storage/crc-storage-crc-8p9ft" Jan 30 13:17:05 crc kubenswrapper[5039]: I0130 13:17:05.494303 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/4a676a4d-a7f1-4312-9c94-3a548ecf60fe-crc-storage\") pod \"crc-storage-crc-8p9ft\" (UID: \"4a676a4d-a7f1-4312-9c94-3a548ecf60fe\") " pod="crc-storage/crc-storage-crc-8p9ft" Jan 30 13:17:05 crc kubenswrapper[5039]: I0130 13:17:05.494536 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/4a676a4d-a7f1-4312-9c94-3a548ecf60fe-node-mnt\") pod \"crc-storage-crc-8p9ft\" (UID: \"4a676a4d-a7f1-4312-9c94-3a548ecf60fe\") " pod="crc-storage/crc-storage-crc-8p9ft" Jan 30 13:17:05 crc kubenswrapper[5039]: I0130 13:17:05.495105 5039 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/4a676a4d-a7f1-4312-9c94-3a548ecf60fe-crc-storage\") pod \"crc-storage-crc-8p9ft\" (UID: \"4a676a4d-a7f1-4312-9c94-3a548ecf60fe\") " pod="crc-storage/crc-storage-crc-8p9ft" Jan 30 13:17:05 crc kubenswrapper[5039]: I0130 13:17:05.519163 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbg8c\" (UniqueName: \"kubernetes.io/projected/4a676a4d-a7f1-4312-9c94-3a548ecf60fe-kube-api-access-mbg8c\") pod \"crc-storage-crc-8p9ft\" (UID: \"4a676a4d-a7f1-4312-9c94-3a548ecf60fe\") " pod="crc-storage/crc-storage-crc-8p9ft" Jan 30 13:17:05 crc kubenswrapper[5039]: I0130 13:17:05.703376 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-8p9ft" Jan 30 13:17:05 crc kubenswrapper[5039]: E0130 13:17:05.727173 5039 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-8p9ft_crc-storage_4a676a4d-a7f1-4312-9c94-3a548ecf60fe_0(8e764c56d84e9a2492adf670be73a122e972feb040e280b6defba5972fd7cd47): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 13:17:05 crc kubenswrapper[5039]: E0130 13:17:05.727317 5039 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-8p9ft_crc-storage_4a676a4d-a7f1-4312-9c94-3a548ecf60fe_0(8e764c56d84e9a2492adf670be73a122e972feb040e280b6defba5972fd7cd47): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-8p9ft" Jan 30 13:17:05 crc kubenswrapper[5039]: E0130 13:17:05.727469 5039 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-8p9ft_crc-storage_4a676a4d-a7f1-4312-9c94-3a548ecf60fe_0(8e764c56d84e9a2492adf670be73a122e972feb040e280b6defba5972fd7cd47): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-8p9ft" Jan 30 13:17:05 crc kubenswrapper[5039]: E0130 13:17:05.727548 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-8p9ft_crc-storage(4a676a4d-a7f1-4312-9c94-3a548ecf60fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-8p9ft_crc-storage(4a676a4d-a7f1-4312-9c94-3a548ecf60fe)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-8p9ft_crc-storage_4a676a4d-a7f1-4312-9c94-3a548ecf60fe_0(8e764c56d84e9a2492adf670be73a122e972feb040e280b6defba5972fd7cd47): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="crc-storage/crc-storage-crc-8p9ft" podUID="4a676a4d-a7f1-4312-9c94-3a548ecf60fe" Jan 30 13:17:07 crc kubenswrapper[5039]: I0130 13:17:07.742475 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:17:07 crc kubenswrapper[5039]: I0130 13:17:07.742840 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:17:08 crc kubenswrapper[5039]: I0130 13:17:08.210961 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-8p9ft"] Jan 30 13:17:08 crc kubenswrapper[5039]: I0130 13:17:08.211150 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-8p9ft" Jan 30 13:17:08 crc kubenswrapper[5039]: I0130 13:17:08.211613 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-8p9ft" Jan 30 13:17:08 crc kubenswrapper[5039]: E0130 13:17:08.244447 5039 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-8p9ft_crc-storage_4a676a4d-a7f1-4312-9c94-3a548ecf60fe_0(4d901d4afc11359fccb6d8dc3136c055fef8c587f4fb91cdcbe2ea1181fbdb59): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 13:17:08 crc kubenswrapper[5039]: E0130 13:17:08.244888 5039 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-8p9ft_crc-storage_4a676a4d-a7f1-4312-9c94-3a548ecf60fe_0(4d901d4afc11359fccb6d8dc3136c055fef8c587f4fb91cdcbe2ea1181fbdb59): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-8p9ft" Jan 30 13:17:08 crc kubenswrapper[5039]: E0130 13:17:08.244934 5039 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-8p9ft_crc-storage_4a676a4d-a7f1-4312-9c94-3a548ecf60fe_0(4d901d4afc11359fccb6d8dc3136c055fef8c587f4fb91cdcbe2ea1181fbdb59): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-8p9ft" Jan 30 13:17:08 crc kubenswrapper[5039]: E0130 13:17:08.245001 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-8p9ft_crc-storage(4a676a4d-a7f1-4312-9c94-3a548ecf60fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-8p9ft_crc-storage(4a676a4d-a7f1-4312-9c94-3a548ecf60fe)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-8p9ft_crc-storage_4a676a4d-a7f1-4312-9c94-3a548ecf60fe_0(4d901d4afc11359fccb6d8dc3136c055fef8c587f4fb91cdcbe2ea1181fbdb59): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="crc-storage/crc-storage-crc-8p9ft" podUID="4a676a4d-a7f1-4312-9c94-3a548ecf60fe"
Jan 30 13:17:08 crc kubenswrapper[5039]: I0130 13:17:08.397273 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" event={"ID":"415da7b1-40a2-4d99-8945-8d4bb54ca33e","Type":"ContainerStarted","Data":"8c54eab62cea87d23c2936bc0483cce8707caf9c8b91ff98813df72d550a5899"}
Jan 30 13:17:08 crc kubenswrapper[5039]: I0130 13:17:08.397656 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs"
Jan 30 13:17:08 crc kubenswrapper[5039]: I0130 13:17:08.397708 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs"
Jan 30 13:17:08 crc kubenswrapper[5039]: I0130 13:17:08.432740 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs"
Jan 30 13:17:08 crc kubenswrapper[5039]: I0130 13:17:08.432829 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" podStartSLOduration=8.432812005 podStartE2EDuration="8.432812005s" podCreationTimestamp="2026-01-30 13:17:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:17:08.431442727 +0000 UTC m=+793.092123964" watchObservedRunningTime="2026-01-30 13:17:08.432812005 +0000 UTC m=+793.093493232"
Jan 30 13:17:09 crc kubenswrapper[5039]: I0130 13:17:09.402402 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs"
Jan 30 13:17:09 crc kubenswrapper[5039]: I0130 13:17:09.426996 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs"
Jan 30 13:17:19 crc kubenswrapper[5039]: I0130 13:17:19.093130 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-8p9ft"
Jan 30 13:17:19 crc kubenswrapper[5039]: I0130 13:17:19.094208 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-8p9ft"
Jan 30 13:17:19 crc kubenswrapper[5039]: I0130 13:17:19.290498 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-8p9ft"]
Jan 30 13:17:19 crc kubenswrapper[5039]: W0130 13:17:19.295316 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a676a4d_a7f1_4312_9c94_3a548ecf60fe.slice/crio-608eb143cbf9ec29900a92deaeffe0f8e6ab650e1f651b94432e41c01fe47adc WatchSource:0}: Error finding container 608eb143cbf9ec29900a92deaeffe0f8e6ab650e1f651b94432e41c01fe47adc: Status 404 returned error can't find the container with id 608eb143cbf9ec29900a92deaeffe0f8e6ab650e1f651b94432e41c01fe47adc
Jan 30 13:17:19 crc kubenswrapper[5039]: I0130 13:17:19.297299 5039 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 30 13:17:19 crc kubenswrapper[5039]: I0130 13:17:19.454694 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-8p9ft" event={"ID":"4a676a4d-a7f1-4312-9c94-3a548ecf60fe","Type":"ContainerStarted","Data":"608eb143cbf9ec29900a92deaeffe0f8e6ab650e1f651b94432e41c01fe47adc"}
Jan 30 13:17:22 crc kubenswrapper[5039]: I0130 13:17:22.470052 5039 generic.go:334] "Generic (PLEG): container finished" podID="4a676a4d-a7f1-4312-9c94-3a548ecf60fe" containerID="57af12523273c14976448075bd1ef2ff414c8ea00dad6d36e88b1fc02fdf4164" exitCode=0
Jan 30 13:17:22 crc kubenswrapper[5039]: I0130 13:17:22.470149 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-8p9ft" event={"ID":"4a676a4d-a7f1-4312-9c94-3a548ecf60fe","Type":"ContainerDied","Data":"57af12523273c14976448075bd1ef2ff414c8ea00dad6d36e88b1fc02fdf4164"}
Jan 30 13:17:23 crc kubenswrapper[5039]: I0130 13:17:23.722897 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-8p9ft"
Jan 30 13:17:23 crc kubenswrapper[5039]: I0130 13:17:23.884366 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/4a676a4d-a7f1-4312-9c94-3a548ecf60fe-crc-storage\") pod \"4a676a4d-a7f1-4312-9c94-3a548ecf60fe\" (UID: \"4a676a4d-a7f1-4312-9c94-3a548ecf60fe\") "
Jan 30 13:17:23 crc kubenswrapper[5039]: I0130 13:17:23.884556 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/4a676a4d-a7f1-4312-9c94-3a548ecf60fe-node-mnt\") pod \"4a676a4d-a7f1-4312-9c94-3a548ecf60fe\" (UID: \"4a676a4d-a7f1-4312-9c94-3a548ecf60fe\") "
Jan 30 13:17:23 crc kubenswrapper[5039]: I0130 13:17:23.884651 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbg8c\" (UniqueName: \"kubernetes.io/projected/4a676a4d-a7f1-4312-9c94-3a548ecf60fe-kube-api-access-mbg8c\") pod \"4a676a4d-a7f1-4312-9c94-3a548ecf60fe\" (UID: \"4a676a4d-a7f1-4312-9c94-3a548ecf60fe\") "
Jan 30 13:17:23 crc kubenswrapper[5039]: I0130 13:17:23.884786 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a676a4d-a7f1-4312-9c94-3a548ecf60fe-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "4a676a4d-a7f1-4312-9c94-3a548ecf60fe" (UID: "4a676a4d-a7f1-4312-9c94-3a548ecf60fe"). InnerVolumeSpecName "node-mnt".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:17:23 crc kubenswrapper[5039]: I0130 13:17:23.885160 5039 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/4a676a4d-a7f1-4312-9c94-3a548ecf60fe-node-mnt\") on node \"crc\" DevicePath \"\"" Jan 30 13:17:23 crc kubenswrapper[5039]: I0130 13:17:23.892255 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a676a4d-a7f1-4312-9c94-3a548ecf60fe-kube-api-access-mbg8c" (OuterVolumeSpecName: "kube-api-access-mbg8c") pod "4a676a4d-a7f1-4312-9c94-3a548ecf60fe" (UID: "4a676a4d-a7f1-4312-9c94-3a548ecf60fe"). InnerVolumeSpecName "kube-api-access-mbg8c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:17:23 crc kubenswrapper[5039]: I0130 13:17:23.899832 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a676a4d-a7f1-4312-9c94-3a548ecf60fe-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "4a676a4d-a7f1-4312-9c94-3a548ecf60fe" (UID: "4a676a4d-a7f1-4312-9c94-3a548ecf60fe"). InnerVolumeSpecName "crc-storage". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:17:23 crc kubenswrapper[5039]: I0130 13:17:23.986724 5039 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/4a676a4d-a7f1-4312-9c94-3a548ecf60fe-crc-storage\") on node \"crc\" DevicePath \"\"" Jan 30 13:17:23 crc kubenswrapper[5039]: I0130 13:17:23.986771 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbg8c\" (UniqueName: \"kubernetes.io/projected/4a676a4d-a7f1-4312-9c94-3a548ecf60fe-kube-api-access-mbg8c\") on node \"crc\" DevicePath \"\"" Jan 30 13:17:24 crc kubenswrapper[5039]: I0130 13:17:24.484796 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-8p9ft" event={"ID":"4a676a4d-a7f1-4312-9c94-3a548ecf60fe","Type":"ContainerDied","Data":"608eb143cbf9ec29900a92deaeffe0f8e6ab650e1f651b94432e41c01fe47adc"} Jan 30 13:17:24 crc kubenswrapper[5039]: I0130 13:17:24.484889 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="608eb143cbf9ec29900a92deaeffe0f8e6ab650e1f651b94432e41c01fe47adc" Jan 30 13:17:24 crc kubenswrapper[5039]: I0130 13:17:24.484849 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-8p9ft" Jan 30 13:17:30 crc kubenswrapper[5039]: I0130 13:17:30.905262 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px"] Jan 30 13:17:30 crc kubenswrapper[5039]: E0130 13:17:30.906078 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a676a4d-a7f1-4312-9c94-3a548ecf60fe" containerName="storage" Jan 30 13:17:30 crc kubenswrapper[5039]: I0130 13:17:30.906094 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a676a4d-a7f1-4312-9c94-3a548ecf60fe" containerName="storage" Jan 30 13:17:30 crc kubenswrapper[5039]: I0130 13:17:30.906198 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a676a4d-a7f1-4312-9c94-3a548ecf60fe" containerName="storage" Jan 30 13:17:30 crc kubenswrapper[5039]: I0130 13:17:30.906998 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px" Jan 30 13:17:30 crc kubenswrapper[5039]: I0130 13:17:30.909872 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 30 13:17:30 crc kubenswrapper[5039]: I0130 13:17:30.921864 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px"] Jan 30 13:17:31 crc kubenswrapper[5039]: I0130 13:17:31.078065 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rdhs\" (UniqueName: \"kubernetes.io/projected/952d4cac-58bb-4f90-a5d3-23b1504e3a65-kube-api-access-8rdhs\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px\" (UID: \"952d4cac-58bb-4f90-a5d3-23b1504e3a65\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px" Jan 30 13:17:31 crc kubenswrapper[5039]: I0130 13:17:31.078175 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/952d4cac-58bb-4f90-a5d3-23b1504e3a65-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px\" (UID: \"952d4cac-58bb-4f90-a5d3-23b1504e3a65\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px" Jan 30 13:17:31 crc kubenswrapper[5039]: I0130 13:17:31.078217 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/952d4cac-58bb-4f90-a5d3-23b1504e3a65-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px\" (UID: \"952d4cac-58bb-4f90-a5d3-23b1504e3a65\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px" Jan 30 13:17:31 crc kubenswrapper[5039]: I0130 13:17:31.180057 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rdhs\" (UniqueName: \"kubernetes.io/projected/952d4cac-58bb-4f90-a5d3-23b1504e3a65-kube-api-access-8rdhs\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px\" (UID: \"952d4cac-58bb-4f90-a5d3-23b1504e3a65\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px" Jan 30 13:17:31 crc kubenswrapper[5039]: I0130 13:17:31.180131 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/952d4cac-58bb-4f90-a5d3-23b1504e3a65-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px\" (UID: \"952d4cac-58bb-4f90-a5d3-23b1504e3a65\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px" Jan 30 13:17:31 crc kubenswrapper[5039]: I0130 13:17:31.180156 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/952d4cac-58bb-4f90-a5d3-23b1504e3a65-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px\" (UID: \"952d4cac-58bb-4f90-a5d3-23b1504e3a65\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px" Jan 30 13:17:31 crc kubenswrapper[5039]: I0130 13:17:31.180756 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/952d4cac-58bb-4f90-a5d3-23b1504e3a65-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px\" (UID: \"952d4cac-58bb-4f90-a5d3-23b1504e3a65\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px" Jan 30 13:17:31 crc kubenswrapper[5039]: I0130 13:17:31.181260 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/952d4cac-58bb-4f90-a5d3-23b1504e3a65-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px\" (UID: \"952d4cac-58bb-4f90-a5d3-23b1504e3a65\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px" Jan 30 13:17:31 crc kubenswrapper[5039]: I0130 13:17:31.183532 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:31 crc kubenswrapper[5039]: I0130 13:17:31.214457 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rdhs\" (UniqueName: \"kubernetes.io/projected/952d4cac-58bb-4f90-a5d3-23b1504e3a65-kube-api-access-8rdhs\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px\" (UID: \"952d4cac-58bb-4f90-a5d3-23b1504e3a65\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px" Jan 30 13:17:31 crc kubenswrapper[5039]: I0130 13:17:31.228898 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px" Jan 30 13:17:31 crc kubenswrapper[5039]: I0130 13:17:31.441411 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px"] Jan 30 13:17:31 crc kubenswrapper[5039]: I0130 13:17:31.541960 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px" event={"ID":"952d4cac-58bb-4f90-a5d3-23b1504e3a65","Type":"ContainerStarted","Data":"32015296bd070fbce22793bbf13dbe10cf2ddecdf35a5880283d03911d7bf3c6"} Jan 30 13:17:32 crc kubenswrapper[5039]: I0130 13:17:32.547411 5039 generic.go:334] "Generic (PLEG): container finished" podID="952d4cac-58bb-4f90-a5d3-23b1504e3a65" containerID="bd2dd021d0c34aff26e5dadc1d92fdf4a751c58ec25ff7d949496beb44bea277" exitCode=0 Jan 30 13:17:32 crc kubenswrapper[5039]: I0130 13:17:32.547450 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px" event={"ID":"952d4cac-58bb-4f90-a5d3-23b1504e3a65","Type":"ContainerDied","Data":"bd2dd021d0c34aff26e5dadc1d92fdf4a751c58ec25ff7d949496beb44bea277"} Jan 30 13:17:33 crc kubenswrapper[5039]: I0130 13:17:33.267808 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pwcgm"] Jan 30 13:17:33 crc kubenswrapper[5039]: I0130 13:17:33.269392 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pwcgm" Jan 30 13:17:33 crc kubenswrapper[5039]: I0130 13:17:33.287852 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pwcgm"] Jan 30 13:17:33 crc kubenswrapper[5039]: I0130 13:17:33.409319 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hm6z\" (UniqueName: \"kubernetes.io/projected/9352f658-903f-48dc-8f81-30f357eae6c0-kube-api-access-8hm6z\") pod \"redhat-operators-pwcgm\" (UID: \"9352f658-903f-48dc-8f81-30f357eae6c0\") " pod="openshift-marketplace/redhat-operators-pwcgm" Jan 30 13:17:33 crc kubenswrapper[5039]: I0130 13:17:33.409367 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9352f658-903f-48dc-8f81-30f357eae6c0-utilities\") pod \"redhat-operators-pwcgm\" (UID: \"9352f658-903f-48dc-8f81-30f357eae6c0\") " pod="openshift-marketplace/redhat-operators-pwcgm" Jan 30 13:17:33 crc kubenswrapper[5039]: I0130 13:17:33.409407 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9352f658-903f-48dc-8f81-30f357eae6c0-catalog-content\") pod \"redhat-operators-pwcgm\" (UID: \"9352f658-903f-48dc-8f81-30f357eae6c0\") " pod="openshift-marketplace/redhat-operators-pwcgm" Jan 30 13:17:33 crc kubenswrapper[5039]: I0130 13:17:33.510440 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hm6z\" (UniqueName: \"kubernetes.io/projected/9352f658-903f-48dc-8f81-30f357eae6c0-kube-api-access-8hm6z\") pod \"redhat-operators-pwcgm\" (UID: \"9352f658-903f-48dc-8f81-30f357eae6c0\") " pod="openshift-marketplace/redhat-operators-pwcgm" Jan 30 13:17:33 crc kubenswrapper[5039]: I0130 13:17:33.510496 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9352f658-903f-48dc-8f81-30f357eae6c0-utilities\") pod \"redhat-operators-pwcgm\" (UID: \"9352f658-903f-48dc-8f81-30f357eae6c0\") " pod="openshift-marketplace/redhat-operators-pwcgm" Jan 30 13:17:33 crc kubenswrapper[5039]: I0130 13:17:33.510530 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9352f658-903f-48dc-8f81-30f357eae6c0-catalog-content\") pod \"redhat-operators-pwcgm\" (UID: \"9352f658-903f-48dc-8f81-30f357eae6c0\") " pod="openshift-marketplace/redhat-operators-pwcgm" Jan 30 13:17:33 crc kubenswrapper[5039]: I0130 13:17:33.511246 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9352f658-903f-48dc-8f81-30f357eae6c0-catalog-content\") pod \"redhat-operators-pwcgm\" (UID: \"9352f658-903f-48dc-8f81-30f357eae6c0\") " pod="openshift-marketplace/redhat-operators-pwcgm" Jan 30 13:17:33 crc kubenswrapper[5039]: I0130 13:17:33.511432 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9352f658-903f-48dc-8f81-30f357eae6c0-utilities\") pod \"redhat-operators-pwcgm\" (UID: \"9352f658-903f-48dc-8f81-30f357eae6c0\") " pod="openshift-marketplace/redhat-operators-pwcgm" Jan 30 13:17:33 crc kubenswrapper[5039]: I0130 13:17:33.536335 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-8hm6z\" (UniqueName: \"kubernetes.io/projected/9352f658-903f-48dc-8f81-30f357eae6c0-kube-api-access-8hm6z\") pod \"redhat-operators-pwcgm\" (UID: \"9352f658-903f-48dc-8f81-30f357eae6c0\") " pod="openshift-marketplace/redhat-operators-pwcgm" Jan 30 13:17:33 crc kubenswrapper[5039]: I0130 13:17:33.587021 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pwcgm" Jan 30 13:17:33 crc kubenswrapper[5039]: I0130 13:17:33.983118 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pwcgm"] Jan 30 13:17:33 crc kubenswrapper[5039]: W0130 13:17:33.989060 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9352f658_903f_48dc_8f81_30f357eae6c0.slice/crio-2010f6264a0b06a6b9772112d9b1c70591e3e88bcc0d112fc4a129a2c150b9ac WatchSource:0}: Error finding container 2010f6264a0b06a6b9772112d9b1c70591e3e88bcc0d112fc4a129a2c150b9ac: Status 404 returned error can't find the container with id 2010f6264a0b06a6b9772112d9b1c70591e3e88bcc0d112fc4a129a2c150b9ac Jan 30 13:17:34 crc kubenswrapper[5039]: I0130 13:17:34.557987 5039 generic.go:334] "Generic (PLEG): container finished" podID="9352f658-903f-48dc-8f81-30f357eae6c0" containerID="25968358191b115d7535468d4f568a7d5f7fa39f6028f133d913f2031e54d250" exitCode=0 Jan 30 13:17:34 crc kubenswrapper[5039]: I0130 13:17:34.558070 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pwcgm" event={"ID":"9352f658-903f-48dc-8f81-30f357eae6c0","Type":"ContainerDied","Data":"25968358191b115d7535468d4f568a7d5f7fa39f6028f133d913f2031e54d250"} Jan 30 13:17:34 crc kubenswrapper[5039]: I0130 13:17:34.558355 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pwcgm" event={"ID":"9352f658-903f-48dc-8f81-30f357eae6c0","Type":"ContainerStarted","Data":"2010f6264a0b06a6b9772112d9b1c70591e3e88bcc0d112fc4a129a2c150b9ac"} Jan 30 13:17:34 crc kubenswrapper[5039]: I0130 13:17:34.560516 5039 generic.go:334] "Generic (PLEG): container finished" podID="952d4cac-58bb-4f90-a5d3-23b1504e3a65" containerID="5ec8d01f176ba4b740aba20b1f25e5fb6f9b6ca89131398875c847414fecbea0" exitCode=0 Jan 30 13:17:34 crc kubenswrapper[5039]: I0130 13:17:34.560563 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px" event={"ID":"952d4cac-58bb-4f90-a5d3-23b1504e3a65","Type":"ContainerDied","Data":"5ec8d01f176ba4b740aba20b1f25e5fb6f9b6ca89131398875c847414fecbea0"} Jan 30 13:17:35 crc kubenswrapper[5039]: I0130 13:17:35.570475 5039 generic.go:334] "Generic (PLEG): container finished" podID="952d4cac-58bb-4f90-a5d3-23b1504e3a65" containerID="69aadd293c95ccd883eb581562d144e4f9b32be5a60e58d510b080bcf15369d3" exitCode=0 Jan 30 13:17:35 crc kubenswrapper[5039]: I0130 13:17:35.570773 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px" event={"ID":"952d4cac-58bb-4f90-a5d3-23b1504e3a65","Type":"ContainerDied","Data":"69aadd293c95ccd883eb581562d144e4f9b32be5a60e58d510b080bcf15369d3"} Jan 30 13:17:35 crc kubenswrapper[5039]: I0130 13:17:35.573949 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pwcgm" 
event={"ID":"9352f658-903f-48dc-8f81-30f357eae6c0","Type":"ContainerStarted","Data":"89127f506b3e6e8a220f1eb2fe3573e58c0cc5ed722a3e5c71e19c3fa67f0129"} Jan 30 13:17:36 crc kubenswrapper[5039]: I0130 13:17:36.590579 5039 generic.go:334] "Generic (PLEG): container finished" podID="9352f658-903f-48dc-8f81-30f357eae6c0" containerID="89127f506b3e6e8a220f1eb2fe3573e58c0cc5ed722a3e5c71e19c3fa67f0129" exitCode=0 Jan 30 13:17:36 crc kubenswrapper[5039]: I0130 13:17:36.590892 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pwcgm" event={"ID":"9352f658-903f-48dc-8f81-30f357eae6c0","Type":"ContainerDied","Data":"89127f506b3e6e8a220f1eb2fe3573e58c0cc5ed722a3e5c71e19c3fa67f0129"} Jan 30 13:17:36 crc kubenswrapper[5039]: I0130 13:17:36.879544 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px" Jan 30 13:17:37 crc kubenswrapper[5039]: I0130 13:17:37.055891 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/952d4cac-58bb-4f90-a5d3-23b1504e3a65-bundle\") pod \"952d4cac-58bb-4f90-a5d3-23b1504e3a65\" (UID: \"952d4cac-58bb-4f90-a5d3-23b1504e3a65\") " Jan 30 13:17:37 crc kubenswrapper[5039]: I0130 13:17:37.056002 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rdhs\" (UniqueName: \"kubernetes.io/projected/952d4cac-58bb-4f90-a5d3-23b1504e3a65-kube-api-access-8rdhs\") pod \"952d4cac-58bb-4f90-a5d3-23b1504e3a65\" (UID: \"952d4cac-58bb-4f90-a5d3-23b1504e3a65\") " Jan 30 13:17:37 crc kubenswrapper[5039]: I0130 13:17:37.056109 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/952d4cac-58bb-4f90-a5d3-23b1504e3a65-util\") pod \"952d4cac-58bb-4f90-a5d3-23b1504e3a65\" (UID: \"952d4cac-58bb-4f90-a5d3-23b1504e3a65\") " Jan 30 13:17:37 crc kubenswrapper[5039]: I0130 13:17:37.056397 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/952d4cac-58bb-4f90-a5d3-23b1504e3a65-bundle" (OuterVolumeSpecName: "bundle") pod "952d4cac-58bb-4f90-a5d3-23b1504e3a65" (UID: "952d4cac-58bb-4f90-a5d3-23b1504e3a65"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:17:37 crc kubenswrapper[5039]: I0130 13:17:37.056607 5039 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/952d4cac-58bb-4f90-a5d3-23b1504e3a65-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:17:37 crc kubenswrapper[5039]: I0130 13:17:37.066386 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/952d4cac-58bb-4f90-a5d3-23b1504e3a65-kube-api-access-8rdhs" (OuterVolumeSpecName: "kube-api-access-8rdhs") pod "952d4cac-58bb-4f90-a5d3-23b1504e3a65" (UID: "952d4cac-58bb-4f90-a5d3-23b1504e3a65"). InnerVolumeSpecName "kube-api-access-8rdhs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:17:37 crc kubenswrapper[5039]: I0130 13:17:37.074528 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/952d4cac-58bb-4f90-a5d3-23b1504e3a65-util" (OuterVolumeSpecName: "util") pod "952d4cac-58bb-4f90-a5d3-23b1504e3a65" (UID: "952d4cac-58bb-4f90-a5d3-23b1504e3a65"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:17:37 crc kubenswrapper[5039]: I0130 13:17:37.157991 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rdhs\" (UniqueName: \"kubernetes.io/projected/952d4cac-58bb-4f90-a5d3-23b1504e3a65-kube-api-access-8rdhs\") on node \"crc\" DevicePath \"\"" Jan 30 13:17:37 crc kubenswrapper[5039]: I0130 13:17:37.158039 5039 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/952d4cac-58bb-4f90-a5d3-23b1504e3a65-util\") on node \"crc\" DevicePath \"\"" Jan 30 13:17:37 crc kubenswrapper[5039]: I0130 13:17:37.600298 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pwcgm" event={"ID":"9352f658-903f-48dc-8f81-30f357eae6c0","Type":"ContainerStarted","Data":"25b5c01a470ee2bcb74b91a7441ba6bb9bac007192bfc36a51fdc59ce4d11269"} Jan 30 13:17:37 crc kubenswrapper[5039]: I0130 13:17:37.604931 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px" event={"ID":"952d4cac-58bb-4f90-a5d3-23b1504e3a65","Type":"ContainerDied","Data":"32015296bd070fbce22793bbf13dbe10cf2ddecdf35a5880283d03911d7bf3c6"} Jan 30 13:17:37 crc kubenswrapper[5039]: I0130 13:17:37.604971 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32015296bd070fbce22793bbf13dbe10cf2ddecdf35a5880283d03911d7bf3c6" Jan 30 13:17:37 crc kubenswrapper[5039]: I0130 13:17:37.605068 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px" Jan 30 13:17:37 crc kubenswrapper[5039]: I0130 13:17:37.620832 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pwcgm" podStartSLOduration=2.114775681 podStartE2EDuration="4.620810579s" podCreationTimestamp="2026-01-30 13:17:33 +0000 UTC" firstStartedPulling="2026-01-30 13:17:34.55966688 +0000 UTC m=+819.220348117" lastFinishedPulling="2026-01-30 13:17:37.065701768 +0000 UTC m=+821.726383015" observedRunningTime="2026-01-30 13:17:37.61641723 +0000 UTC m=+822.277098477" watchObservedRunningTime="2026-01-30 13:17:37.620810579 +0000 UTC m=+822.281491816" Jan 30 13:17:37 crc kubenswrapper[5039]: I0130 13:17:37.742828 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:17:37 crc kubenswrapper[5039]: I0130 13:17:37.742902 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:17:41 crc kubenswrapper[5039]: I0130 13:17:41.398556 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-b8fk6"] Jan 30 13:17:41 crc kubenswrapper[5039]: E0130 13:17:41.399054 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="952d4cac-58bb-4f90-a5d3-23b1504e3a65" containerName="util" Jan 30 13:17:41 crc kubenswrapper[5039]: I0130 13:17:41.399067 5039 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="952d4cac-58bb-4f90-a5d3-23b1504e3a65" containerName="util" Jan 30 13:17:41 crc kubenswrapper[5039]: E0130 13:17:41.399080 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="952d4cac-58bb-4f90-a5d3-23b1504e3a65" containerName="extract" Jan 30 13:17:41 crc kubenswrapper[5039]: I0130 13:17:41.399086 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="952d4cac-58bb-4f90-a5d3-23b1504e3a65" containerName="extract" Jan 30 13:17:41 crc kubenswrapper[5039]: E0130 13:17:41.399096 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="952d4cac-58bb-4f90-a5d3-23b1504e3a65" containerName="pull" Jan 30 13:17:41 crc kubenswrapper[5039]: I0130 13:17:41.399102 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="952d4cac-58bb-4f90-a5d3-23b1504e3a65" containerName="pull" Jan 30 13:17:41 crc kubenswrapper[5039]: I0130 13:17:41.399196 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="952d4cac-58bb-4f90-a5d3-23b1504e3a65" containerName="extract" Jan 30 13:17:41 crc kubenswrapper[5039]: I0130 13:17:41.399551 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-b8fk6" Jan 30 13:17:41 crc kubenswrapper[5039]: I0130 13:17:41.401568 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-p956v" Jan 30 13:17:41 crc kubenswrapper[5039]: I0130 13:17:41.402355 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 30 13:17:41 crc kubenswrapper[5039]: I0130 13:17:41.403809 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 30 13:17:41 crc kubenswrapper[5039]: I0130 13:17:41.451900 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-b8fk6"] Jan 30 13:17:41 crc kubenswrapper[5039]: I0130 13:17:41.525111 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5ptp\" (UniqueName: \"kubernetes.io/projected/c4341387-fba2-41e9-a279-5c1071b11a2d-kube-api-access-w5ptp\") pod \"nmstate-operator-646758c888-b8fk6\" (UID: \"c4341387-fba2-41e9-a279-5c1071b11a2d\") " pod="openshift-nmstate/nmstate-operator-646758c888-b8fk6" Jan 30 13:17:41 crc kubenswrapper[5039]: I0130 13:17:41.625775 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5ptp\" (UniqueName: \"kubernetes.io/projected/c4341387-fba2-41e9-a279-5c1071b11a2d-kube-api-access-w5ptp\") pod \"nmstate-operator-646758c888-b8fk6\" (UID: \"c4341387-fba2-41e9-a279-5c1071b11a2d\") " pod="openshift-nmstate/nmstate-operator-646758c888-b8fk6" Jan 30 13:17:41 crc kubenswrapper[5039]: I0130 13:17:41.663633 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5ptp\" (UniqueName: \"kubernetes.io/projected/c4341387-fba2-41e9-a279-5c1071b11a2d-kube-api-access-w5ptp\") pod \"nmstate-operator-646758c888-b8fk6\" (UID: \"c4341387-fba2-41e9-a279-5c1071b11a2d\") " pod="openshift-nmstate/nmstate-operator-646758c888-b8fk6" Jan 30 13:17:41 crc kubenswrapper[5039]: I0130 13:17:41.713459 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-b8fk6" Jan 30 13:17:41 crc kubenswrapper[5039]: I0130 13:17:41.973494 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-b8fk6"] Jan 30 13:17:42 crc kubenswrapper[5039]: I0130 13:17:42.630040 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-b8fk6" event={"ID":"c4341387-fba2-41e9-a279-5c1071b11a2d","Type":"ContainerStarted","Data":"cf63c8477d2bbfae5a530f6a2480b8585b0fa23bb6ba3b956e665e0714b370f0"} Jan 30 13:17:43 crc kubenswrapper[5039]: I0130 13:17:43.587527 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pwcgm" Jan 30 13:17:43 crc kubenswrapper[5039]: I0130 13:17:43.587614 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pwcgm" Jan 30 13:17:43 crc kubenswrapper[5039]: I0130 13:17:43.634034 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pwcgm" Jan 30 13:17:43 crc kubenswrapper[5039]: I0130 13:17:43.680293 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pwcgm" Jan 30 13:17:44 crc kubenswrapper[5039]: I0130 13:17:44.645293 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-b8fk6" event={"ID":"c4341387-fba2-41e9-a279-5c1071b11a2d","Type":"ContainerStarted","Data":"2718f468696b262cc9b806e5b410959eb6a5887952ffd41e4b3525ee6fa32086"} Jan 30 13:17:44 crc kubenswrapper[5039]: I0130 13:17:44.662915 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-b8fk6" podStartSLOduration=1.495174663 podStartE2EDuration="3.662895416s" podCreationTimestamp="2026-01-30 13:17:41 +0000 UTC" firstStartedPulling="2026-01-30 13:17:41.985798214 +0000 UTC m=+826.646479441" lastFinishedPulling="2026-01-30 13:17:44.153518967 +0000 UTC m=+828.814200194" observedRunningTime="2026-01-30 13:17:44.657809068 +0000 UTC m=+829.318490315" watchObservedRunningTime="2026-01-30 13:17:44.662895416 +0000 UTC m=+829.323576643" Jan 30 13:17:46 crc kubenswrapper[5039]: I0130 13:17:46.061449 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pwcgm"] Jan 30 13:17:46 crc kubenswrapper[5039]: I0130 13:17:46.654865 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pwcgm" podUID="9352f658-903f-48dc-8f81-30f357eae6c0" containerName="registry-server" containerID="cri-o://25b5c01a470ee2bcb74b91a7441ba6bb9bac007192bfc36a51fdc59ce4d11269" gracePeriod=2 Jan 30 13:17:47 crc kubenswrapper[5039]: I0130 13:17:47.663417 5039 generic.go:334] "Generic (PLEG): container finished" podID="9352f658-903f-48dc-8f81-30f357eae6c0" containerID="25b5c01a470ee2bcb74b91a7441ba6bb9bac007192bfc36a51fdc59ce4d11269" exitCode=0 Jan 30 13:17:47 crc kubenswrapper[5039]: I0130 13:17:47.663452 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pwcgm" event={"ID":"9352f658-903f-48dc-8f81-30f357eae6c0","Type":"ContainerDied","Data":"25b5c01a470ee2bcb74b91a7441ba6bb9bac007192bfc36a51fdc59ce4d11269"} Jan 30 13:17:47 crc kubenswrapper[5039]: I0130 13:17:47.724123 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pwcgm" Jan 30 13:17:47 crc kubenswrapper[5039]: I0130 13:17:47.803404 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hm6z\" (UniqueName: \"kubernetes.io/projected/9352f658-903f-48dc-8f81-30f357eae6c0-kube-api-access-8hm6z\") pod \"9352f658-903f-48dc-8f81-30f357eae6c0\" (UID: \"9352f658-903f-48dc-8f81-30f357eae6c0\") " Jan 30 13:17:47 crc kubenswrapper[5039]: I0130 13:17:47.803454 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9352f658-903f-48dc-8f81-30f357eae6c0-utilities\") pod \"9352f658-903f-48dc-8f81-30f357eae6c0\" (UID: \"9352f658-903f-48dc-8f81-30f357eae6c0\") " Jan 30 13:17:47 crc kubenswrapper[5039]: I0130 13:17:47.803537 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9352f658-903f-48dc-8f81-30f357eae6c0-catalog-content\") pod \"9352f658-903f-48dc-8f81-30f357eae6c0\" (UID: \"9352f658-903f-48dc-8f81-30f357eae6c0\") " Jan 30 13:17:47 crc kubenswrapper[5039]: I0130 13:17:47.804583 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9352f658-903f-48dc-8f81-30f357eae6c0-utilities" (OuterVolumeSpecName: "utilities") pod "9352f658-903f-48dc-8f81-30f357eae6c0" (UID: "9352f658-903f-48dc-8f81-30f357eae6c0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:17:47 crc kubenswrapper[5039]: I0130 13:17:47.809961 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9352f658-903f-48dc-8f81-30f357eae6c0-kube-api-access-8hm6z" (OuterVolumeSpecName: "kube-api-access-8hm6z") pod "9352f658-903f-48dc-8f81-30f357eae6c0" (UID: "9352f658-903f-48dc-8f81-30f357eae6c0"). InnerVolumeSpecName "kube-api-access-8hm6z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:17:47 crc kubenswrapper[5039]: I0130 13:17:47.905133 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hm6z\" (UniqueName: \"kubernetes.io/projected/9352f658-903f-48dc-8f81-30f357eae6c0-kube-api-access-8hm6z\") on node \"crc\" DevicePath \"\"" Jan 30 13:17:47 crc kubenswrapper[5039]: I0130 13:17:47.905363 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9352f658-903f-48dc-8f81-30f357eae6c0-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 13:17:47 crc kubenswrapper[5039]: I0130 13:17:47.946083 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9352f658-903f-48dc-8f81-30f357eae6c0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9352f658-903f-48dc-8f81-30f357eae6c0" (UID: "9352f658-903f-48dc-8f81-30f357eae6c0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:17:48 crc kubenswrapper[5039]: I0130 13:17:48.007202 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9352f658-903f-48dc-8f81-30f357eae6c0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:17:48 crc kubenswrapper[5039]: I0130 13:17:48.675139 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pwcgm" event={"ID":"9352f658-903f-48dc-8f81-30f357eae6c0","Type":"ContainerDied","Data":"2010f6264a0b06a6b9772112d9b1c70591e3e88bcc0d112fc4a129a2c150b9ac"} Jan 30 13:17:48 crc kubenswrapper[5039]: I0130 13:17:48.675542 5039 scope.go:117] "RemoveContainer" containerID="25b5c01a470ee2bcb74b91a7441ba6bb9bac007192bfc36a51fdc59ce4d11269" Jan 30 13:17:48 crc kubenswrapper[5039]: I0130 13:17:48.675254 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pwcgm" Jan 30 13:17:48 crc kubenswrapper[5039]: I0130 13:17:48.705640 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pwcgm"] Jan 30 13:17:48 crc kubenswrapper[5039]: I0130 13:17:48.707829 5039 scope.go:117] "RemoveContainer" containerID="89127f506b3e6e8a220f1eb2fe3573e58c0cc5ed722a3e5c71e19c3fa67f0129" Jan 30 13:17:48 crc kubenswrapper[5039]: I0130 13:17:48.709703 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pwcgm"] Jan 30 13:17:48 crc kubenswrapper[5039]: I0130 13:17:48.731953 5039 scope.go:117] "RemoveContainer" containerID="25968358191b115d7535468d4f568a7d5f7fa39f6028f133d913f2031e54d250" Jan 30 13:17:50 crc kubenswrapper[5039]: I0130 13:17:50.103830 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9352f658-903f-48dc-8f81-30f357eae6c0" path="/var/lib/kubelet/pods/9352f658-903f-48dc-8f81-30f357eae6c0/volumes" Jan 30 13:17:50 crc kubenswrapper[5039]: I0130 13:17:50.845163 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-mj7zw"] Jan 30 13:17:50 crc kubenswrapper[5039]: E0130 13:17:50.845401 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9352f658-903f-48dc-8f81-30f357eae6c0" containerName="extract-utilities" Jan 30 13:17:50 crc kubenswrapper[5039]: I0130 13:17:50.845416 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="9352f658-903f-48dc-8f81-30f357eae6c0" containerName="extract-utilities" Jan 30 13:17:50 crc kubenswrapper[5039]: E0130 13:17:50.845433 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9352f658-903f-48dc-8f81-30f357eae6c0" containerName="extract-content" Jan 30 13:17:50 crc kubenswrapper[5039]: I0130 13:17:50.845441 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="9352f658-903f-48dc-8f81-30f357eae6c0" containerName="extract-content" Jan 30 13:17:50 crc kubenswrapper[5039]: E0130 13:17:50.845455 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9352f658-903f-48dc-8f81-30f357eae6c0" containerName="registry-server" Jan 30 13:17:50 crc kubenswrapper[5039]: I0130 13:17:50.845462 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="9352f658-903f-48dc-8f81-30f357eae6c0" containerName="registry-server" Jan 30 13:17:50 crc kubenswrapper[5039]: I0130 13:17:50.845551 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="9352f658-903f-48dc-8f81-30f357eae6c0" containerName="registry-server" Jan 30 13:17:50 
crc kubenswrapper[5039]: I0130 13:17:50.846079 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-mj7zw" Jan 30 13:17:50 crc kubenswrapper[5039]: I0130 13:17:50.847964 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-8jbgv" Jan 30 13:17:50 crc kubenswrapper[5039]: I0130 13:17:50.859613 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-8jq59"] Jan 30 13:17:50 crc kubenswrapper[5039]: I0130 13:17:50.860451 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-8jq59" Jan 30 13:17:50 crc kubenswrapper[5039]: I0130 13:17:50.862705 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 30 13:17:50 crc kubenswrapper[5039]: I0130 13:17:50.872735 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-mj7zw"] Jan 30 13:17:50 crc kubenswrapper[5039]: I0130 13:17:50.879452 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-5ccgw"] Jan 30 13:17:50 crc kubenswrapper[5039]: I0130 13:17:50.880290 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-5ccgw" Jan 30 13:17:50 crc kubenswrapper[5039]: I0130 13:17:50.889230 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-8jq59"] Jan 30 13:17:50 crc kubenswrapper[5039]: I0130 13:17:50.943844 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b8b725bf-ea88-45d2-a03b-94c281cc3842-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-8jq59\" (UID: \"b8b725bf-ea88-45d2-a03b-94c281cc3842\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-8jq59" Jan 30 13:17:50 crc kubenswrapper[5039]: I0130 13:17:50.943942 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tl7dz\" (UniqueName: \"kubernetes.io/projected/b8b725bf-ea88-45d2-a03b-94c281cc3842-kube-api-access-tl7dz\") pod \"nmstate-webhook-8474b5b9d8-8jq59\" (UID: \"b8b725bf-ea88-45d2-a03b-94c281cc3842\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-8jq59" Jan 30 13:17:50 crc kubenswrapper[5039]: I0130 13:17:50.943963 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdt65\" (UniqueName: \"kubernetes.io/projected/05349ae8-13b7-45d0-beb2-5a14eeae995f-kube-api-access-vdt65\") pod \"nmstate-metrics-54757c584b-mj7zw\" (UID: \"05349ae8-13b7-45d0-beb2-5a14eeae995f\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-mj7zw" Jan 30 13:17:50 crc kubenswrapper[5039]: I0130 13:17:50.967932 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-nb88j"] Jan 30 13:17:50 crc kubenswrapper[5039]: I0130 13:17:50.968765 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nb88j" Jan 30 13:17:50 crc kubenswrapper[5039]: I0130 13:17:50.971909 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 30 13:17:50 crc kubenswrapper[5039]: I0130 13:17:50.972154 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-lfbqr" Jan 30 13:17:50 crc kubenswrapper[5039]: I0130 13:17:50.972377 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.010824 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-nb88j"] Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.045005 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5306d4b9-35eb-45b6-b2d5-3ab361b8bcb9-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-nb88j\" (UID: \"5306d4b9-35eb-45b6-b2d5-3ab361b8bcb9\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nb88j" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.045097 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r25m\" (UniqueName: \"kubernetes.io/projected/98342032-bce0-478a-b809-b9af50125cbf-kube-api-access-4r25m\") pod \"nmstate-handler-5ccgw\" (UID: \"98342032-bce0-478a-b809-b9af50125cbf\") " pod="openshift-nmstate/nmstate-handler-5ccgw" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.045194 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/98342032-bce0-478a-b809-b9af50125cbf-ovs-socket\") pod \"nmstate-handler-5ccgw\" (UID: \"98342032-bce0-478a-b809-b9af50125cbf\") " pod="openshift-nmstate/nmstate-handler-5ccgw" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.045239 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz9fh\" (UniqueName: \"kubernetes.io/projected/5306d4b9-35eb-45b6-b2d5-3ab361b8bcb9-kube-api-access-wz9fh\") pod \"nmstate-console-plugin-7754f76f8b-nb88j\" (UID: \"5306d4b9-35eb-45b6-b2d5-3ab361b8bcb9\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nb88j" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.045274 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/98342032-bce0-478a-b809-b9af50125cbf-dbus-socket\") pod \"nmstate-handler-5ccgw\" (UID: \"98342032-bce0-478a-b809-b9af50125cbf\") " pod="openshift-nmstate/nmstate-handler-5ccgw" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.045351 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tl7dz\" (UniqueName: \"kubernetes.io/projected/b8b725bf-ea88-45d2-a03b-94c281cc3842-kube-api-access-tl7dz\") pod \"nmstate-webhook-8474b5b9d8-8jq59\" (UID: \"b8b725bf-ea88-45d2-a03b-94c281cc3842\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-8jq59" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.045378 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdt65\" (UniqueName: 
\"kubernetes.io/projected/05349ae8-13b7-45d0-beb2-5a14eeae995f-kube-api-access-vdt65\") pod \"nmstate-metrics-54757c584b-mj7zw\" (UID: \"05349ae8-13b7-45d0-beb2-5a14eeae995f\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-mj7zw" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.045402 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5306d4b9-35eb-45b6-b2d5-3ab361b8bcb9-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-nb88j\" (UID: \"5306d4b9-35eb-45b6-b2d5-3ab361b8bcb9\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nb88j" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.045465 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/98342032-bce0-478a-b809-b9af50125cbf-nmstate-lock\") pod \"nmstate-handler-5ccgw\" (UID: \"98342032-bce0-478a-b809-b9af50125cbf\") " pod="openshift-nmstate/nmstate-handler-5ccgw" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.045518 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b8b725bf-ea88-45d2-a03b-94c281cc3842-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-8jq59\" (UID: \"b8b725bf-ea88-45d2-a03b-94c281cc3842\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-8jq59" Jan 30 13:17:51 crc kubenswrapper[5039]: E0130 13:17:51.045607 5039 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 30 13:17:51 crc kubenswrapper[5039]: E0130 13:17:51.045673 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b8b725bf-ea88-45d2-a03b-94c281cc3842-tls-key-pair podName:b8b725bf-ea88-45d2-a03b-94c281cc3842 nodeName:}" failed. No retries permitted until 2026-01-30 13:17:51.545654132 +0000 UTC m=+836.206335359 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/b8b725bf-ea88-45d2-a03b-94c281cc3842-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-8jq59" (UID: "b8b725bf-ea88-45d2-a03b-94c281cc3842") : secret "openshift-nmstate-webhook" not found Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.072521 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tl7dz\" (UniqueName: \"kubernetes.io/projected/b8b725bf-ea88-45d2-a03b-94c281cc3842-kube-api-access-tl7dz\") pod \"nmstate-webhook-8474b5b9d8-8jq59\" (UID: \"b8b725bf-ea88-45d2-a03b-94c281cc3842\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-8jq59" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.075198 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdt65\" (UniqueName: \"kubernetes.io/projected/05349ae8-13b7-45d0-beb2-5a14eeae995f-kube-api-access-vdt65\") pod \"nmstate-metrics-54757c584b-mj7zw\" (UID: \"05349ae8-13b7-45d0-beb2-5a14eeae995f\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-mj7zw" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.147075 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5306d4b9-35eb-45b6-b2d5-3ab361b8bcb9-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-nb88j\" (UID: \"5306d4b9-35eb-45b6-b2d5-3ab361b8bcb9\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nb88j" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.147455 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4r25m\" (UniqueName: \"kubernetes.io/projected/98342032-bce0-478a-b809-b9af50125cbf-kube-api-access-4r25m\") pod \"nmstate-handler-5ccgw\" (UID: \"98342032-bce0-478a-b809-b9af50125cbf\") " pod="openshift-nmstate/nmstate-handler-5ccgw" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.147497 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/98342032-bce0-478a-b809-b9af50125cbf-ovs-socket\") pod \"nmstate-handler-5ccgw\" (UID: \"98342032-bce0-478a-b809-b9af50125cbf\") " pod="openshift-nmstate/nmstate-handler-5ccgw" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.147523 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wz9fh\" (UniqueName: \"kubernetes.io/projected/5306d4b9-35eb-45b6-b2d5-3ab361b8bcb9-kube-api-access-wz9fh\") pod \"nmstate-console-plugin-7754f76f8b-nb88j\" (UID: \"5306d4b9-35eb-45b6-b2d5-3ab361b8bcb9\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nb88j" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.147549 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/98342032-bce0-478a-b809-b9af50125cbf-dbus-socket\") pod \"nmstate-handler-5ccgw\" (UID: \"98342032-bce0-478a-b809-b9af50125cbf\") " pod="openshift-nmstate/nmstate-handler-5ccgw" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.147584 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5306d4b9-35eb-45b6-b2d5-3ab361b8bcb9-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-nb88j\" (UID: \"5306d4b9-35eb-45b6-b2d5-3ab361b8bcb9\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nb88j" Jan 30 13:17:51 crc 
kubenswrapper[5039]: I0130 13:17:51.147598 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/98342032-bce0-478a-b809-b9af50125cbf-ovs-socket\") pod \"nmstate-handler-5ccgw\" (UID: \"98342032-bce0-478a-b809-b9af50125cbf\") " pod="openshift-nmstate/nmstate-handler-5ccgw" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.147625 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/98342032-bce0-478a-b809-b9af50125cbf-nmstate-lock\") pod \"nmstate-handler-5ccgw\" (UID: \"98342032-bce0-478a-b809-b9af50125cbf\") " pod="openshift-nmstate/nmstate-handler-5ccgw" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.147760 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/98342032-bce0-478a-b809-b9af50125cbf-nmstate-lock\") pod \"nmstate-handler-5ccgw\" (UID: \"98342032-bce0-478a-b809-b9af50125cbf\") " pod="openshift-nmstate/nmstate-handler-5ccgw" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.147823 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/98342032-bce0-478a-b809-b9af50125cbf-dbus-socket\") pod \"nmstate-handler-5ccgw\" (UID: \"98342032-bce0-478a-b809-b9af50125cbf\") " pod="openshift-nmstate/nmstate-handler-5ccgw" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.147948 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5306d4b9-35eb-45b6-b2d5-3ab361b8bcb9-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-nb88j\" (UID: \"5306d4b9-35eb-45b6-b2d5-3ab361b8bcb9\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nb88j" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.159639 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5306d4b9-35eb-45b6-b2d5-3ab361b8bcb9-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-nb88j\" (UID: \"5306d4b9-35eb-45b6-b2d5-3ab361b8bcb9\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nb88j" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.162603 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-mj7zw" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.171233 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4r25m\" (UniqueName: \"kubernetes.io/projected/98342032-bce0-478a-b809-b9af50125cbf-kube-api-access-4r25m\") pod \"nmstate-handler-5ccgw\" (UID: \"98342032-bce0-478a-b809-b9af50125cbf\") " pod="openshift-nmstate/nmstate-handler-5ccgw" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.188987 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wz9fh\" (UniqueName: \"kubernetes.io/projected/5306d4b9-35eb-45b6-b2d5-3ab361b8bcb9-kube-api-access-wz9fh\") pod \"nmstate-console-plugin-7754f76f8b-nb88j\" (UID: \"5306d4b9-35eb-45b6-b2d5-3ab361b8bcb9\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nb88j" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.197174 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-5ccgw" Jan 30 13:17:51 crc kubenswrapper[5039]: W0130 13:17:51.243231 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98342032_bce0_478a_b809_b9af50125cbf.slice/crio-4f2f2c776c5a93e79e77324d5005857debee5e1bb9be5e3f0d1d0f75aae20455 WatchSource:0}: Error finding container 4f2f2c776c5a93e79e77324d5005857debee5e1bb9be5e3f0d1d0f75aae20455: Status 404 returned error can't find the container with id 4f2f2c776c5a93e79e77324d5005857debee5e1bb9be5e3f0d1d0f75aae20455 Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.248004 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7d449f8d68-n5vvc"] Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.249200 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7d449f8d68-n5vvc" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.266065 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7d449f8d68-n5vvc"] Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.286289 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nb88j" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.349664 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/280101ef-77c9-4c4a-b0a2-e989319100f5-console-config\") pod \"console-7d449f8d68-n5vvc\" (UID: \"280101ef-77c9-4c4a-b0a2-e989319100f5\") " pod="openshift-console/console-7d449f8d68-n5vvc" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.349970 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/280101ef-77c9-4c4a-b0a2-e989319100f5-console-oauth-config\") pod \"console-7d449f8d68-n5vvc\" (UID: \"280101ef-77c9-4c4a-b0a2-e989319100f5\") " pod="openshift-console/console-7d449f8d68-n5vvc" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.350049 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/280101ef-77c9-4c4a-b0a2-e989319100f5-trusted-ca-bundle\") pod \"console-7d449f8d68-n5vvc\" (UID: \"280101ef-77c9-4c4a-b0a2-e989319100f5\") " pod="openshift-console/console-7d449f8d68-n5vvc" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.350069 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/280101ef-77c9-4c4a-b0a2-e989319100f5-console-serving-cert\") pod \"console-7d449f8d68-n5vvc\" (UID: \"280101ef-77c9-4c4a-b0a2-e989319100f5\") " pod="openshift-console/console-7d449f8d68-n5vvc" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.350120 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwl4h\" (UniqueName: \"kubernetes.io/projected/280101ef-77c9-4c4a-b0a2-e989319100f5-kube-api-access-fwl4h\") pod \"console-7d449f8d68-n5vvc\" (UID: \"280101ef-77c9-4c4a-b0a2-e989319100f5\") " pod="openshift-console/console-7d449f8d68-n5vvc" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.350165 5039 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/280101ef-77c9-4c4a-b0a2-e989319100f5-service-ca\") pod \"console-7d449f8d68-n5vvc\" (UID: \"280101ef-77c9-4c4a-b0a2-e989319100f5\") " pod="openshift-console/console-7d449f8d68-n5vvc" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.350310 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/280101ef-77c9-4c4a-b0a2-e989319100f5-oauth-serving-cert\") pod \"console-7d449f8d68-n5vvc\" (UID: \"280101ef-77c9-4c4a-b0a2-e989319100f5\") " pod="openshift-console/console-7d449f8d68-n5vvc" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.451881 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/280101ef-77c9-4c4a-b0a2-e989319100f5-console-config\") pod \"console-7d449f8d68-n5vvc\" (UID: \"280101ef-77c9-4c4a-b0a2-e989319100f5\") " pod="openshift-console/console-7d449f8d68-n5vvc" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.451968 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/280101ef-77c9-4c4a-b0a2-e989319100f5-console-oauth-config\") pod \"console-7d449f8d68-n5vvc\" (UID: \"280101ef-77c9-4c4a-b0a2-e989319100f5\") " pod="openshift-console/console-7d449f8d68-n5vvc" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.452043 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/280101ef-77c9-4c4a-b0a2-e989319100f5-trusted-ca-bundle\") pod \"console-7d449f8d68-n5vvc\" (UID: \"280101ef-77c9-4c4a-b0a2-e989319100f5\") " pod="openshift-console/console-7d449f8d68-n5vvc" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.452071 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/280101ef-77c9-4c4a-b0a2-e989319100f5-console-serving-cert\") pod \"console-7d449f8d68-n5vvc\" (UID: \"280101ef-77c9-4c4a-b0a2-e989319100f5\") " pod="openshift-console/console-7d449f8d68-n5vvc" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.452113 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwl4h\" (UniqueName: \"kubernetes.io/projected/280101ef-77c9-4c4a-b0a2-e989319100f5-kube-api-access-fwl4h\") pod \"console-7d449f8d68-n5vvc\" (UID: \"280101ef-77c9-4c4a-b0a2-e989319100f5\") " pod="openshift-console/console-7d449f8d68-n5vvc" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.452152 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/280101ef-77c9-4c4a-b0a2-e989319100f5-service-ca\") pod \"console-7d449f8d68-n5vvc\" (UID: \"280101ef-77c9-4c4a-b0a2-e989319100f5\") " pod="openshift-console/console-7d449f8d68-n5vvc" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.452181 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/280101ef-77c9-4c4a-b0a2-e989319100f5-oauth-serving-cert\") pod \"console-7d449f8d68-n5vvc\" (UID: \"280101ef-77c9-4c4a-b0a2-e989319100f5\") " pod="openshift-console/console-7d449f8d68-n5vvc" Jan 30 13:17:51 crc kubenswrapper[5039]: W0130 13:17:51.452536 5039 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05349ae8_13b7_45d0_beb2_5a14eeae995f.slice/crio-ae7cbbae9412f44dde86aac52d121948f84e84742ecdc3e21d1b509c24e5a727 WatchSource:0}: Error finding container ae7cbbae9412f44dde86aac52d121948f84e84742ecdc3e21d1b509c24e5a727: Status 404 returned error can't find the container with id ae7cbbae9412f44dde86aac52d121948f84e84742ecdc3e21d1b509c24e5a727 Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.453047 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/280101ef-77c9-4c4a-b0a2-e989319100f5-console-config\") pod \"console-7d449f8d68-n5vvc\" (UID: \"280101ef-77c9-4c4a-b0a2-e989319100f5\") " pod="openshift-console/console-7d449f8d68-n5vvc" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.453371 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/280101ef-77c9-4c4a-b0a2-e989319100f5-oauth-serving-cert\") pod \"console-7d449f8d68-n5vvc\" (UID: \"280101ef-77c9-4c4a-b0a2-e989319100f5\") " pod="openshift-console/console-7d449f8d68-n5vvc" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.454704 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/280101ef-77c9-4c4a-b0a2-e989319100f5-trusted-ca-bundle\") pod \"console-7d449f8d68-n5vvc\" (UID: \"280101ef-77c9-4c4a-b0a2-e989319100f5\") " pod="openshift-console/console-7d449f8d68-n5vvc" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.454896 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/280101ef-77c9-4c4a-b0a2-e989319100f5-service-ca\") pod \"console-7d449f8d68-n5vvc\" (UID: \"280101ef-77c9-4c4a-b0a2-e989319100f5\") " pod="openshift-console/console-7d449f8d68-n5vvc" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.455119 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-mj7zw"] Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.458760 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/280101ef-77c9-4c4a-b0a2-e989319100f5-console-oauth-config\") pod \"console-7d449f8d68-n5vvc\" (UID: \"280101ef-77c9-4c4a-b0a2-e989319100f5\") " pod="openshift-console/console-7d449f8d68-n5vvc" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.459348 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/280101ef-77c9-4c4a-b0a2-e989319100f5-console-serving-cert\") pod \"console-7d449f8d68-n5vvc\" (UID: \"280101ef-77c9-4c4a-b0a2-e989319100f5\") " pod="openshift-console/console-7d449f8d68-n5vvc" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.474105 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwl4h\" (UniqueName: \"kubernetes.io/projected/280101ef-77c9-4c4a-b0a2-e989319100f5-kube-api-access-fwl4h\") pod \"console-7d449f8d68-n5vvc\" (UID: \"280101ef-77c9-4c4a-b0a2-e989319100f5\") " pod="openshift-console/console-7d449f8d68-n5vvc" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.508181 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-nb88j"] Jan 30 13:17:51 crc 
kubenswrapper[5039]: W0130 13:17:51.514933 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5306d4b9_35eb_45b6_b2d5_3ab361b8bcb9.slice/crio-25b3b2e81ee21ea185fb7a5ea893c5c49a382697472994b294859aded20e99a0 WatchSource:0}: Error finding container 25b3b2e81ee21ea185fb7a5ea893c5c49a382697472994b294859aded20e99a0: Status 404 returned error can't find the container with id 25b3b2e81ee21ea185fb7a5ea893c5c49a382697472994b294859aded20e99a0 Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.553499 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b8b725bf-ea88-45d2-a03b-94c281cc3842-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-8jq59\" (UID: \"b8b725bf-ea88-45d2-a03b-94c281cc3842\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-8jq59" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.556304 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b8b725bf-ea88-45d2-a03b-94c281cc3842-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-8jq59\" (UID: \"b8b725bf-ea88-45d2-a03b-94c281cc3842\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-8jq59" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.576535 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7d449f8d68-n5vvc" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.692312 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-mj7zw" event={"ID":"05349ae8-13b7-45d0-beb2-5a14eeae995f","Type":"ContainerStarted","Data":"ae7cbbae9412f44dde86aac52d121948f84e84742ecdc3e21d1b509c24e5a727"} Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.693209 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-5ccgw" event={"ID":"98342032-bce0-478a-b809-b9af50125cbf","Type":"ContainerStarted","Data":"4f2f2c776c5a93e79e77324d5005857debee5e1bb9be5e3f0d1d0f75aae20455"} Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.694089 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nb88j" event={"ID":"5306d4b9-35eb-45b6-b2d5-3ab361b8bcb9","Type":"ContainerStarted","Data":"25b3b2e81ee21ea185fb7a5ea893c5c49a382697472994b294859aded20e99a0"} Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.742221 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7d449f8d68-n5vvc"] Jan 30 13:17:51 crc kubenswrapper[5039]: W0130 13:17:51.749310 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod280101ef_77c9_4c4a_b0a2_e989319100f5.slice/crio-6a3c3757086889413096313fd883787f8b0188c7eccf61432ffb9d91baa73343 WatchSource:0}: Error finding container 6a3c3757086889413096313fd883787f8b0188c7eccf61432ffb9d91baa73343: Status 404 returned error can't find the container with id 6a3c3757086889413096313fd883787f8b0188c7eccf61432ffb9d91baa73343 Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.782800 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-8jq59" Jan 30 13:17:52 crc kubenswrapper[5039]: I0130 13:17:52.006364 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-8jq59"] Jan 30 13:17:52 crc kubenswrapper[5039]: W0130 13:17:52.011909 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8b725bf_ea88_45d2_a03b_94c281cc3842.slice/crio-df410724b816467cf69b0dcd9bb49857da7bcbb95873320a39b2dd4c58e7e8d4 WatchSource:0}: Error finding container df410724b816467cf69b0dcd9bb49857da7bcbb95873320a39b2dd4c58e7e8d4: Status 404 returned error can't find the container with id df410724b816467cf69b0dcd9bb49857da7bcbb95873320a39b2dd4c58e7e8d4 Jan 30 13:17:52 crc kubenswrapper[5039]: I0130 13:17:52.702132 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7d449f8d68-n5vvc" event={"ID":"280101ef-77c9-4c4a-b0a2-e989319100f5","Type":"ContainerStarted","Data":"581f143ba765a6ac6ac5f0271c59f647e9508fcb937d4b9b31d90cc7ad50a29e"} Jan 30 13:17:52 crc kubenswrapper[5039]: I0130 13:17:52.702189 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7d449f8d68-n5vvc" event={"ID":"280101ef-77c9-4c4a-b0a2-e989319100f5","Type":"ContainerStarted","Data":"6a3c3757086889413096313fd883787f8b0188c7eccf61432ffb9d91baa73343"} Jan 30 13:17:52 crc kubenswrapper[5039]: I0130 13:17:52.706485 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-8jq59" event={"ID":"b8b725bf-ea88-45d2-a03b-94c281cc3842","Type":"ContainerStarted","Data":"df410724b816467cf69b0dcd9bb49857da7bcbb95873320a39b2dd4c58e7e8d4"} Jan 30 13:17:52 crc kubenswrapper[5039]: I0130 13:17:52.730905 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7d449f8d68-n5vvc" podStartSLOduration=1.730880854 podStartE2EDuration="1.730880854s" podCreationTimestamp="2026-01-30 13:17:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:17:52.723506263 +0000 UTC m=+837.384187530" watchObservedRunningTime="2026-01-30 13:17:52.730880854 +0000 UTC m=+837.391562091" Jan 30 13:17:54 crc kubenswrapper[5039]: I0130 13:17:54.721497 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-5ccgw" event={"ID":"98342032-bce0-478a-b809-b9af50125cbf","Type":"ContainerStarted","Data":"eaea91d9bebd8966fe3ec807e82fd9599d86b7099f82b82e9df63d91de394dc9"} Jan 30 13:17:54 crc kubenswrapper[5039]: I0130 13:17:54.722126 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-5ccgw" Jan 30 13:17:54 crc kubenswrapper[5039]: I0130 13:17:54.723297 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nb88j" event={"ID":"5306d4b9-35eb-45b6-b2d5-3ab361b8bcb9","Type":"ContainerStarted","Data":"c7b189bf18118999cec0f2479a5cbfd478e09b4cf31ccf70386ffdb079a2fa99"} Jan 30 13:17:54 crc kubenswrapper[5039]: I0130 13:17:54.725185 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-mj7zw" event={"ID":"05349ae8-13b7-45d0-beb2-5a14eeae995f","Type":"ContainerStarted","Data":"f3a7d9bbdef3d6defb7703b59ca67ab3bad8522aa8acd4cf27a4c81162db1077"} Jan 30 13:17:54 crc kubenswrapper[5039]: I0130 13:17:54.726939 
5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-8jq59" event={"ID":"b8b725bf-ea88-45d2-a03b-94c281cc3842","Type":"ContainerStarted","Data":"a3c4eb1eca517a8aa3dcfdac991dc7c4f6fd01ccad07fa00583f4ce7c77ae57a"} Jan 30 13:17:54 crc kubenswrapper[5039]: I0130 13:17:54.727179 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-8jq59" Jan 30 13:17:54 crc kubenswrapper[5039]: I0130 13:17:54.736158 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-5ccgw" podStartSLOduration=1.628928017 podStartE2EDuration="4.736139255s" podCreationTimestamp="2026-01-30 13:17:50 +0000 UTC" firstStartedPulling="2026-01-30 13:17:51.266748564 +0000 UTC m=+835.927429801" lastFinishedPulling="2026-01-30 13:17:54.373959792 +0000 UTC m=+839.034641039" observedRunningTime="2026-01-30 13:17:54.735387245 +0000 UTC m=+839.396068482" watchObservedRunningTime="2026-01-30 13:17:54.736139255 +0000 UTC m=+839.396820492" Jan 30 13:17:54 crc kubenswrapper[5039]: I0130 13:17:54.750535 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nb88j" podStartSLOduration=1.912252188 podStartE2EDuration="4.750511085s" podCreationTimestamp="2026-01-30 13:17:50 +0000 UTC" firstStartedPulling="2026-01-30 13:17:51.517368458 +0000 UTC m=+836.178049695" lastFinishedPulling="2026-01-30 13:17:54.355627335 +0000 UTC m=+839.016308592" observedRunningTime="2026-01-30 13:17:54.747450382 +0000 UTC m=+839.408131629" watchObservedRunningTime="2026-01-30 13:17:54.750511085 +0000 UTC m=+839.411192312" Jan 30 13:17:54 crc kubenswrapper[5039]: I0130 13:17:54.781358 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-8jq59" podStartSLOduration=2.43423276 podStartE2EDuration="4.781336052s" podCreationTimestamp="2026-01-30 13:17:50 +0000 UTC" firstStartedPulling="2026-01-30 13:17:52.02756985 +0000 UTC m=+836.688251077" lastFinishedPulling="2026-01-30 13:17:54.374673142 +0000 UTC m=+839.035354369" observedRunningTime="2026-01-30 13:17:54.777090867 +0000 UTC m=+839.437772114" watchObservedRunningTime="2026-01-30 13:17:54.781336052 +0000 UTC m=+839.442017289" Jan 30 13:18:01 crc kubenswrapper[5039]: I0130 13:18:01.228271 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-5ccgw" Jan 30 13:18:01 crc kubenswrapper[5039]: I0130 13:18:01.577427 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7d449f8d68-n5vvc" Jan 30 13:18:01 crc kubenswrapper[5039]: I0130 13:18:01.578153 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7d449f8d68-n5vvc" Jan 30 13:18:01 crc kubenswrapper[5039]: I0130 13:18:01.582390 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7d449f8d68-n5vvc" Jan 30 13:18:01 crc kubenswrapper[5039]: I0130 13:18:01.774876 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7d449f8d68-n5vvc" Jan 30 13:18:01 crc kubenswrapper[5039]: I0130 13:18:01.822256 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-2cmnb"] Jan 30 13:18:05 crc kubenswrapper[5039]: I0130 13:18:05.799914 5039 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-mj7zw" event={"ID":"05349ae8-13b7-45d0-beb2-5a14eeae995f","Type":"ContainerStarted","Data":"8e272b85be85700a131da59bca48d7e8c363b12b368505e680e37ac8d76c042f"} Jan 30 13:18:05 crc kubenswrapper[5039]: I0130 13:18:05.828418 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-mj7zw" podStartSLOduration=2.6257586760000002 podStartE2EDuration="15.828389639s" podCreationTimestamp="2026-01-30 13:17:50 +0000 UTC" firstStartedPulling="2026-01-30 13:17:51.45703562 +0000 UTC m=+836.117716847" lastFinishedPulling="2026-01-30 13:18:04.659666583 +0000 UTC m=+849.320347810" observedRunningTime="2026-01-30 13:18:05.826388866 +0000 UTC m=+850.487070103" watchObservedRunningTime="2026-01-30 13:18:05.828389639 +0000 UTC m=+850.489070896" Jan 30 13:18:07 crc kubenswrapper[5039]: I0130 13:18:07.742817 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:18:07 crc kubenswrapper[5039]: I0130 13:18:07.743263 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:18:07 crc kubenswrapper[5039]: I0130 13:18:07.743328 5039 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" Jan 30 13:18:07 crc kubenswrapper[5039]: I0130 13:18:07.744209 5039 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dedbd81127092d3084480626ab10e6f0037d218190f1d21a46aaffac18d8903c"} pod="openshift-machine-config-operator/machine-config-daemon-t2btn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 13:18:07 crc kubenswrapper[5039]: I0130 13:18:07.744305 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" containerID="cri-o://dedbd81127092d3084480626ab10e6f0037d218190f1d21a46aaffac18d8903c" gracePeriod=600 Jan 30 13:18:08 crc kubenswrapper[5039]: I0130 13:18:08.823655 5039 generic.go:334] "Generic (PLEG): container finished" podID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerID="dedbd81127092d3084480626ab10e6f0037d218190f1d21a46aaffac18d8903c" exitCode=0 Jan 30 13:18:08 crc kubenswrapper[5039]: I0130 13:18:08.823765 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerDied","Data":"dedbd81127092d3084480626ab10e6f0037d218190f1d21a46aaffac18d8903c"} Jan 30 13:18:08 crc kubenswrapper[5039]: I0130 13:18:08.824150 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" 
event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerStarted","Data":"2ff7f77d739c9482a391687ff7929b8952cb2b486c1569c85a29b6ddbbdffffc"} Jan 30 13:18:08 crc kubenswrapper[5039]: I0130 13:18:08.824180 5039 scope.go:117] "RemoveContainer" containerID="560662c6d7483c88aebafefdba92626eb1886b5341dc13222aa008d4b7d631c7" Jan 30 13:18:11 crc kubenswrapper[5039]: I0130 13:18:11.791744 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-8jq59" Jan 30 13:18:24 crc kubenswrapper[5039]: I0130 13:18:24.808592 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw"] Jan 30 13:18:24 crc kubenswrapper[5039]: I0130 13:18:24.810443 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw" Jan 30 13:18:24 crc kubenswrapper[5039]: I0130 13:18:24.817331 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 30 13:18:24 crc kubenswrapper[5039]: I0130 13:18:24.821563 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw"] Jan 30 13:18:24 crc kubenswrapper[5039]: I0130 13:18:24.881438 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/41d9f5fc-68a0-4b15-83ec-e6c186ac4714-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw\" (UID: \"41d9f5fc-68a0-4b15-83ec-e6c186ac4714\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw" Jan 30 13:18:24 crc kubenswrapper[5039]: I0130 13:18:24.881498 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/41d9f5fc-68a0-4b15-83ec-e6c186ac4714-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw\" (UID: \"41d9f5fc-68a0-4b15-83ec-e6c186ac4714\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw" Jan 30 13:18:24 crc kubenswrapper[5039]: I0130 13:18:24.881534 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g54b2\" (UniqueName: \"kubernetes.io/projected/41d9f5fc-68a0-4b15-83ec-e6c186ac4714-kube-api-access-g54b2\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw\" (UID: \"41d9f5fc-68a0-4b15-83ec-e6c186ac4714\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw" Jan 30 13:18:24 crc kubenswrapper[5039]: I0130 13:18:24.982943 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/41d9f5fc-68a0-4b15-83ec-e6c186ac4714-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw\" (UID: \"41d9f5fc-68a0-4b15-83ec-e6c186ac4714\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw" Jan 30 13:18:24 crc kubenswrapper[5039]: I0130 13:18:24.983007 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/41d9f5fc-68a0-4b15-83ec-e6c186ac4714-bundle\") pod 
\"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw\" (UID: \"41d9f5fc-68a0-4b15-83ec-e6c186ac4714\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw" Jan 30 13:18:24 crc kubenswrapper[5039]: I0130 13:18:24.983077 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g54b2\" (UniqueName: \"kubernetes.io/projected/41d9f5fc-68a0-4b15-83ec-e6c186ac4714-kube-api-access-g54b2\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw\" (UID: \"41d9f5fc-68a0-4b15-83ec-e6c186ac4714\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw" Jan 30 13:18:24 crc kubenswrapper[5039]: I0130 13:18:24.983441 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/41d9f5fc-68a0-4b15-83ec-e6c186ac4714-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw\" (UID: \"41d9f5fc-68a0-4b15-83ec-e6c186ac4714\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw" Jan 30 13:18:24 crc kubenswrapper[5039]: I0130 13:18:24.983450 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/41d9f5fc-68a0-4b15-83ec-e6c186ac4714-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw\" (UID: \"41d9f5fc-68a0-4b15-83ec-e6c186ac4714\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw" Jan 30 13:18:25 crc kubenswrapper[5039]: I0130 13:18:25.009867 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g54b2\" (UniqueName: \"kubernetes.io/projected/41d9f5fc-68a0-4b15-83ec-e6c186ac4714-kube-api-access-g54b2\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw\" (UID: \"41d9f5fc-68a0-4b15-83ec-e6c186ac4714\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw" Jan 30 13:18:25 crc kubenswrapper[5039]: I0130 13:18:25.164825 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw" Jan 30 13:18:25 crc kubenswrapper[5039]: I0130 13:18:25.622263 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw"] Jan 30 13:18:25 crc kubenswrapper[5039]: W0130 13:18:25.629920 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41d9f5fc_68a0_4b15_83ec_e6c186ac4714.slice/crio-3ceb7c134a85c606d4166746d632fe8227262cbca1756d4082d71cdb495075d1 WatchSource:0}: Error finding container 3ceb7c134a85c606d4166746d632fe8227262cbca1756d4082d71cdb495075d1: Status 404 returned error can't find the container with id 3ceb7c134a85c606d4166746d632fe8227262cbca1756d4082d71cdb495075d1 Jan 30 13:18:25 crc kubenswrapper[5039]: I0130 13:18:25.945389 5039 generic.go:334] "Generic (PLEG): container finished" podID="41d9f5fc-68a0-4b15-83ec-e6c186ac4714" containerID="a84e1df57a9eb4c0a5820e28e8afcd956d64e659589cc77938234ebd26e32b86" exitCode=0 Jan 30 13:18:25 crc kubenswrapper[5039]: I0130 13:18:25.945645 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw" event={"ID":"41d9f5fc-68a0-4b15-83ec-e6c186ac4714","Type":"ContainerDied","Data":"a84e1df57a9eb4c0a5820e28e8afcd956d64e659589cc77938234ebd26e32b86"} Jan 30 13:18:25 crc kubenswrapper[5039]: I0130 13:18:25.945815 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw" event={"ID":"41d9f5fc-68a0-4b15-83ec-e6c186ac4714","Type":"ContainerStarted","Data":"3ceb7c134a85c606d4166746d632fe8227262cbca1756d4082d71cdb495075d1"} Jan 30 13:18:26 crc kubenswrapper[5039]: I0130 13:18:26.862558 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-2cmnb" podUID="c8a9040d-c9a7-48df-a786-0079713a7cdc" containerName="console" containerID="cri-o://d46cc435c83b023667cf88466639f9b10a2751c9a570724918ae8424a5c7e52d" gracePeriod=15 Jan 30 13:18:27 crc kubenswrapper[5039]: I0130 13:18:27.278983 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-2cmnb_c8a9040d-c9a7-48df-a786-0079713a7cdc/console/0.log" Jan 30 13:18:27 crc kubenswrapper[5039]: I0130 13:18:27.279300 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-2cmnb" Jan 30 13:18:27 crc kubenswrapper[5039]: I0130 13:18:27.416719 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c8a9040d-c9a7-48df-a786-0079713a7cdc-console-serving-cert\") pod \"c8a9040d-c9a7-48df-a786-0079713a7cdc\" (UID: \"c8a9040d-c9a7-48df-a786-0079713a7cdc\") " Jan 30 13:18:27 crc kubenswrapper[5039]: I0130 13:18:27.416810 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c8a9040d-c9a7-48df-a786-0079713a7cdc-trusted-ca-bundle\") pod \"c8a9040d-c9a7-48df-a786-0079713a7cdc\" (UID: \"c8a9040d-c9a7-48df-a786-0079713a7cdc\") " Jan 30 13:18:27 crc kubenswrapper[5039]: I0130 13:18:27.416867 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c8a9040d-c9a7-48df-a786-0079713a7cdc-service-ca\") pod \"c8a9040d-c9a7-48df-a786-0079713a7cdc\" (UID: \"c8a9040d-c9a7-48df-a786-0079713a7cdc\") " Jan 30 13:18:27 crc kubenswrapper[5039]: I0130 13:18:27.416895 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c8a9040d-c9a7-48df-a786-0079713a7cdc-oauth-serving-cert\") pod \"c8a9040d-c9a7-48df-a786-0079713a7cdc\" (UID: \"c8a9040d-c9a7-48df-a786-0079713a7cdc\") " Jan 30 13:18:27 crc kubenswrapper[5039]: I0130 13:18:27.416937 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c8a9040d-c9a7-48df-a786-0079713a7cdc-console-oauth-config\") pod \"c8a9040d-c9a7-48df-a786-0079713a7cdc\" (UID: \"c8a9040d-c9a7-48df-a786-0079713a7cdc\") " Jan 30 13:18:27 crc kubenswrapper[5039]: I0130 13:18:27.416968 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c8a9040d-c9a7-48df-a786-0079713a7cdc-console-config\") pod \"c8a9040d-c9a7-48df-a786-0079713a7cdc\" (UID: \"c8a9040d-c9a7-48df-a786-0079713a7cdc\") " Jan 30 13:18:27 crc kubenswrapper[5039]: I0130 13:18:27.416999 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjqgf\" (UniqueName: \"kubernetes.io/projected/c8a9040d-c9a7-48df-a786-0079713a7cdc-kube-api-access-mjqgf\") pod \"c8a9040d-c9a7-48df-a786-0079713a7cdc\" (UID: \"c8a9040d-c9a7-48df-a786-0079713a7cdc\") " Jan 30 13:18:27 crc kubenswrapper[5039]: I0130 13:18:27.417656 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8a9040d-c9a7-48df-a786-0079713a7cdc-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "c8a9040d-c9a7-48df-a786-0079713a7cdc" (UID: "c8a9040d-c9a7-48df-a786-0079713a7cdc"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:18:27 crc kubenswrapper[5039]: I0130 13:18:27.417671 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8a9040d-c9a7-48df-a786-0079713a7cdc-service-ca" (OuterVolumeSpecName: "service-ca") pod "c8a9040d-c9a7-48df-a786-0079713a7cdc" (UID: "c8a9040d-c9a7-48df-a786-0079713a7cdc"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:18:27 crc kubenswrapper[5039]: I0130 13:18:27.418046 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8a9040d-c9a7-48df-a786-0079713a7cdc-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "c8a9040d-c9a7-48df-a786-0079713a7cdc" (UID: "c8a9040d-c9a7-48df-a786-0079713a7cdc"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:18:27 crc kubenswrapper[5039]: I0130 13:18:27.418519 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8a9040d-c9a7-48df-a786-0079713a7cdc-console-config" (OuterVolumeSpecName: "console-config") pod "c8a9040d-c9a7-48df-a786-0079713a7cdc" (UID: "c8a9040d-c9a7-48df-a786-0079713a7cdc"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:18:27 crc kubenswrapper[5039]: I0130 13:18:27.425490 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8a9040d-c9a7-48df-a786-0079713a7cdc-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "c8a9040d-c9a7-48df-a786-0079713a7cdc" (UID: "c8a9040d-c9a7-48df-a786-0079713a7cdc"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:18:27 crc kubenswrapper[5039]: I0130 13:18:27.426330 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8a9040d-c9a7-48df-a786-0079713a7cdc-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "c8a9040d-c9a7-48df-a786-0079713a7cdc" (UID: "c8a9040d-c9a7-48df-a786-0079713a7cdc"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:18:27 crc kubenswrapper[5039]: I0130 13:18:27.426603 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8a9040d-c9a7-48df-a786-0079713a7cdc-kube-api-access-mjqgf" (OuterVolumeSpecName: "kube-api-access-mjqgf") pod "c8a9040d-c9a7-48df-a786-0079713a7cdc" (UID: "c8a9040d-c9a7-48df-a786-0079713a7cdc"). InnerVolumeSpecName "kube-api-access-mjqgf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:18:27 crc kubenswrapper[5039]: I0130 13:18:27.518041 5039 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c8a9040d-c9a7-48df-a786-0079713a7cdc-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:18:27 crc kubenswrapper[5039]: I0130 13:18:27.518077 5039 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c8a9040d-c9a7-48df-a786-0079713a7cdc-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:18:27 crc kubenswrapper[5039]: I0130 13:18:27.518092 5039 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c8a9040d-c9a7-48df-a786-0079713a7cdc-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:18:27 crc kubenswrapper[5039]: I0130 13:18:27.518102 5039 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c8a9040d-c9a7-48df-a786-0079713a7cdc-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:18:27 crc kubenswrapper[5039]: I0130 13:18:27.518114 5039 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c8a9040d-c9a7-48df-a786-0079713a7cdc-console-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:18:27 crc kubenswrapper[5039]: I0130 13:18:27.518124 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjqgf\" (UniqueName: \"kubernetes.io/projected/c8a9040d-c9a7-48df-a786-0079713a7cdc-kube-api-access-mjqgf\") on node \"crc\" DevicePath \"\"" Jan 30 13:18:27 crc kubenswrapper[5039]: I0130 13:18:27.518135 5039 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c8a9040d-c9a7-48df-a786-0079713a7cdc-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:18:27 crc kubenswrapper[5039]: I0130 13:18:27.963289 5039 generic.go:334] "Generic (PLEG): container finished" podID="41d9f5fc-68a0-4b15-83ec-e6c186ac4714" containerID="e00372cd10d989cf9737c57834e2bff9dc2d40a19ef04fe96f8ae392a11883b0" exitCode=0 Jan 30 13:18:27 crc kubenswrapper[5039]: I0130 13:18:27.963359 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw" event={"ID":"41d9f5fc-68a0-4b15-83ec-e6c186ac4714","Type":"ContainerDied","Data":"e00372cd10d989cf9737c57834e2bff9dc2d40a19ef04fe96f8ae392a11883b0"} Jan 30 13:18:27 crc kubenswrapper[5039]: I0130 13:18:27.965595 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-2cmnb_c8a9040d-c9a7-48df-a786-0079713a7cdc/console/0.log" Jan 30 13:18:27 crc kubenswrapper[5039]: I0130 13:18:27.965645 5039 generic.go:334] "Generic (PLEG): container finished" podID="c8a9040d-c9a7-48df-a786-0079713a7cdc" containerID="d46cc435c83b023667cf88466639f9b10a2751c9a570724918ae8424a5c7e52d" exitCode=2 Jan 30 13:18:27 crc kubenswrapper[5039]: I0130 13:18:27.965679 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-2cmnb" event={"ID":"c8a9040d-c9a7-48df-a786-0079713a7cdc","Type":"ContainerDied","Data":"d46cc435c83b023667cf88466639f9b10a2751c9a570724918ae8424a5c7e52d"} Jan 30 13:18:27 crc kubenswrapper[5039]: I0130 13:18:27.965710 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-2cmnb" 
event={"ID":"c8a9040d-c9a7-48df-a786-0079713a7cdc","Type":"ContainerDied","Data":"3e681b456647afe2d34de10f3608b1ac9a943d78d3dadd258eb17cf318629b2a"} Jan 30 13:18:27 crc kubenswrapper[5039]: I0130 13:18:27.965730 5039 scope.go:117] "RemoveContainer" containerID="d46cc435c83b023667cf88466639f9b10a2751c9a570724918ae8424a5c7e52d" Jan 30 13:18:27 crc kubenswrapper[5039]: I0130 13:18:27.965876 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-2cmnb" Jan 30 13:18:27 crc kubenswrapper[5039]: I0130 13:18:27.995680 5039 scope.go:117] "RemoveContainer" containerID="d46cc435c83b023667cf88466639f9b10a2751c9a570724918ae8424a5c7e52d" Jan 30 13:18:28 crc kubenswrapper[5039]: E0130 13:18:28.001081 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d46cc435c83b023667cf88466639f9b10a2751c9a570724918ae8424a5c7e52d\": container with ID starting with d46cc435c83b023667cf88466639f9b10a2751c9a570724918ae8424a5c7e52d not found: ID does not exist" containerID="d46cc435c83b023667cf88466639f9b10a2751c9a570724918ae8424a5c7e52d" Jan 30 13:18:28 crc kubenswrapper[5039]: I0130 13:18:28.001141 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d46cc435c83b023667cf88466639f9b10a2751c9a570724918ae8424a5c7e52d"} err="failed to get container status \"d46cc435c83b023667cf88466639f9b10a2751c9a570724918ae8424a5c7e52d\": rpc error: code = NotFound desc = could not find container \"d46cc435c83b023667cf88466639f9b10a2751c9a570724918ae8424a5c7e52d\": container with ID starting with d46cc435c83b023667cf88466639f9b10a2751c9a570724918ae8424a5c7e52d not found: ID does not exist" Jan 30 13:18:28 crc kubenswrapper[5039]: I0130 13:18:28.016863 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-2cmnb"] Jan 30 13:18:28 crc kubenswrapper[5039]: I0130 13:18:28.027051 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-2cmnb"] Jan 30 13:18:28 crc kubenswrapper[5039]: I0130 13:18:28.110231 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8a9040d-c9a7-48df-a786-0079713a7cdc" path="/var/lib/kubelet/pods/c8a9040d-c9a7-48df-a786-0079713a7cdc/volumes" Jan 30 13:18:28 crc kubenswrapper[5039]: I0130 13:18:28.980929 5039 generic.go:334] "Generic (PLEG): container finished" podID="41d9f5fc-68a0-4b15-83ec-e6c186ac4714" containerID="b8dd7e63da83feb17278987b3c49067bc507b7e2ba0a5c64cc10625dd8e606a2" exitCode=0 Jan 30 13:18:28 crc kubenswrapper[5039]: I0130 13:18:28.980993 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw" event={"ID":"41d9f5fc-68a0-4b15-83ec-e6c186ac4714","Type":"ContainerDied","Data":"b8dd7e63da83feb17278987b3c49067bc507b7e2ba0a5c64cc10625dd8e606a2"} Jan 30 13:18:30 crc kubenswrapper[5039]: I0130 13:18:30.282515 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw" Jan 30 13:18:30 crc kubenswrapper[5039]: I0130 13:18:30.466660 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/41d9f5fc-68a0-4b15-83ec-e6c186ac4714-util\") pod \"41d9f5fc-68a0-4b15-83ec-e6c186ac4714\" (UID: \"41d9f5fc-68a0-4b15-83ec-e6c186ac4714\") " Jan 30 13:18:30 crc kubenswrapper[5039]: I0130 13:18:30.466829 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/41d9f5fc-68a0-4b15-83ec-e6c186ac4714-bundle\") pod \"41d9f5fc-68a0-4b15-83ec-e6c186ac4714\" (UID: \"41d9f5fc-68a0-4b15-83ec-e6c186ac4714\") " Jan 30 13:18:30 crc kubenswrapper[5039]: I0130 13:18:30.466882 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g54b2\" (UniqueName: \"kubernetes.io/projected/41d9f5fc-68a0-4b15-83ec-e6c186ac4714-kube-api-access-g54b2\") pod \"41d9f5fc-68a0-4b15-83ec-e6c186ac4714\" (UID: \"41d9f5fc-68a0-4b15-83ec-e6c186ac4714\") " Jan 30 13:18:30 crc kubenswrapper[5039]: I0130 13:18:30.468402 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41d9f5fc-68a0-4b15-83ec-e6c186ac4714-bundle" (OuterVolumeSpecName: "bundle") pod "41d9f5fc-68a0-4b15-83ec-e6c186ac4714" (UID: "41d9f5fc-68a0-4b15-83ec-e6c186ac4714"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:18:30 crc kubenswrapper[5039]: I0130 13:18:30.484172 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41d9f5fc-68a0-4b15-83ec-e6c186ac4714-kube-api-access-g54b2" (OuterVolumeSpecName: "kube-api-access-g54b2") pod "41d9f5fc-68a0-4b15-83ec-e6c186ac4714" (UID: "41d9f5fc-68a0-4b15-83ec-e6c186ac4714"). InnerVolumeSpecName "kube-api-access-g54b2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:18:30 crc kubenswrapper[5039]: I0130 13:18:30.513727 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41d9f5fc-68a0-4b15-83ec-e6c186ac4714-util" (OuterVolumeSpecName: "util") pod "41d9f5fc-68a0-4b15-83ec-e6c186ac4714" (UID: "41d9f5fc-68a0-4b15-83ec-e6c186ac4714"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:18:30 crc kubenswrapper[5039]: I0130 13:18:30.568461 5039 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/41d9f5fc-68a0-4b15-83ec-e6c186ac4714-util\") on node \"crc\" DevicePath \"\"" Jan 30 13:18:30 crc kubenswrapper[5039]: I0130 13:18:30.568486 5039 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/41d9f5fc-68a0-4b15-83ec-e6c186ac4714-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:18:30 crc kubenswrapper[5039]: I0130 13:18:30.568496 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g54b2\" (UniqueName: \"kubernetes.io/projected/41d9f5fc-68a0-4b15-83ec-e6c186ac4714-kube-api-access-g54b2\") on node \"crc\" DevicePath \"\"" Jan 30 13:18:31 crc kubenswrapper[5039]: I0130 13:18:31.000685 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw" event={"ID":"41d9f5fc-68a0-4b15-83ec-e6c186ac4714","Type":"ContainerDied","Data":"3ceb7c134a85c606d4166746d632fe8227262cbca1756d4082d71cdb495075d1"} Jan 30 13:18:31 crc kubenswrapper[5039]: I0130 13:18:31.001131 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ceb7c134a85c606d4166746d632fe8227262cbca1756d4082d71cdb495075d1" Jan 30 13:18:31 crc kubenswrapper[5039]: I0130 13:18:31.000761 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw" Jan 30 13:18:39 crc kubenswrapper[5039]: I0130 13:18:39.990099 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-775f575c6c-2krlm"] Jan 30 13:18:39 crc kubenswrapper[5039]: E0130 13:18:39.991238 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8a9040d-c9a7-48df-a786-0079713a7cdc" containerName="console" Jan 30 13:18:39 crc kubenswrapper[5039]: I0130 13:18:39.991258 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8a9040d-c9a7-48df-a786-0079713a7cdc" containerName="console" Jan 30 13:18:39 crc kubenswrapper[5039]: E0130 13:18:39.991289 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41d9f5fc-68a0-4b15-83ec-e6c186ac4714" containerName="pull" Jan 30 13:18:39 crc kubenswrapper[5039]: I0130 13:18:39.991297 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="41d9f5fc-68a0-4b15-83ec-e6c186ac4714" containerName="pull" Jan 30 13:18:39 crc kubenswrapper[5039]: E0130 13:18:39.991314 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41d9f5fc-68a0-4b15-83ec-e6c186ac4714" containerName="extract" Jan 30 13:18:39 crc kubenswrapper[5039]: I0130 13:18:39.991324 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="41d9f5fc-68a0-4b15-83ec-e6c186ac4714" containerName="extract" Jan 30 13:18:39 crc kubenswrapper[5039]: E0130 13:18:39.991346 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41d9f5fc-68a0-4b15-83ec-e6c186ac4714" containerName="util" Jan 30 13:18:39 crc kubenswrapper[5039]: I0130 13:18:39.991354 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="41d9f5fc-68a0-4b15-83ec-e6c186ac4714" containerName="util" Jan 30 13:18:39 crc kubenswrapper[5039]: I0130 13:18:39.991565 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8a9040d-c9a7-48df-a786-0079713a7cdc" containerName="console" Jan 
30 13:18:39 crc kubenswrapper[5039]: I0130 13:18:39.991579 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="41d9f5fc-68a0-4b15-83ec-e6c186ac4714" containerName="extract" Jan 30 13:18:39 crc kubenswrapper[5039]: I0130 13:18:39.992314 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-775f575c6c-2krlm" Jan 30 13:18:39 crc kubenswrapper[5039]: I0130 13:18:39.994671 5039 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 30 13:18:39 crc kubenswrapper[5039]: I0130 13:18:39.994865 5039 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-tvjsd" Jan 30 13:18:39 crc kubenswrapper[5039]: I0130 13:18:39.995119 5039 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 30 13:18:39 crc kubenswrapper[5039]: I0130 13:18:39.995277 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 30 13:18:40 crc kubenswrapper[5039]: I0130 13:18:40.002493 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 30 13:18:40 crc kubenswrapper[5039]: I0130 13:18:40.017707 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-775f575c6c-2krlm"] Jan 30 13:18:40 crc kubenswrapper[5039]: I0130 13:18:40.091029 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/34ada733-5dd5-4176-a550-55b719e60a27-apiservice-cert\") pod \"metallb-operator-controller-manager-775f575c6c-2krlm\" (UID: \"34ada733-5dd5-4176-a550-55b719e60a27\") " pod="metallb-system/metallb-operator-controller-manager-775f575c6c-2krlm" Jan 30 13:18:40 crc kubenswrapper[5039]: I0130 13:18:40.091081 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrtbw\" (UniqueName: \"kubernetes.io/projected/34ada733-5dd5-4176-a550-55b719e60a27-kube-api-access-vrtbw\") pod \"metallb-operator-controller-manager-775f575c6c-2krlm\" (UID: \"34ada733-5dd5-4176-a550-55b719e60a27\") " pod="metallb-system/metallb-operator-controller-manager-775f575c6c-2krlm" Jan 30 13:18:40 crc kubenswrapper[5039]: I0130 13:18:40.091121 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/34ada733-5dd5-4176-a550-55b719e60a27-webhook-cert\") pod \"metallb-operator-controller-manager-775f575c6c-2krlm\" (UID: \"34ada733-5dd5-4176-a550-55b719e60a27\") " pod="metallb-system/metallb-operator-controller-manager-775f575c6c-2krlm" Jan 30 13:18:40 crc kubenswrapper[5039]: I0130 13:18:40.192173 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/34ada733-5dd5-4176-a550-55b719e60a27-apiservice-cert\") pod \"metallb-operator-controller-manager-775f575c6c-2krlm\" (UID: \"34ada733-5dd5-4176-a550-55b719e60a27\") " pod="metallb-system/metallb-operator-controller-manager-775f575c6c-2krlm" Jan 30 13:18:40 crc kubenswrapper[5039]: I0130 13:18:40.193169 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrtbw\" (UniqueName: 
\"kubernetes.io/projected/34ada733-5dd5-4176-a550-55b719e60a27-kube-api-access-vrtbw\") pod \"metallb-operator-controller-manager-775f575c6c-2krlm\" (UID: \"34ada733-5dd5-4176-a550-55b719e60a27\") " pod="metallb-system/metallb-operator-controller-manager-775f575c6c-2krlm" Jan 30 13:18:40 crc kubenswrapper[5039]: I0130 13:18:40.193212 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/34ada733-5dd5-4176-a550-55b719e60a27-webhook-cert\") pod \"metallb-operator-controller-manager-775f575c6c-2krlm\" (UID: \"34ada733-5dd5-4176-a550-55b719e60a27\") " pod="metallb-system/metallb-operator-controller-manager-775f575c6c-2krlm" Jan 30 13:18:40 crc kubenswrapper[5039]: I0130 13:18:40.198702 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/34ada733-5dd5-4176-a550-55b719e60a27-apiservice-cert\") pod \"metallb-operator-controller-manager-775f575c6c-2krlm\" (UID: \"34ada733-5dd5-4176-a550-55b719e60a27\") " pod="metallb-system/metallb-operator-controller-manager-775f575c6c-2krlm" Jan 30 13:18:40 crc kubenswrapper[5039]: I0130 13:18:40.199194 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/34ada733-5dd5-4176-a550-55b719e60a27-webhook-cert\") pod \"metallb-operator-controller-manager-775f575c6c-2krlm\" (UID: \"34ada733-5dd5-4176-a550-55b719e60a27\") " pod="metallb-system/metallb-operator-controller-manager-775f575c6c-2krlm" Jan 30 13:18:40 crc kubenswrapper[5039]: I0130 13:18:40.220438 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrtbw\" (UniqueName: \"kubernetes.io/projected/34ada733-5dd5-4176-a550-55b719e60a27-kube-api-access-vrtbw\") pod \"metallb-operator-controller-manager-775f575c6c-2krlm\" (UID: \"34ada733-5dd5-4176-a550-55b719e60a27\") " pod="metallb-system/metallb-operator-controller-manager-775f575c6c-2krlm" Jan 30 13:18:40 crc kubenswrapper[5039]: I0130 13:18:40.316082 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-775f575c6c-2krlm" Jan 30 13:18:40 crc kubenswrapper[5039]: I0130 13:18:40.329873 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-59964d97f8-vdp6d"] Jan 30 13:18:40 crc kubenswrapper[5039]: I0130 13:18:40.330611 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-59964d97f8-vdp6d" Jan 30 13:18:40 crc kubenswrapper[5039]: I0130 13:18:40.334515 5039 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 30 13:18:40 crc kubenswrapper[5039]: I0130 13:18:40.334624 5039 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 30 13:18:40 crc kubenswrapper[5039]: I0130 13:18:40.349361 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-59964d97f8-vdp6d"] Jan 30 13:18:40 crc kubenswrapper[5039]: I0130 13:18:40.349499 5039 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-mmhkg" Jan 30 13:18:40 crc kubenswrapper[5039]: I0130 13:18:40.395093 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qlzj\" (UniqueName: \"kubernetes.io/projected/9615eef8-e393-477f-b76f-d8219f085358-kube-api-access-6qlzj\") pod \"metallb-operator-webhook-server-59964d97f8-vdp6d\" (UID: \"9615eef8-e393-477f-b76f-d8219f085358\") " pod="metallb-system/metallb-operator-webhook-server-59964d97f8-vdp6d" Jan 30 13:18:40 crc kubenswrapper[5039]: I0130 13:18:40.395207 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9615eef8-e393-477f-b76f-d8219f085358-webhook-cert\") pod \"metallb-operator-webhook-server-59964d97f8-vdp6d\" (UID: \"9615eef8-e393-477f-b76f-d8219f085358\") " pod="metallb-system/metallb-operator-webhook-server-59964d97f8-vdp6d" Jan 30 13:18:40 crc kubenswrapper[5039]: I0130 13:18:40.395245 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9615eef8-e393-477f-b76f-d8219f085358-apiservice-cert\") pod \"metallb-operator-webhook-server-59964d97f8-vdp6d\" (UID: \"9615eef8-e393-477f-b76f-d8219f085358\") " pod="metallb-system/metallb-operator-webhook-server-59964d97f8-vdp6d" Jan 30 13:18:40 crc kubenswrapper[5039]: I0130 13:18:40.496717 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qlzj\" (UniqueName: \"kubernetes.io/projected/9615eef8-e393-477f-b76f-d8219f085358-kube-api-access-6qlzj\") pod \"metallb-operator-webhook-server-59964d97f8-vdp6d\" (UID: \"9615eef8-e393-477f-b76f-d8219f085358\") " pod="metallb-system/metallb-operator-webhook-server-59964d97f8-vdp6d" Jan 30 13:18:40 crc kubenswrapper[5039]: I0130 13:18:40.497026 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9615eef8-e393-477f-b76f-d8219f085358-webhook-cert\") pod \"metallb-operator-webhook-server-59964d97f8-vdp6d\" (UID: \"9615eef8-e393-477f-b76f-d8219f085358\") " pod="metallb-system/metallb-operator-webhook-server-59964d97f8-vdp6d" Jan 30 13:18:40 crc kubenswrapper[5039]: I0130 13:18:40.497065 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9615eef8-e393-477f-b76f-d8219f085358-apiservice-cert\") pod \"metallb-operator-webhook-server-59964d97f8-vdp6d\" (UID: \"9615eef8-e393-477f-b76f-d8219f085358\") " pod="metallb-system/metallb-operator-webhook-server-59964d97f8-vdp6d" Jan 30 13:18:40 crc kubenswrapper[5039]: I0130 
13:18:40.508833 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9615eef8-e393-477f-b76f-d8219f085358-apiservice-cert\") pod \"metallb-operator-webhook-server-59964d97f8-vdp6d\" (UID: \"9615eef8-e393-477f-b76f-d8219f085358\") " pod="metallb-system/metallb-operator-webhook-server-59964d97f8-vdp6d" Jan 30 13:18:40 crc kubenswrapper[5039]: I0130 13:18:40.513500 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9615eef8-e393-477f-b76f-d8219f085358-webhook-cert\") pod \"metallb-operator-webhook-server-59964d97f8-vdp6d\" (UID: \"9615eef8-e393-477f-b76f-d8219f085358\") " pod="metallb-system/metallb-operator-webhook-server-59964d97f8-vdp6d" Jan 30 13:18:40 crc kubenswrapper[5039]: I0130 13:18:40.514263 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qlzj\" (UniqueName: \"kubernetes.io/projected/9615eef8-e393-477f-b76f-d8219f085358-kube-api-access-6qlzj\") pod \"metallb-operator-webhook-server-59964d97f8-vdp6d\" (UID: \"9615eef8-e393-477f-b76f-d8219f085358\") " pod="metallb-system/metallb-operator-webhook-server-59964d97f8-vdp6d" Jan 30 13:18:40 crc kubenswrapper[5039]: I0130 13:18:40.709956 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-59964d97f8-vdp6d" Jan 30 13:18:40 crc kubenswrapper[5039]: I0130 13:18:40.803116 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-775f575c6c-2krlm"] Jan 30 13:18:40 crc kubenswrapper[5039]: W0130 13:18:40.820427 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34ada733_5dd5_4176_a550_55b719e60a27.slice/crio-20e9486b4d02a568d34eef603c906a12d9a249409332238816f39fd7764fc11e WatchSource:0}: Error finding container 20e9486b4d02a568d34eef603c906a12d9a249409332238816f39fd7764fc11e: Status 404 returned error can't find the container with id 20e9486b4d02a568d34eef603c906a12d9a249409332238816f39fd7764fc11e Jan 30 13:18:40 crc kubenswrapper[5039]: I0130 13:18:40.927055 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-59964d97f8-vdp6d"] Jan 30 13:18:40 crc kubenswrapper[5039]: W0130 13:18:40.933478 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9615eef8_e393_477f_b76f_d8219f085358.slice/crio-ec2dd7c8b05fa65e344b6ad4039a35ceed6921e34b74433275696d4184c9368a WatchSource:0}: Error finding container ec2dd7c8b05fa65e344b6ad4039a35ceed6921e34b74433275696d4184c9368a: Status 404 returned error can't find the container with id ec2dd7c8b05fa65e344b6ad4039a35ceed6921e34b74433275696d4184c9368a Jan 30 13:18:41 crc kubenswrapper[5039]: I0130 13:18:41.404397 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-775f575c6c-2krlm" event={"ID":"34ada733-5dd5-4176-a550-55b719e60a27","Type":"ContainerStarted","Data":"20e9486b4d02a568d34eef603c906a12d9a249409332238816f39fd7764fc11e"} Jan 30 13:18:41 crc kubenswrapper[5039]: I0130 13:18:41.405495 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-59964d97f8-vdp6d" 
event={"ID":"9615eef8-e393-477f-b76f-d8219f085358","Type":"ContainerStarted","Data":"ec2dd7c8b05fa65e344b6ad4039a35ceed6921e34b74433275696d4184c9368a"} Jan 30 13:18:44 crc kubenswrapper[5039]: I0130 13:18:44.446294 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-775f575c6c-2krlm" event={"ID":"34ada733-5dd5-4176-a550-55b719e60a27","Type":"ContainerStarted","Data":"6fedd81637b9df81453d9b122778b80faea84e3847370b7200913f28cd2dd2eb"} Jan 30 13:18:44 crc kubenswrapper[5039]: I0130 13:18:44.447613 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-775f575c6c-2krlm" Jan 30 13:18:44 crc kubenswrapper[5039]: I0130 13:18:44.477339 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-775f575c6c-2krlm" podStartSLOduration=2.915685238 podStartE2EDuration="5.477319807s" podCreationTimestamp="2026-01-30 13:18:39 +0000 UTC" firstStartedPulling="2026-01-30 13:18:40.823919201 +0000 UTC m=+885.484600418" lastFinishedPulling="2026-01-30 13:18:43.38555376 +0000 UTC m=+888.046234987" observedRunningTime="2026-01-30 13:18:44.472715363 +0000 UTC m=+889.133396610" watchObservedRunningTime="2026-01-30 13:18:44.477319807 +0000 UTC m=+889.138001044" Jan 30 13:18:45 crc kubenswrapper[5039]: I0130 13:18:45.453627 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-59964d97f8-vdp6d" event={"ID":"9615eef8-e393-477f-b76f-d8219f085358","Type":"ContainerStarted","Data":"59f6b2cda9e24a83c2ff38fef5938cbc404b789768a50d8c0cb13ba2e1e2dc38"} Jan 30 13:18:45 crc kubenswrapper[5039]: I0130 13:18:45.453963 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-59964d97f8-vdp6d" Jan 30 13:18:45 crc kubenswrapper[5039]: I0130 13:18:45.479812 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-59964d97f8-vdp6d" podStartSLOduration=1.349305137 podStartE2EDuration="5.479781652s" podCreationTimestamp="2026-01-30 13:18:40 +0000 UTC" firstStartedPulling="2026-01-30 13:18:40.936553809 +0000 UTC m=+885.597235036" lastFinishedPulling="2026-01-30 13:18:45.067030324 +0000 UTC m=+889.727711551" observedRunningTime="2026-01-30 13:18:45.47188813 +0000 UTC m=+890.132569397" watchObservedRunningTime="2026-01-30 13:18:45.479781652 +0000 UTC m=+890.140462919" Jan 30 13:19:00 crc kubenswrapper[5039]: I0130 13:19:00.718704 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-59964d97f8-vdp6d" Jan 30 13:19:20 crc kubenswrapper[5039]: I0130 13:19:20.319212 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-775f575c6c-2krlm" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.098353 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-sgnsl"] Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.101339 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-sgnsl" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.103039 5039 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.103591 5039 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-6j9wd" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.105636 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.110620 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-6n4dv"] Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.112202 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6n4dv" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.113926 5039 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.121929 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-6n4dv"] Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.179678 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/efd80df6-f7ef-4379-b160-9a38ca228667-reloader\") pod \"frr-k8s-sgnsl\" (UID: \"efd80df6-f7ef-4379-b160-9a38ca228667\") " pod="metallb-system/frr-k8s-sgnsl" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.179729 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk6zf\" (UniqueName: \"kubernetes.io/projected/1fe909fe-e213-4165-83d5-c84a38f84047-kube-api-access-rk6zf\") pod \"frr-k8s-webhook-server-7df86c4f6c-6n4dv\" (UID: \"1fe909fe-e213-4165-83d5-c84a38f84047\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6n4dv" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.179796 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/efd80df6-f7ef-4379-b160-9a38ca228667-metrics\") pod \"frr-k8s-sgnsl\" (UID: \"efd80df6-f7ef-4379-b160-9a38ca228667\") " pod="metallb-system/frr-k8s-sgnsl" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.179815 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/efd80df6-f7ef-4379-b160-9a38ca228667-frr-startup\") pod \"frr-k8s-sgnsl\" (UID: \"efd80df6-f7ef-4379-b160-9a38ca228667\") " pod="metallb-system/frr-k8s-sgnsl" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.179848 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjbww\" (UniqueName: \"kubernetes.io/projected/efd80df6-f7ef-4379-b160-9a38ca228667-kube-api-access-mjbww\") pod \"frr-k8s-sgnsl\" (UID: \"efd80df6-f7ef-4379-b160-9a38ca228667\") " pod="metallb-system/frr-k8s-sgnsl" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.179938 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1fe909fe-e213-4165-83d5-c84a38f84047-cert\") pod 
\"frr-k8s-webhook-server-7df86c4f6c-6n4dv\" (UID: \"1fe909fe-e213-4165-83d5-c84a38f84047\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6n4dv" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.179956 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/efd80df6-f7ef-4379-b160-9a38ca228667-frr-conf\") pod \"frr-k8s-sgnsl\" (UID: \"efd80df6-f7ef-4379-b160-9a38ca228667\") " pod="metallb-system/frr-k8s-sgnsl" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.180055 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/efd80df6-f7ef-4379-b160-9a38ca228667-frr-sockets\") pod \"frr-k8s-sgnsl\" (UID: \"efd80df6-f7ef-4379-b160-9a38ca228667\") " pod="metallb-system/frr-k8s-sgnsl" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.180119 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/efd80df6-f7ef-4379-b160-9a38ca228667-metrics-certs\") pod \"frr-k8s-sgnsl\" (UID: \"efd80df6-f7ef-4379-b160-9a38ca228667\") " pod="metallb-system/frr-k8s-sgnsl" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.202393 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-g8kqw"] Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.204008 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-g8kqw" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.207614 5039 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.210290 5039 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-gdrhs" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.210514 5039 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.210750 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.244274 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-msg56"] Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.245361 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-msg56" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.258002 5039 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.271261 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-msg56"] Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.282537 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/18c97a9f-5ac7-4319-8909-600474d0aabc-cert\") pod \"controller-6968d8fdc4-msg56\" (UID: \"18c97a9f-5ac7-4319-8909-600474d0aabc\") " pod="metallb-system/controller-6968d8fdc4-msg56" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.282607 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/efd80df6-f7ef-4379-b160-9a38ca228667-frr-sockets\") pod \"frr-k8s-sgnsl\" (UID: \"efd80df6-f7ef-4379-b160-9a38ca228667\") " pod="metallb-system/frr-k8s-sgnsl" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.282640 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a2e6599e-bad5-4e41-a6ef-312131617cc8-memberlist\") pod \"speaker-g8kqw\" (UID: \"a2e6599e-bad5-4e41-a6ef-312131617cc8\") " pod="metallb-system/speaker-g8kqw" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.282670 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/efd80df6-f7ef-4379-b160-9a38ca228667-metrics-certs\") pod \"frr-k8s-sgnsl\" (UID: \"efd80df6-f7ef-4379-b160-9a38ca228667\") " pod="metallb-system/frr-k8s-sgnsl" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.282692 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/efd80df6-f7ef-4379-b160-9a38ca228667-reloader\") pod \"frr-k8s-sgnsl\" (UID: \"efd80df6-f7ef-4379-b160-9a38ca228667\") " pod="metallb-system/frr-k8s-sgnsl" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.282711 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rk6zf\" (UniqueName: \"kubernetes.io/projected/1fe909fe-e213-4165-83d5-c84a38f84047-kube-api-access-rk6zf\") pod \"frr-k8s-webhook-server-7df86c4f6c-6n4dv\" (UID: \"1fe909fe-e213-4165-83d5-c84a38f84047\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6n4dv" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.282751 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/efd80df6-f7ef-4379-b160-9a38ca228667-metrics\") pod \"frr-k8s-sgnsl\" (UID: \"efd80df6-f7ef-4379-b160-9a38ca228667\") " pod="metallb-system/frr-k8s-sgnsl" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.282776 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tq77\" (UniqueName: \"kubernetes.io/projected/a2e6599e-bad5-4e41-a6ef-312131617cc8-kube-api-access-4tq77\") pod \"speaker-g8kqw\" (UID: \"a2e6599e-bad5-4e41-a6ef-312131617cc8\") " pod="metallb-system/speaker-g8kqw" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.282836 5039 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/efd80df6-f7ef-4379-b160-9a38ca228667-frr-startup\") pod \"frr-k8s-sgnsl\" (UID: \"efd80df6-f7ef-4379-b160-9a38ca228667\") " pod="metallb-system/frr-k8s-sgnsl" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.282872 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjbww\" (UniqueName: \"kubernetes.io/projected/efd80df6-f7ef-4379-b160-9a38ca228667-kube-api-access-mjbww\") pod \"frr-k8s-sgnsl\" (UID: \"efd80df6-f7ef-4379-b160-9a38ca228667\") " pod="metallb-system/frr-k8s-sgnsl" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.282901 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a2e6599e-bad5-4e41-a6ef-312131617cc8-metrics-certs\") pod \"speaker-g8kqw\" (UID: \"a2e6599e-bad5-4e41-a6ef-312131617cc8\") " pod="metallb-system/speaker-g8kqw" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.282926 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/a2e6599e-bad5-4e41-a6ef-312131617cc8-metallb-excludel2\") pod \"speaker-g8kqw\" (UID: \"a2e6599e-bad5-4e41-a6ef-312131617cc8\") " pod="metallb-system/speaker-g8kqw" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.282971 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1fe909fe-e213-4165-83d5-c84a38f84047-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-6n4dv\" (UID: \"1fe909fe-e213-4165-83d5-c84a38f84047\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6n4dv" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.282993 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18c97a9f-5ac7-4319-8909-600474d0aabc-metrics-certs\") pod \"controller-6968d8fdc4-msg56\" (UID: \"18c97a9f-5ac7-4319-8909-600474d0aabc\") " pod="metallb-system/controller-6968d8fdc4-msg56" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.283030 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/efd80df6-f7ef-4379-b160-9a38ca228667-frr-conf\") pod \"frr-k8s-sgnsl\" (UID: \"efd80df6-f7ef-4379-b160-9a38ca228667\") " pod="metallb-system/frr-k8s-sgnsl" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.283052 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkvpq\" (UniqueName: \"kubernetes.io/projected/18c97a9f-5ac7-4319-8909-600474d0aabc-kube-api-access-nkvpq\") pod \"controller-6968d8fdc4-msg56\" (UID: \"18c97a9f-5ac7-4319-8909-600474d0aabc\") " pod="metallb-system/controller-6968d8fdc4-msg56" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.283382 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/efd80df6-f7ef-4379-b160-9a38ca228667-metrics\") pod \"frr-k8s-sgnsl\" (UID: \"efd80df6-f7ef-4379-b160-9a38ca228667\") " pod="metallb-system/frr-k8s-sgnsl" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.283585 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/efd80df6-f7ef-4379-b160-9a38ca228667-frr-sockets\") pod \"frr-k8s-sgnsl\" 
(UID: \"efd80df6-f7ef-4379-b160-9a38ca228667\") " pod="metallb-system/frr-k8s-sgnsl" Jan 30 13:19:21 crc kubenswrapper[5039]: E0130 13:19:21.283662 5039 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Jan 30 13:19:21 crc kubenswrapper[5039]: E0130 13:19:21.283703 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/efd80df6-f7ef-4379-b160-9a38ca228667-metrics-certs podName:efd80df6-f7ef-4379-b160-9a38ca228667 nodeName:}" failed. No retries permitted until 2026-01-30 13:19:21.78368813 +0000 UTC m=+926.444369357 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/efd80df6-f7ef-4379-b160-9a38ca228667-metrics-certs") pod "frr-k8s-sgnsl" (UID: "efd80df6-f7ef-4379-b160-9a38ca228667") : secret "frr-k8s-certs-secret" not found Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.283982 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/efd80df6-f7ef-4379-b160-9a38ca228667-reloader\") pod \"frr-k8s-sgnsl\" (UID: \"efd80df6-f7ef-4379-b160-9a38ca228667\") " pod="metallb-system/frr-k8s-sgnsl" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.284181 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/efd80df6-f7ef-4379-b160-9a38ca228667-frr-conf\") pod \"frr-k8s-sgnsl\" (UID: \"efd80df6-f7ef-4379-b160-9a38ca228667\") " pod="metallb-system/frr-k8s-sgnsl" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.286997 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/efd80df6-f7ef-4379-b160-9a38ca228667-frr-startup\") pod \"frr-k8s-sgnsl\" (UID: \"efd80df6-f7ef-4379-b160-9a38ca228667\") " pod="metallb-system/frr-k8s-sgnsl" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.293376 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1fe909fe-e213-4165-83d5-c84a38f84047-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-6n4dv\" (UID: \"1fe909fe-e213-4165-83d5-c84a38f84047\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6n4dv" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.321686 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjbww\" (UniqueName: \"kubernetes.io/projected/efd80df6-f7ef-4379-b160-9a38ca228667-kube-api-access-mjbww\") pod \"frr-k8s-sgnsl\" (UID: \"efd80df6-f7ef-4379-b160-9a38ca228667\") " pod="metallb-system/frr-k8s-sgnsl" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.326737 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rk6zf\" (UniqueName: \"kubernetes.io/projected/1fe909fe-e213-4165-83d5-c84a38f84047-kube-api-access-rk6zf\") pod \"frr-k8s-webhook-server-7df86c4f6c-6n4dv\" (UID: \"1fe909fe-e213-4165-83d5-c84a38f84047\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6n4dv" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.383907 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tq77\" (UniqueName: \"kubernetes.io/projected/a2e6599e-bad5-4e41-a6ef-312131617cc8-kube-api-access-4tq77\") pod \"speaker-g8kqw\" (UID: \"a2e6599e-bad5-4e41-a6ef-312131617cc8\") " pod="metallb-system/speaker-g8kqw" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.383975 
Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.383999 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/a2e6599e-bad5-4e41-a6ef-312131617cc8-metallb-excludel2\") pod \"speaker-g8kqw\" (UID: \"a2e6599e-bad5-4e41-a6ef-312131617cc8\") " pod="metallb-system/speaker-g8kqw"
Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.384041 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18c97a9f-5ac7-4319-8909-600474d0aabc-metrics-certs\") pod \"controller-6968d8fdc4-msg56\" (UID: \"18c97a9f-5ac7-4319-8909-600474d0aabc\") " pod="metallb-system/controller-6968d8fdc4-msg56"
Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.384058 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkvpq\" (UniqueName: \"kubernetes.io/projected/18c97a9f-5ac7-4319-8909-600474d0aabc-kube-api-access-nkvpq\") pod \"controller-6968d8fdc4-msg56\" (UID: \"18c97a9f-5ac7-4319-8909-600474d0aabc\") " pod="metallb-system/controller-6968d8fdc4-msg56"
Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.384081 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/18c97a9f-5ac7-4319-8909-600474d0aabc-cert\") pod \"controller-6968d8fdc4-msg56\" (UID: \"18c97a9f-5ac7-4319-8909-600474d0aabc\") " pod="metallb-system/controller-6968d8fdc4-msg56"
Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.384103 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a2e6599e-bad5-4e41-a6ef-312131617cc8-memberlist\") pod \"speaker-g8kqw\" (UID: \"a2e6599e-bad5-4e41-a6ef-312131617cc8\") " pod="metallb-system/speaker-g8kqw"
Jan 30 13:19:21 crc kubenswrapper[5039]: E0130 13:19:21.384129 5039 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found
Jan 30 13:19:21 crc kubenswrapper[5039]: E0130 13:19:21.384188 5039 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Jan 30 13:19:21 crc kubenswrapper[5039]: E0130 13:19:21.384199 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a2e6599e-bad5-4e41-a6ef-312131617cc8-metrics-certs podName:a2e6599e-bad5-4e41-a6ef-312131617cc8 nodeName:}" failed. No retries permitted until 2026-01-30 13:19:21.884180819 +0000 UTC m=+926.544862046 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a2e6599e-bad5-4e41-a6ef-312131617cc8-metrics-certs") pod "speaker-g8kqw" (UID: "a2e6599e-bad5-4e41-a6ef-312131617cc8") : secret "speaker-certs-secret" not found
Jan 30 13:19:21 crc kubenswrapper[5039]: E0130 13:19:21.384215 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a2e6599e-bad5-4e41-a6ef-312131617cc8-memberlist podName:a2e6599e-bad5-4e41-a6ef-312131617cc8 nodeName:}" failed. No retries permitted until 2026-01-30 13:19:21.88420602 +0000 UTC m=+926.544887247 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/a2e6599e-bad5-4e41-a6ef-312131617cc8-memberlist") pod "speaker-g8kqw" (UID: "a2e6599e-bad5-4e41-a6ef-312131617cc8") : secret "metallb-memberlist" not found
Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.384783 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/a2e6599e-bad5-4e41-a6ef-312131617cc8-metallb-excludel2\") pod \"speaker-g8kqw\" (UID: \"a2e6599e-bad5-4e41-a6ef-312131617cc8\") " pod="metallb-system/speaker-g8kqw"
Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.388814 5039 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.389510 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18c97a9f-5ac7-4319-8909-600474d0aabc-metrics-certs\") pod \"controller-6968d8fdc4-msg56\" (UID: \"18c97a9f-5ac7-4319-8909-600474d0aabc\") " pod="metallb-system/controller-6968d8fdc4-msg56"
Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.398584 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/18c97a9f-5ac7-4319-8909-600474d0aabc-cert\") pod \"controller-6968d8fdc4-msg56\" (UID: \"18c97a9f-5ac7-4319-8909-600474d0aabc\") " pod="metallb-system/controller-6968d8fdc4-msg56"
Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.400200 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tq77\" (UniqueName: \"kubernetes.io/projected/a2e6599e-bad5-4e41-a6ef-312131617cc8-kube-api-access-4tq77\") pod \"speaker-g8kqw\" (UID: \"a2e6599e-bad5-4e41-a6ef-312131617cc8\") " pod="metallb-system/speaker-g8kqw"
Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.407760 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkvpq\" (UniqueName: \"kubernetes.io/projected/18c97a9f-5ac7-4319-8909-600474d0aabc-kube-api-access-nkvpq\") pod \"controller-6968d8fdc4-msg56\" (UID: \"18c97a9f-5ac7-4319-8909-600474d0aabc\") " pod="metallb-system/controller-6968d8fdc4-msg56"
Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.430928 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6n4dv"
Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.562286 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-msg56"
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-msg56" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.643462 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-6n4dv"] Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.696612 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6n4dv" event={"ID":"1fe909fe-e213-4165-83d5-c84a38f84047","Type":"ContainerStarted","Data":"6fe67ed649fd7f8a77cad34ad869bf6154ba1de6b2c40927900f35dfababb47d"} Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.741141 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-msg56"] Jan 30 13:19:21 crc kubenswrapper[5039]: W0130 13:19:21.743876 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18c97a9f_5ac7_4319_8909_600474d0aabc.slice/crio-43493047b72c413cbba4ffea2fe37f9cba220b84d23862e7df0722285bf1a68b WatchSource:0}: Error finding container 43493047b72c413cbba4ffea2fe37f9cba220b84d23862e7df0722285bf1a68b: Status 404 returned error can't find the container with id 43493047b72c413cbba4ffea2fe37f9cba220b84d23862e7df0722285bf1a68b Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.788756 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/efd80df6-f7ef-4379-b160-9a38ca228667-metrics-certs\") pod \"frr-k8s-sgnsl\" (UID: \"efd80df6-f7ef-4379-b160-9a38ca228667\") " pod="metallb-system/frr-k8s-sgnsl" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.794604 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/efd80df6-f7ef-4379-b160-9a38ca228667-metrics-certs\") pod \"frr-k8s-sgnsl\" (UID: \"efd80df6-f7ef-4379-b160-9a38ca228667\") " pod="metallb-system/frr-k8s-sgnsl" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.891407 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a2e6599e-bad5-4e41-a6ef-312131617cc8-metrics-certs\") pod \"speaker-g8kqw\" (UID: \"a2e6599e-bad5-4e41-a6ef-312131617cc8\") " pod="metallb-system/speaker-g8kqw" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.891580 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a2e6599e-bad5-4e41-a6ef-312131617cc8-memberlist\") pod \"speaker-g8kqw\" (UID: \"a2e6599e-bad5-4e41-a6ef-312131617cc8\") " pod="metallb-system/speaker-g8kqw" Jan 30 13:19:21 crc kubenswrapper[5039]: E0130 13:19:21.891887 5039 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 30 13:19:21 crc kubenswrapper[5039]: E0130 13:19:21.891977 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a2e6599e-bad5-4e41-a6ef-312131617cc8-memberlist podName:a2e6599e-bad5-4e41-a6ef-312131617cc8 nodeName:}" failed. No retries permitted until 2026-01-30 13:19:22.891947733 +0000 UTC m=+927.552629000 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/a2e6599e-bad5-4e41-a6ef-312131617cc8-memberlist") pod "speaker-g8kqw" (UID: "a2e6599e-bad5-4e41-a6ef-312131617cc8") : secret "metallb-memberlist" not found Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.899551 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a2e6599e-bad5-4e41-a6ef-312131617cc8-metrics-certs\") pod \"speaker-g8kqw\" (UID: \"a2e6599e-bad5-4e41-a6ef-312131617cc8\") " pod="metallb-system/speaker-g8kqw" Jan 30 13:19:22 crc kubenswrapper[5039]: I0130 13:19:22.024225 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-sgnsl" Jan 30 13:19:22 crc kubenswrapper[5039]: I0130 13:19:22.704304 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sgnsl" event={"ID":"efd80df6-f7ef-4379-b160-9a38ca228667","Type":"ContainerStarted","Data":"6ec92c380786f458e0355c2616fd07551e06343beafffbed675256f19e5b4fc6"} Jan 30 13:19:22 crc kubenswrapper[5039]: I0130 13:19:22.706912 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-msg56" event={"ID":"18c97a9f-5ac7-4319-8909-600474d0aabc","Type":"ContainerStarted","Data":"f4a5d4025cd6beba0438f8c7c4c2b9ea9a9b47ee3a82fe1a25a1e05a3d0ea781"} Jan 30 13:19:22 crc kubenswrapper[5039]: I0130 13:19:22.706949 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-msg56" event={"ID":"18c97a9f-5ac7-4319-8909-600474d0aabc","Type":"ContainerStarted","Data":"df2aad981515f1b094cb1b464b8d212166a3334b22ba0cb9f20018ac1fa4055f"} Jan 30 13:19:22 crc kubenswrapper[5039]: I0130 13:19:22.706961 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-msg56" event={"ID":"18c97a9f-5ac7-4319-8909-600474d0aabc","Type":"ContainerStarted","Data":"43493047b72c413cbba4ffea2fe37f9cba220b84d23862e7df0722285bf1a68b"} Jan 30 13:19:22 crc kubenswrapper[5039]: I0130 13:19:22.707188 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-msg56" Jan 30 13:19:22 crc kubenswrapper[5039]: I0130 13:19:22.731114 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-msg56" podStartSLOduration=1.731089627 podStartE2EDuration="1.731089627s" podCreationTimestamp="2026-01-30 13:19:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:19:22.728517187 +0000 UTC m=+927.389198484" watchObservedRunningTime="2026-01-30 13:19:22.731089627 +0000 UTC m=+927.391770914" Jan 30 13:19:22 crc kubenswrapper[5039]: I0130 13:19:22.903508 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a2e6599e-bad5-4e41-a6ef-312131617cc8-memberlist\") pod \"speaker-g8kqw\" (UID: \"a2e6599e-bad5-4e41-a6ef-312131617cc8\") " pod="metallb-system/speaker-g8kqw" Jan 30 13:19:22 crc kubenswrapper[5039]: I0130 13:19:22.923661 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a2e6599e-bad5-4e41-a6ef-312131617cc8-memberlist\") pod \"speaker-g8kqw\" (UID: \"a2e6599e-bad5-4e41-a6ef-312131617cc8\") " pod="metallb-system/speaker-g8kqw" Jan 30 13:19:23 crc kubenswrapper[5039]: I0130 13:19:23.022339 5039 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-g8kqw" Jan 30 13:19:23 crc kubenswrapper[5039]: I0130 13:19:23.722770 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-g8kqw" event={"ID":"a2e6599e-bad5-4e41-a6ef-312131617cc8","Type":"ContainerStarted","Data":"3c627a08cd935776299c26c8776c3ef2ea2090e7be4bf5e3c5511f39485952be"} Jan 30 13:19:23 crc kubenswrapper[5039]: I0130 13:19:23.723163 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-g8kqw" event={"ID":"a2e6599e-bad5-4e41-a6ef-312131617cc8","Type":"ContainerStarted","Data":"73cbdb5a962e79f48883f760637ac733fc6fc9ecd4cef79119fb686091b18b4f"} Jan 30 13:19:23 crc kubenswrapper[5039]: I0130 13:19:23.723183 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-g8kqw" event={"ID":"a2e6599e-bad5-4e41-a6ef-312131617cc8","Type":"ContainerStarted","Data":"ba31f89736f2aa0d12ed2701cd086bc6b6af59469a447129e18d34b7a7238d4a"} Jan 30 13:19:23 crc kubenswrapper[5039]: I0130 13:19:23.723335 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-g8kqw" Jan 30 13:19:23 crc kubenswrapper[5039]: I0130 13:19:23.767422 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-g8kqw" podStartSLOduration=2.767402475 podStartE2EDuration="2.767402475s" podCreationTimestamp="2026-01-30 13:19:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:19:23.764831165 +0000 UTC m=+928.425512412" watchObservedRunningTime="2026-01-30 13:19:23.767402475 +0000 UTC m=+928.428083702" Jan 30 13:19:29 crc kubenswrapper[5039]: I0130 13:19:29.771542 5039 generic.go:334] "Generic (PLEG): container finished" podID="efd80df6-f7ef-4379-b160-9a38ca228667" containerID="73935ae12d702dcf13ce8d22a46fbc79825e07716ad8b77ffb4ee345f931eddc" exitCode=0 Jan 30 13:19:29 crc kubenswrapper[5039]: I0130 13:19:29.771632 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sgnsl" event={"ID":"efd80df6-f7ef-4379-b160-9a38ca228667","Type":"ContainerDied","Data":"73935ae12d702dcf13ce8d22a46fbc79825e07716ad8b77ffb4ee345f931eddc"} Jan 30 13:19:29 crc kubenswrapper[5039]: I0130 13:19:29.775874 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6n4dv" event={"ID":"1fe909fe-e213-4165-83d5-c84a38f84047","Type":"ContainerStarted","Data":"41d730fa59afaa0426637cd6cc5c13aaf5d1d1b0af093906357a30a28a2d909a"} Jan 30 13:19:29 crc kubenswrapper[5039]: I0130 13:19:29.776213 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6n4dv" Jan 30 13:19:29 crc kubenswrapper[5039]: I0130 13:19:29.845476 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6n4dv" podStartSLOduration=1.538837915 podStartE2EDuration="8.845452896s" podCreationTimestamp="2026-01-30 13:19:21 +0000 UTC" firstStartedPulling="2026-01-30 13:19:21.656126882 +0000 UTC m=+926.316808109" lastFinishedPulling="2026-01-30 13:19:28.962741863 +0000 UTC m=+933.623423090" observedRunningTime="2026-01-30 13:19:29.84299782 +0000 UTC m=+934.503679077" watchObservedRunningTime="2026-01-30 13:19:29.845452896 +0000 UTC m=+934.506134163" Jan 30 13:19:30 crc kubenswrapper[5039]: I0130 13:19:30.786904 5039 generic.go:334] 
"Generic (PLEG): container finished" podID="efd80df6-f7ef-4379-b160-9a38ca228667" containerID="cb16b36ccfcebb82dde94fd88a08938449af2d6d3e742dcf374f519f302d9dd3" exitCode=0 Jan 30 13:19:30 crc kubenswrapper[5039]: I0130 13:19:30.787125 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sgnsl" event={"ID":"efd80df6-f7ef-4379-b160-9a38ca228667","Type":"ContainerDied","Data":"cb16b36ccfcebb82dde94fd88a08938449af2d6d3e742dcf374f519f302d9dd3"} Jan 30 13:19:31 crc kubenswrapper[5039]: I0130 13:19:31.568876 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-msg56" Jan 30 13:19:31 crc kubenswrapper[5039]: I0130 13:19:31.795548 5039 generic.go:334] "Generic (PLEG): container finished" podID="efd80df6-f7ef-4379-b160-9a38ca228667" containerID="05e6400917138290b291bab0a35598a8480838c02dc4e13769d719ac7dd32e16" exitCode=0 Jan 30 13:19:31 crc kubenswrapper[5039]: I0130 13:19:31.795602 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sgnsl" event={"ID":"efd80df6-f7ef-4379-b160-9a38ca228667","Type":"ContainerDied","Data":"05e6400917138290b291bab0a35598a8480838c02dc4e13769d719ac7dd32e16"} Jan 30 13:19:32 crc kubenswrapper[5039]: I0130 13:19:32.808595 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sgnsl" event={"ID":"efd80df6-f7ef-4379-b160-9a38ca228667","Type":"ContainerStarted","Data":"08122bfcf26f145104c44cb6b6c63e7f746a2498fc3046a34abacef1989e5589"} Jan 30 13:19:32 crc kubenswrapper[5039]: I0130 13:19:32.809071 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-sgnsl" Jan 30 13:19:32 crc kubenswrapper[5039]: I0130 13:19:32.809084 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sgnsl" event={"ID":"efd80df6-f7ef-4379-b160-9a38ca228667","Type":"ContainerStarted","Data":"e97a5041f116d33a46135a779711c94be17cffe963e06d9e265865ad4a7f8e5b"} Jan 30 13:19:32 crc kubenswrapper[5039]: I0130 13:19:32.809094 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sgnsl" event={"ID":"efd80df6-f7ef-4379-b160-9a38ca228667","Type":"ContainerStarted","Data":"19804e8b60e1d2c2d90c5711931c3db6264347dcca27a9977d7eaee3077de0c8"} Jan 30 13:19:32 crc kubenswrapper[5039]: I0130 13:19:32.809107 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sgnsl" event={"ID":"efd80df6-f7ef-4379-b160-9a38ca228667","Type":"ContainerStarted","Data":"b7955478f584cd608d8a7b5f5c1db6a2c36c3344d58c6038eb6745d0e9ffe9d5"} Jan 30 13:19:32 crc kubenswrapper[5039]: I0130 13:19:32.809118 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sgnsl" event={"ID":"efd80df6-f7ef-4379-b160-9a38ca228667","Type":"ContainerStarted","Data":"fbb8cc6ead8bcd5b1a8604ff4068b54ab0fd5ded4616789a33ef00ccbfed2cff"} Jan 30 13:19:32 crc kubenswrapper[5039]: I0130 13:19:32.809129 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sgnsl" event={"ID":"efd80df6-f7ef-4379-b160-9a38ca228667","Type":"ContainerStarted","Data":"5129298f28a898e82c3833dfc13db8e263502d794f3feae8269266a16156ee7a"} Jan 30 13:19:32 crc kubenswrapper[5039]: I0130 13:19:32.830701 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-sgnsl" podStartSLOduration=5.0009239 podStartE2EDuration="11.830670137s" podCreationTimestamp="2026-01-30 13:19:21 +0000 UTC" firstStartedPulling="2026-01-30 
Jan 30 13:19:33 crc kubenswrapper[5039]: I0130 13:19:33.027264 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-g8kqw"
Jan 30 13:19:34 crc kubenswrapper[5039]: I0130 13:19:34.485438 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv"]
Jan 30 13:19:34 crc kubenswrapper[5039]: I0130 13:19:34.487694 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv"
Jan 30 13:19:34 crc kubenswrapper[5039]: I0130 13:19:34.490059 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Jan 30 13:19:34 crc kubenswrapper[5039]: I0130 13:19:34.494177 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv"]
Jan 30 13:19:34 crc kubenswrapper[5039]: I0130 13:19:34.683626 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fefedf33-4c19-4945-b31f-75e19fea3dff-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv\" (UID: \"fefedf33-4c19-4945-b31f-75e19fea3dff\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv"
Jan 30 13:19:34 crc kubenswrapper[5039]: I0130 13:19:34.683707 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fefedf33-4c19-4945-b31f-75e19fea3dff-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv\" (UID: \"fefedf33-4c19-4945-b31f-75e19fea3dff\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv"
Jan 30 13:19:34 crc kubenswrapper[5039]: I0130 13:19:34.683858 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4bmm\" (UniqueName: \"kubernetes.io/projected/fefedf33-4c19-4945-b31f-75e19fea3dff-kube-api-access-r4bmm\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv\" (UID: \"fefedf33-4c19-4945-b31f-75e19fea3dff\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv"
Jan 30 13:19:34 crc kubenswrapper[5039]: I0130 13:19:34.785081 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fefedf33-4c19-4945-b31f-75e19fea3dff-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv\" (UID: \"fefedf33-4c19-4945-b31f-75e19fea3dff\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv"
Jan 30 13:19:34 crc kubenswrapper[5039]: I0130 13:19:34.785146 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fefedf33-4c19-4945-b31f-75e19fea3dff-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv\" (UID: \"fefedf33-4c19-4945-b31f-75e19fea3dff\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv"
Jan 30 13:19:34 crc kubenswrapper[5039]: I0130 13:19:34.785186 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4bmm\" (UniqueName: \"kubernetes.io/projected/fefedf33-4c19-4945-b31f-75e19fea3dff-kube-api-access-r4bmm\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv\" (UID: \"fefedf33-4c19-4945-b31f-75e19fea3dff\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv"
Jan 30 13:19:34 crc kubenswrapper[5039]: I0130 13:19:34.785879 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fefedf33-4c19-4945-b31f-75e19fea3dff-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv\" (UID: \"fefedf33-4c19-4945-b31f-75e19fea3dff\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv"
Jan 30 13:19:34 crc kubenswrapper[5039]: I0130 13:19:34.786042 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fefedf33-4c19-4945-b31f-75e19fea3dff-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv\" (UID: \"fefedf33-4c19-4945-b31f-75e19fea3dff\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv"
Jan 30 13:19:34 crc kubenswrapper[5039]: I0130 13:19:34.808858 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4bmm\" (UniqueName: \"kubernetes.io/projected/fefedf33-4c19-4945-b31f-75e19fea3dff-kube-api-access-r4bmm\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv\" (UID: \"fefedf33-4c19-4945-b31f-75e19fea3dff\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv"
Jan 30 13:19:35 crc kubenswrapper[5039]: I0130 13:19:35.106825 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv"
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv" Jan 30 13:19:35 crc kubenswrapper[5039]: I0130 13:19:35.386006 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv"] Jan 30 13:19:35 crc kubenswrapper[5039]: W0130 13:19:35.388402 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfefedf33_4c19_4945_b31f_75e19fea3dff.slice/crio-a60be8301c03c96070e2442aa515b39e7e1cc2b35f3b2cafa187054d05b4116f WatchSource:0}: Error finding container a60be8301c03c96070e2442aa515b39e7e1cc2b35f3b2cafa187054d05b4116f: Status 404 returned error can't find the container with id a60be8301c03c96070e2442aa515b39e7e1cc2b35f3b2cafa187054d05b4116f Jan 30 13:19:35 crc kubenswrapper[5039]: I0130 13:19:35.829659 5039 generic.go:334] "Generic (PLEG): container finished" podID="fefedf33-4c19-4945-b31f-75e19fea3dff" containerID="191d0688b308ad8dfc0a341b7c53c6bb86149f16ecbcc8b65dcafa14508ed93d" exitCode=0 Jan 30 13:19:35 crc kubenswrapper[5039]: I0130 13:19:35.829808 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv" event={"ID":"fefedf33-4c19-4945-b31f-75e19fea3dff","Type":"ContainerDied","Data":"191d0688b308ad8dfc0a341b7c53c6bb86149f16ecbcc8b65dcafa14508ed93d"} Jan 30 13:19:35 crc kubenswrapper[5039]: I0130 13:19:35.829965 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv" event={"ID":"fefedf33-4c19-4945-b31f-75e19fea3dff","Type":"ContainerStarted","Data":"a60be8301c03c96070e2442aa515b39e7e1cc2b35f3b2cafa187054d05b4116f"} Jan 30 13:19:37 crc kubenswrapper[5039]: I0130 13:19:37.024897 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-sgnsl" Jan 30 13:19:37 crc kubenswrapper[5039]: I0130 13:19:37.076805 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-sgnsl" Jan 30 13:19:39 crc kubenswrapper[5039]: I0130 13:19:39.855540 5039 generic.go:334] "Generic (PLEG): container finished" podID="fefedf33-4c19-4945-b31f-75e19fea3dff" containerID="66c71485af1ff5c30502b40d17741e6b26adaa407d78570198455aaeb412d06d" exitCode=0 Jan 30 13:19:39 crc kubenswrapper[5039]: I0130 13:19:39.855616 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv" event={"ID":"fefedf33-4c19-4945-b31f-75e19fea3dff","Type":"ContainerDied","Data":"66c71485af1ff5c30502b40d17741e6b26adaa407d78570198455aaeb412d06d"} Jan 30 13:19:40 crc kubenswrapper[5039]: I0130 13:19:40.865854 5039 generic.go:334] "Generic (PLEG): container finished" podID="fefedf33-4c19-4945-b31f-75e19fea3dff" containerID="619b7e01554e8ca32f4cc55957e23faebd8a5e7246aeaea7f999961f149dfdd3" exitCode=0 Jan 30 13:19:40 crc kubenswrapper[5039]: I0130 13:19:40.865900 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv" event={"ID":"fefedf33-4c19-4945-b31f-75e19fea3dff","Type":"ContainerDied","Data":"619b7e01554e8ca32f4cc55957e23faebd8a5e7246aeaea7f999961f149dfdd3"} Jan 30 13:19:41 crc kubenswrapper[5039]: I0130 13:19:41.436932 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6n4dv" Jan 30 13:19:42 crc kubenswrapper[5039]: I0130 13:19:42.029658 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-sgnsl" Jan 30 13:19:42 crc kubenswrapper[5039]: I0130 13:19:42.160687 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv" Jan 30 13:19:42 crc kubenswrapper[5039]: I0130 13:19:42.186120 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4bmm\" (UniqueName: \"kubernetes.io/projected/fefedf33-4c19-4945-b31f-75e19fea3dff-kube-api-access-r4bmm\") pod \"fefedf33-4c19-4945-b31f-75e19fea3dff\" (UID: \"fefedf33-4c19-4945-b31f-75e19fea3dff\") " Jan 30 13:19:42 crc kubenswrapper[5039]: I0130 13:19:42.186184 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fefedf33-4c19-4945-b31f-75e19fea3dff-util\") pod \"fefedf33-4c19-4945-b31f-75e19fea3dff\" (UID: \"fefedf33-4c19-4945-b31f-75e19fea3dff\") " Jan 30 13:19:42 crc kubenswrapper[5039]: I0130 13:19:42.193309 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fefedf33-4c19-4945-b31f-75e19fea3dff-kube-api-access-r4bmm" (OuterVolumeSpecName: "kube-api-access-r4bmm") pod "fefedf33-4c19-4945-b31f-75e19fea3dff" (UID: "fefedf33-4c19-4945-b31f-75e19fea3dff"). InnerVolumeSpecName "kube-api-access-r4bmm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:19:42 crc kubenswrapper[5039]: I0130 13:19:42.196043 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fefedf33-4c19-4945-b31f-75e19fea3dff-util" (OuterVolumeSpecName: "util") pod "fefedf33-4c19-4945-b31f-75e19fea3dff" (UID: "fefedf33-4c19-4945-b31f-75e19fea3dff"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:19:42 crc kubenswrapper[5039]: I0130 13:19:42.287851 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fefedf33-4c19-4945-b31f-75e19fea3dff-bundle\") pod \"fefedf33-4c19-4945-b31f-75e19fea3dff\" (UID: \"fefedf33-4c19-4945-b31f-75e19fea3dff\") " Jan 30 13:19:42 crc kubenswrapper[5039]: I0130 13:19:42.289156 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fefedf33-4c19-4945-b31f-75e19fea3dff-bundle" (OuterVolumeSpecName: "bundle") pod "fefedf33-4c19-4945-b31f-75e19fea3dff" (UID: "fefedf33-4c19-4945-b31f-75e19fea3dff"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:19:42 crc kubenswrapper[5039]: I0130 13:19:42.290586 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4bmm\" (UniqueName: \"kubernetes.io/projected/fefedf33-4c19-4945-b31f-75e19fea3dff-kube-api-access-r4bmm\") on node \"crc\" DevicePath \"\"" Jan 30 13:19:42 crc kubenswrapper[5039]: I0130 13:19:42.290629 5039 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fefedf33-4c19-4945-b31f-75e19fea3dff-util\") on node \"crc\" DevicePath \"\"" Jan 30 13:19:42 crc kubenswrapper[5039]: I0130 13:19:42.392727 5039 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fefedf33-4c19-4945-b31f-75e19fea3dff-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:19:42 crc kubenswrapper[5039]: I0130 13:19:42.880210 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv" event={"ID":"fefedf33-4c19-4945-b31f-75e19fea3dff","Type":"ContainerDied","Data":"a60be8301c03c96070e2442aa515b39e7e1cc2b35f3b2cafa187054d05b4116f"} Jan 30 13:19:42 crc kubenswrapper[5039]: I0130 13:19:42.880251 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a60be8301c03c96070e2442aa515b39e7e1cc2b35f3b2cafa187054d05b4116f" Jan 30 13:19:42 crc kubenswrapper[5039]: I0130 13:19:42.880233 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv" Jan 30 13:19:48 crc kubenswrapper[5039]: I0130 13:19:48.450072 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-brv7v"] Jan 30 13:19:48 crc kubenswrapper[5039]: E0130 13:19:48.451036 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fefedf33-4c19-4945-b31f-75e19fea3dff" containerName="util" Jan 30 13:19:48 crc kubenswrapper[5039]: I0130 13:19:48.451057 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="fefedf33-4c19-4945-b31f-75e19fea3dff" containerName="util" Jan 30 13:19:48 crc kubenswrapper[5039]: E0130 13:19:48.451072 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fefedf33-4c19-4945-b31f-75e19fea3dff" containerName="pull" Jan 30 13:19:48 crc kubenswrapper[5039]: I0130 13:19:48.451082 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="fefedf33-4c19-4945-b31f-75e19fea3dff" containerName="pull" Jan 30 13:19:48 crc kubenswrapper[5039]: E0130 13:19:48.451100 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fefedf33-4c19-4945-b31f-75e19fea3dff" containerName="extract" Jan 30 13:19:48 crc kubenswrapper[5039]: I0130 13:19:48.451111 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="fefedf33-4c19-4945-b31f-75e19fea3dff" containerName="extract" Jan 30 13:19:48 crc kubenswrapper[5039]: I0130 13:19:48.451308 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="fefedf33-4c19-4945-b31f-75e19fea3dff" containerName="extract" Jan 30 13:19:48 crc kubenswrapper[5039]: I0130 13:19:48.451818 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-brv7v" Jan 30 13:19:48 crc kubenswrapper[5039]: I0130 13:19:48.453674 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Jan 30 13:19:48 crc kubenswrapper[5039]: I0130 13:19:48.453742 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Jan 30 13:19:48 crc kubenswrapper[5039]: I0130 13:19:48.453754 5039 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-vxzx7" Jan 30 13:19:48 crc kubenswrapper[5039]: I0130 13:19:48.466960 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-brv7v"] Jan 30 13:19:48 crc kubenswrapper[5039]: I0130 13:19:48.566132 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8dbf17d5-0b7e-492d-b613-a7900d36fad8-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-brv7v\" (UID: \"8dbf17d5-0b7e-492d-b613-a7900d36fad8\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-brv7v" Jan 30 13:19:48 crc kubenswrapper[5039]: I0130 13:19:48.566241 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm9zq\" (UniqueName: \"kubernetes.io/projected/8dbf17d5-0b7e-492d-b613-a7900d36fad8-kube-api-access-pm9zq\") pod \"cert-manager-operator-controller-manager-66c8bdd694-brv7v\" (UID: \"8dbf17d5-0b7e-492d-b613-a7900d36fad8\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-brv7v" Jan 30 13:19:48 crc kubenswrapper[5039]: I0130 13:19:48.669883 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pm9zq\" (UniqueName: \"kubernetes.io/projected/8dbf17d5-0b7e-492d-b613-a7900d36fad8-kube-api-access-pm9zq\") pod \"cert-manager-operator-controller-manager-66c8bdd694-brv7v\" (UID: \"8dbf17d5-0b7e-492d-b613-a7900d36fad8\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-brv7v" Jan 30 13:19:48 crc kubenswrapper[5039]: I0130 13:19:48.670046 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8dbf17d5-0b7e-492d-b613-a7900d36fad8-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-brv7v\" (UID: \"8dbf17d5-0b7e-492d-b613-a7900d36fad8\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-brv7v" Jan 30 13:19:48 crc kubenswrapper[5039]: I0130 13:19:48.670899 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8dbf17d5-0b7e-492d-b613-a7900d36fad8-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-brv7v\" (UID: \"8dbf17d5-0b7e-492d-b613-a7900d36fad8\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-brv7v" Jan 30 13:19:48 crc kubenswrapper[5039]: I0130 13:19:48.697872 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pm9zq\" (UniqueName: \"kubernetes.io/projected/8dbf17d5-0b7e-492d-b613-a7900d36fad8-kube-api-access-pm9zq\") pod \"cert-manager-operator-controller-manager-66c8bdd694-brv7v\" (UID: \"8dbf17d5-0b7e-492d-b613-a7900d36fad8\") " 
pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-brv7v" Jan 30 13:19:48 crc kubenswrapper[5039]: I0130 13:19:48.773791 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-brv7v" Jan 30 13:19:49 crc kubenswrapper[5039]: I0130 13:19:49.199890 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-brv7v"] Jan 30 13:19:49 crc kubenswrapper[5039]: I0130 13:19:49.922740 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-brv7v" event={"ID":"8dbf17d5-0b7e-492d-b613-a7900d36fad8","Type":"ContainerStarted","Data":"35de2a74a469f39b06a04d26b88d4e6d404194904bf239c842f8049aa157d376"} Jan 30 13:19:52 crc kubenswrapper[5039]: I0130 13:19:52.948064 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-brv7v" event={"ID":"8dbf17d5-0b7e-492d-b613-a7900d36fad8","Type":"ContainerStarted","Data":"e299301fbd9937c279ae2c69038c782d66495a989ea60a34a03ca239d3385be4"} Jan 30 13:19:52 crc kubenswrapper[5039]: I0130 13:19:52.984751 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-brv7v" podStartSLOduration=1.938729588 podStartE2EDuration="4.984728311s" podCreationTimestamp="2026-01-30 13:19:48 +0000 UTC" firstStartedPulling="2026-01-30 13:19:49.204568194 +0000 UTC m=+953.865249421" lastFinishedPulling="2026-01-30 13:19:52.250566917 +0000 UTC m=+956.911248144" observedRunningTime="2026-01-30 13:19:52.983130848 +0000 UTC m=+957.643812125" watchObservedRunningTime="2026-01-30 13:19:52.984728311 +0000 UTC m=+957.645409548" Jan 30 13:19:56 crc kubenswrapper[5039]: I0130 13:19:56.589866 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-hcjvz"] Jan 30 13:19:56 crc kubenswrapper[5039]: I0130 13:19:56.591334 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-hcjvz" Jan 30 13:19:56 crc kubenswrapper[5039]: I0130 13:19:56.594208 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 30 13:19:56 crc kubenswrapper[5039]: I0130 13:19:56.594475 5039 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-62qpr" Jan 30 13:19:56 crc kubenswrapper[5039]: I0130 13:19:56.595160 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 30 13:19:56 crc kubenswrapper[5039]: I0130 13:19:56.605659 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-hcjvz"] Jan 30 13:19:56 crc kubenswrapper[5039]: I0130 13:19:56.676416 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92d68\" (UniqueName: \"kubernetes.io/projected/faf4f279-399b-4958-9a67-3a94b650bd98-kube-api-access-92d68\") pod \"cert-manager-webhook-6888856db4-hcjvz\" (UID: \"faf4f279-399b-4958-9a67-3a94b650bd98\") " pod="cert-manager/cert-manager-webhook-6888856db4-hcjvz" Jan 30 13:19:56 crc kubenswrapper[5039]: I0130 13:19:56.676552 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/faf4f279-399b-4958-9a67-3a94b650bd98-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-hcjvz\" (UID: \"faf4f279-399b-4958-9a67-3a94b650bd98\") " pod="cert-manager/cert-manager-webhook-6888856db4-hcjvz" Jan 30 13:19:56 crc kubenswrapper[5039]: I0130 13:19:56.777469 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92d68\" (UniqueName: \"kubernetes.io/projected/faf4f279-399b-4958-9a67-3a94b650bd98-kube-api-access-92d68\") pod \"cert-manager-webhook-6888856db4-hcjvz\" (UID: \"faf4f279-399b-4958-9a67-3a94b650bd98\") " pod="cert-manager/cert-manager-webhook-6888856db4-hcjvz" Jan 30 13:19:56 crc kubenswrapper[5039]: I0130 13:19:56.777817 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/faf4f279-399b-4958-9a67-3a94b650bd98-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-hcjvz\" (UID: \"faf4f279-399b-4958-9a67-3a94b650bd98\") " pod="cert-manager/cert-manager-webhook-6888856db4-hcjvz" Jan 30 13:19:56 crc kubenswrapper[5039]: I0130 13:19:56.801565 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92d68\" (UniqueName: \"kubernetes.io/projected/faf4f279-399b-4958-9a67-3a94b650bd98-kube-api-access-92d68\") pod \"cert-manager-webhook-6888856db4-hcjvz\" (UID: \"faf4f279-399b-4958-9a67-3a94b650bd98\") " pod="cert-manager/cert-manager-webhook-6888856db4-hcjvz" Jan 30 13:19:56 crc kubenswrapper[5039]: I0130 13:19:56.803212 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/faf4f279-399b-4958-9a67-3a94b650bd98-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-hcjvz\" (UID: \"faf4f279-399b-4958-9a67-3a94b650bd98\") " pod="cert-manager/cert-manager-webhook-6888856db4-hcjvz" Jan 30 13:19:56 crc kubenswrapper[5039]: I0130 13:19:56.905316 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-hcjvz" Jan 30 13:19:57 crc kubenswrapper[5039]: W0130 13:19:57.367384 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfaf4f279_399b_4958_9a67_3a94b650bd98.slice/crio-caaf0b32d6e08d0b72e993e91c1f075fa0fa46ed45d9e9f75ea258eeb8e75ca9 WatchSource:0}: Error finding container caaf0b32d6e08d0b72e993e91c1f075fa0fa46ed45d9e9f75ea258eeb8e75ca9: Status 404 returned error can't find the container with id caaf0b32d6e08d0b72e993e91c1f075fa0fa46ed45d9e9f75ea258eeb8e75ca9 Jan 30 13:19:57 crc kubenswrapper[5039]: I0130 13:19:57.371637 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-hcjvz"] Jan 30 13:19:57 crc kubenswrapper[5039]: I0130 13:19:57.977727 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-hcjvz" event={"ID":"faf4f279-399b-4958-9a67-3a94b650bd98","Type":"ContainerStarted","Data":"caaf0b32d6e08d0b72e993e91c1f075fa0fa46ed45d9e9f75ea258eeb8e75ca9"} Jan 30 13:19:59 crc kubenswrapper[5039]: I0130 13:19:59.238873 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-sthhd"] Jan 30 13:19:59 crc kubenswrapper[5039]: I0130 13:19:59.239839 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-sthhd" Jan 30 13:19:59 crc kubenswrapper[5039]: I0130 13:19:59.241901 5039 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-l7jnc" Jan 30 13:19:59 crc kubenswrapper[5039]: I0130 13:19:59.254464 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-sthhd"] Jan 30 13:19:59 crc kubenswrapper[5039]: I0130 13:19:59.415090 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/99b483cf-ff93-4073-a80d-b5da5ebfd409-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-sthhd\" (UID: \"99b483cf-ff93-4073-a80d-b5da5ebfd409\") " pod="cert-manager/cert-manager-cainjector-5545bd876-sthhd" Jan 30 13:19:59 crc kubenswrapper[5039]: I0130 13:19:59.415176 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqfbw\" (UniqueName: \"kubernetes.io/projected/99b483cf-ff93-4073-a80d-b5da5ebfd409-kube-api-access-zqfbw\") pod \"cert-manager-cainjector-5545bd876-sthhd\" (UID: \"99b483cf-ff93-4073-a80d-b5da5ebfd409\") " pod="cert-manager/cert-manager-cainjector-5545bd876-sthhd" Jan 30 13:19:59 crc kubenswrapper[5039]: I0130 13:19:59.516670 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/99b483cf-ff93-4073-a80d-b5da5ebfd409-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-sthhd\" (UID: \"99b483cf-ff93-4073-a80d-b5da5ebfd409\") " pod="cert-manager/cert-manager-cainjector-5545bd876-sthhd" Jan 30 13:19:59 crc kubenswrapper[5039]: I0130 13:19:59.516725 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqfbw\" (UniqueName: \"kubernetes.io/projected/99b483cf-ff93-4073-a80d-b5da5ebfd409-kube-api-access-zqfbw\") pod \"cert-manager-cainjector-5545bd876-sthhd\" (UID: \"99b483cf-ff93-4073-a80d-b5da5ebfd409\") " 
pod="cert-manager/cert-manager-cainjector-5545bd876-sthhd" Jan 30 13:19:59 crc kubenswrapper[5039]: I0130 13:19:59.534118 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqfbw\" (UniqueName: \"kubernetes.io/projected/99b483cf-ff93-4073-a80d-b5da5ebfd409-kube-api-access-zqfbw\") pod \"cert-manager-cainjector-5545bd876-sthhd\" (UID: \"99b483cf-ff93-4073-a80d-b5da5ebfd409\") " pod="cert-manager/cert-manager-cainjector-5545bd876-sthhd" Jan 30 13:19:59 crc kubenswrapper[5039]: I0130 13:19:59.543612 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/99b483cf-ff93-4073-a80d-b5da5ebfd409-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-sthhd\" (UID: \"99b483cf-ff93-4073-a80d-b5da5ebfd409\") " pod="cert-manager/cert-manager-cainjector-5545bd876-sthhd" Jan 30 13:19:59 crc kubenswrapper[5039]: I0130 13:19:59.563362 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-sthhd" Jan 30 13:20:00 crc kubenswrapper[5039]: I0130 13:20:00.004387 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-sthhd"] Jan 30 13:20:01 crc kubenswrapper[5039]: I0130 13:20:01.000712 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-sthhd" event={"ID":"99b483cf-ff93-4073-a80d-b5da5ebfd409","Type":"ContainerStarted","Data":"f4915017309582c8103906dab2cf53e9776201aa04908468c8e53bdcceb3e22d"} Jan 30 13:20:06 crc kubenswrapper[5039]: I0130 13:20:06.035937 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-hcjvz" event={"ID":"faf4f279-399b-4958-9a67-3a94b650bd98","Type":"ContainerStarted","Data":"43948ccabfb3a4dc73f7b36389ca3b39ca6348f70eac7fd6a78d7859846ff289"} Jan 30 13:20:06 crc kubenswrapper[5039]: I0130 13:20:06.037373 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-hcjvz" Jan 30 13:20:06 crc kubenswrapper[5039]: I0130 13:20:06.038093 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-sthhd" event={"ID":"99b483cf-ff93-4073-a80d-b5da5ebfd409","Type":"ContainerStarted","Data":"8396edd1aa13df419263727cad71de9bb5624ff7e097cea02a16d6bf5fad48bc"} Jan 30 13:20:06 crc kubenswrapper[5039]: I0130 13:20:06.057820 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-hcjvz" podStartSLOduration=1.715604469 podStartE2EDuration="10.057801278s" podCreationTimestamp="2026-01-30 13:19:56 +0000 UTC" firstStartedPulling="2026-01-30 13:19:57.373579114 +0000 UTC m=+962.034260341" lastFinishedPulling="2026-01-30 13:20:05.715775913 +0000 UTC m=+970.376457150" observedRunningTime="2026-01-30 13:20:06.053757179 +0000 UTC m=+970.714438416" watchObservedRunningTime="2026-01-30 13:20:06.057801278 +0000 UTC m=+970.718482515" Jan 30 13:20:06 crc kubenswrapper[5039]: I0130 13:20:06.085197 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-sthhd" podStartSLOduration=1.389186391 podStartE2EDuration="7.085172213s" podCreationTimestamp="2026-01-30 13:19:59 +0000 UTC" firstStartedPulling="2026-01-30 13:20:00.017746736 +0000 UTC m=+964.678427993" lastFinishedPulling="2026-01-30 13:20:05.713732578 +0000 UTC m=+970.374413815" 
observedRunningTime="2026-01-30 13:20:06.075979376 +0000 UTC m=+970.736660613" watchObservedRunningTime="2026-01-30 13:20:06.085172213 +0000 UTC m=+970.745853460" Jan 30 13:20:11 crc kubenswrapper[5039]: I0130 13:20:11.908596 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-hcjvz" Jan 30 13:20:15 crc kubenswrapper[5039]: I0130 13:20:15.416073 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-r4tn9"] Jan 30 13:20:15 crc kubenswrapper[5039]: I0130 13:20:15.418112 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-r4tn9" Jan 30 13:20:15 crc kubenswrapper[5039]: I0130 13:20:15.420869 5039 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-5xf6n" Jan 30 13:20:15 crc kubenswrapper[5039]: I0130 13:20:15.425686 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-r4tn9"] Jan 30 13:20:15 crc kubenswrapper[5039]: I0130 13:20:15.451306 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdfj2\" (UniqueName: \"kubernetes.io/projected/2ec608ca-f1e5-4db3-9c30-c4eda5016097-kube-api-access-gdfj2\") pod \"cert-manager-545d4d4674-r4tn9\" (UID: \"2ec608ca-f1e5-4db3-9c30-c4eda5016097\") " pod="cert-manager/cert-manager-545d4d4674-r4tn9" Jan 30 13:20:15 crc kubenswrapper[5039]: I0130 13:20:15.451493 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2ec608ca-f1e5-4db3-9c30-c4eda5016097-bound-sa-token\") pod \"cert-manager-545d4d4674-r4tn9\" (UID: \"2ec608ca-f1e5-4db3-9c30-c4eda5016097\") " pod="cert-manager/cert-manager-545d4d4674-r4tn9" Jan 30 13:20:15 crc kubenswrapper[5039]: I0130 13:20:15.553244 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2ec608ca-f1e5-4db3-9c30-c4eda5016097-bound-sa-token\") pod \"cert-manager-545d4d4674-r4tn9\" (UID: \"2ec608ca-f1e5-4db3-9c30-c4eda5016097\") " pod="cert-manager/cert-manager-545d4d4674-r4tn9" Jan 30 13:20:15 crc kubenswrapper[5039]: I0130 13:20:15.553334 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdfj2\" (UniqueName: \"kubernetes.io/projected/2ec608ca-f1e5-4db3-9c30-c4eda5016097-kube-api-access-gdfj2\") pod \"cert-manager-545d4d4674-r4tn9\" (UID: \"2ec608ca-f1e5-4db3-9c30-c4eda5016097\") " pod="cert-manager/cert-manager-545d4d4674-r4tn9" Jan 30 13:20:15 crc kubenswrapper[5039]: I0130 13:20:15.581788 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdfj2\" (UniqueName: \"kubernetes.io/projected/2ec608ca-f1e5-4db3-9c30-c4eda5016097-kube-api-access-gdfj2\") pod \"cert-manager-545d4d4674-r4tn9\" (UID: \"2ec608ca-f1e5-4db3-9c30-c4eda5016097\") " pod="cert-manager/cert-manager-545d4d4674-r4tn9" Jan 30 13:20:15 crc kubenswrapper[5039]: I0130 13:20:15.583353 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2ec608ca-f1e5-4db3-9c30-c4eda5016097-bound-sa-token\") pod \"cert-manager-545d4d4674-r4tn9\" (UID: \"2ec608ca-f1e5-4db3-9c30-c4eda5016097\") " pod="cert-manager/cert-manager-545d4d4674-r4tn9" Jan 30 13:20:15 crc kubenswrapper[5039]: I0130 13:20:15.741323 5039 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-r4tn9" Jan 30 13:20:16 crc kubenswrapper[5039]: I0130 13:20:16.179913 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-r4tn9"] Jan 30 13:20:17 crc kubenswrapper[5039]: I0130 13:20:17.113224 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-r4tn9" event={"ID":"2ec608ca-f1e5-4db3-9c30-c4eda5016097","Type":"ContainerStarted","Data":"36840d6badbc8b7122c8718401e1e7625ab05066be2c5025fe3b88f610d3df8d"} Jan 30 13:20:17 crc kubenswrapper[5039]: I0130 13:20:17.113574 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-r4tn9" event={"ID":"2ec608ca-f1e5-4db3-9c30-c4eda5016097","Type":"ContainerStarted","Data":"80c0ed134e17a3c94e11db6c4a378aaf8de0f29b45ef68dc22f80dd89d5c21c2"} Jan 30 13:20:17 crc kubenswrapper[5039]: I0130 13:20:17.133344 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-545d4d4674-r4tn9" podStartSLOduration=2.133326125 podStartE2EDuration="2.133326125s" podCreationTimestamp="2026-01-30 13:20:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:20:17.127974381 +0000 UTC m=+981.788655628" watchObservedRunningTime="2026-01-30 13:20:17.133326125 +0000 UTC m=+981.794007352" Jan 30 13:20:24 crc kubenswrapper[5039]: I0130 13:20:24.765069 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ffqhl"] Jan 30 13:20:24 crc kubenswrapper[5039]: I0130 13:20:24.767742 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ffqhl" Jan 30 13:20:24 crc kubenswrapper[5039]: I0130 13:20:24.791513 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ffqhl"] Jan 30 13:20:24 crc kubenswrapper[5039]: I0130 13:20:24.841938 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f37cdf31-440f-4f86-a022-ba3e635cc7c4-utilities\") pod \"community-operators-ffqhl\" (UID: \"f37cdf31-440f-4f86-a022-ba3e635cc7c4\") " pod="openshift-marketplace/community-operators-ffqhl" Jan 30 13:20:24 crc kubenswrapper[5039]: I0130 13:20:24.842156 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njwbh\" (UniqueName: \"kubernetes.io/projected/f37cdf31-440f-4f86-a022-ba3e635cc7c4-kube-api-access-njwbh\") pod \"community-operators-ffqhl\" (UID: \"f37cdf31-440f-4f86-a022-ba3e635cc7c4\") " pod="openshift-marketplace/community-operators-ffqhl" Jan 30 13:20:24 crc kubenswrapper[5039]: I0130 13:20:24.842243 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f37cdf31-440f-4f86-a022-ba3e635cc7c4-catalog-content\") pod \"community-operators-ffqhl\" (UID: \"f37cdf31-440f-4f86-a022-ba3e635cc7c4\") " pod="openshift-marketplace/community-operators-ffqhl" Jan 30 13:20:24 crc kubenswrapper[5039]: I0130 13:20:24.945436 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njwbh\" (UniqueName: \"kubernetes.io/projected/f37cdf31-440f-4f86-a022-ba3e635cc7c4-kube-api-access-njwbh\") pod \"community-operators-ffqhl\" (UID: \"f37cdf31-440f-4f86-a022-ba3e635cc7c4\") " pod="openshift-marketplace/community-operators-ffqhl" Jan 30 13:20:24 crc kubenswrapper[5039]: I0130 13:20:24.945500 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f37cdf31-440f-4f86-a022-ba3e635cc7c4-catalog-content\") pod \"community-operators-ffqhl\" (UID: \"f37cdf31-440f-4f86-a022-ba3e635cc7c4\") " pod="openshift-marketplace/community-operators-ffqhl" Jan 30 13:20:24 crc kubenswrapper[5039]: I0130 13:20:24.945545 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f37cdf31-440f-4f86-a022-ba3e635cc7c4-utilities\") pod \"community-operators-ffqhl\" (UID: \"f37cdf31-440f-4f86-a022-ba3e635cc7c4\") " pod="openshift-marketplace/community-operators-ffqhl" Jan 30 13:20:24 crc kubenswrapper[5039]: I0130 13:20:24.946157 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f37cdf31-440f-4f86-a022-ba3e635cc7c4-utilities\") pod \"community-operators-ffqhl\" (UID: \"f37cdf31-440f-4f86-a022-ba3e635cc7c4\") " pod="openshift-marketplace/community-operators-ffqhl" Jan 30 13:20:24 crc kubenswrapper[5039]: I0130 13:20:24.946247 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f37cdf31-440f-4f86-a022-ba3e635cc7c4-catalog-content\") pod \"community-operators-ffqhl\" (UID: \"f37cdf31-440f-4f86-a022-ba3e635cc7c4\") " pod="openshift-marketplace/community-operators-ffqhl" Jan 30 13:20:24 crc kubenswrapper[5039]: I0130 13:20:24.982877 5039 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-njwbh\" (UniqueName: \"kubernetes.io/projected/f37cdf31-440f-4f86-a022-ba3e635cc7c4-kube-api-access-njwbh\") pod \"community-operators-ffqhl\" (UID: \"f37cdf31-440f-4f86-a022-ba3e635cc7c4\") " pod="openshift-marketplace/community-operators-ffqhl" Jan 30 13:20:25 crc kubenswrapper[5039]: I0130 13:20:25.091755 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ffqhl" Jan 30 13:20:25 crc kubenswrapper[5039]: I0130 13:20:25.542947 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ffqhl"] Jan 30 13:20:26 crc kubenswrapper[5039]: I0130 13:20:26.365262 5039 generic.go:334] "Generic (PLEG): container finished" podID="f37cdf31-440f-4f86-a022-ba3e635cc7c4" containerID="286e7532bcb8f94af753c0ab4be17c359fa9eb27c0f3a3159d25e7ceea0344ea" exitCode=0 Jan 30 13:20:26 crc kubenswrapper[5039]: I0130 13:20:26.365380 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ffqhl" event={"ID":"f37cdf31-440f-4f86-a022-ba3e635cc7c4","Type":"ContainerDied","Data":"286e7532bcb8f94af753c0ab4be17c359fa9eb27c0f3a3159d25e7ceea0344ea"} Jan 30 13:20:26 crc kubenswrapper[5039]: I0130 13:20:26.365548 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ffqhl" event={"ID":"f37cdf31-440f-4f86-a022-ba3e635cc7c4","Type":"ContainerStarted","Data":"697f98cb1d0856ddbf8fb8218d59ab1ff83628e4f4bf489087cadd43f7d1baf0"} Jan 30 13:20:28 crc kubenswrapper[5039]: I0130 13:20:28.382740 5039 generic.go:334] "Generic (PLEG): container finished" podID="f37cdf31-440f-4f86-a022-ba3e635cc7c4" containerID="e0b21b1519f3a75ae3324d553a631cd02f95cccb3a2414678409820ff9cd332b" exitCode=0 Jan 30 13:20:28 crc kubenswrapper[5039]: I0130 13:20:28.382945 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ffqhl" event={"ID":"f37cdf31-440f-4f86-a022-ba3e635cc7c4","Type":"ContainerDied","Data":"e0b21b1519f3a75ae3324d553a631cd02f95cccb3a2414678409820ff9cd332b"} Jan 30 13:20:28 crc kubenswrapper[5039]: I0130 13:20:28.741678 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-np244"] Jan 30 13:20:28 crc kubenswrapper[5039]: I0130 13:20:28.742529 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-np244" Jan 30 13:20:28 crc kubenswrapper[5039]: I0130 13:20:28.745518 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 30 13:20:28 crc kubenswrapper[5039]: I0130 13:20:28.745953 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-hjs5x" Jan 30 13:20:28 crc kubenswrapper[5039]: I0130 13:20:28.747773 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 30 13:20:28 crc kubenswrapper[5039]: I0130 13:20:28.762313 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-np244"] Jan 30 13:20:28 crc kubenswrapper[5039]: I0130 13:20:28.903591 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twpvk\" (UniqueName: \"kubernetes.io/projected/9fc67884-3169-4fc2-98e9-1a3a274f9f02-kube-api-access-twpvk\") pod \"openstack-operator-index-np244\" (UID: \"9fc67884-3169-4fc2-98e9-1a3a274f9f02\") " pod="openstack-operators/openstack-operator-index-np244" Jan 30 13:20:29 crc kubenswrapper[5039]: I0130 13:20:29.005104 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twpvk\" (UniqueName: \"kubernetes.io/projected/9fc67884-3169-4fc2-98e9-1a3a274f9f02-kube-api-access-twpvk\") pod \"openstack-operator-index-np244\" (UID: \"9fc67884-3169-4fc2-98e9-1a3a274f9f02\") " pod="openstack-operators/openstack-operator-index-np244" Jan 30 13:20:29 crc kubenswrapper[5039]: I0130 13:20:29.022311 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twpvk\" (UniqueName: \"kubernetes.io/projected/9fc67884-3169-4fc2-98e9-1a3a274f9f02-kube-api-access-twpvk\") pod \"openstack-operator-index-np244\" (UID: \"9fc67884-3169-4fc2-98e9-1a3a274f9f02\") " pod="openstack-operators/openstack-operator-index-np244" Jan 30 13:20:29 crc kubenswrapper[5039]: I0130 13:20:29.066776 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-np244" Jan 30 13:20:29 crc kubenswrapper[5039]: I0130 13:20:29.789828 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-np244"] Jan 30 13:20:29 crc kubenswrapper[5039]: W0130 13:20:29.800214 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9fc67884_3169_4fc2_98e9_1a3a274f9f02.slice/crio-774c38b9dcda489a7e3faf8cefa67f0927e67fd5d06a160537b283debc59c730 WatchSource:0}: Error finding container 774c38b9dcda489a7e3faf8cefa67f0927e67fd5d06a160537b283debc59c730: Status 404 returned error can't find the container with id 774c38b9dcda489a7e3faf8cefa67f0927e67fd5d06a160537b283debc59c730 Jan 30 13:20:30 crc kubenswrapper[5039]: I0130 13:20:30.408615 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-np244" event={"ID":"9fc67884-3169-4fc2-98e9-1a3a274f9f02","Type":"ContainerStarted","Data":"774c38b9dcda489a7e3faf8cefa67f0927e67fd5d06a160537b283debc59c730"} Jan 30 13:20:30 crc kubenswrapper[5039]: I0130 13:20:30.410795 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ffqhl" event={"ID":"f37cdf31-440f-4f86-a022-ba3e635cc7c4","Type":"ContainerStarted","Data":"a99a2641c5ff80b0c5a32d12bba53caa1a1cce93ab000cff9e900cf6f9c3e279"} Jan 30 13:20:30 crc kubenswrapper[5039]: I0130 13:20:30.433788 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ffqhl" podStartSLOduration=3.473587768 podStartE2EDuration="6.433754908s" podCreationTimestamp="2026-01-30 13:20:24 +0000 UTC" firstStartedPulling="2026-01-30 13:20:26.367951689 +0000 UTC m=+991.028632916" lastFinishedPulling="2026-01-30 13:20:29.328118819 +0000 UTC m=+993.988800056" observedRunningTime="2026-01-30 13:20:30.429437842 +0000 UTC m=+995.090119119" watchObservedRunningTime="2026-01-30 13:20:30.433754908 +0000 UTC m=+995.094436175" Jan 30 13:20:34 crc kubenswrapper[5039]: I0130 13:20:34.444373 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-np244" event={"ID":"9fc67884-3169-4fc2-98e9-1a3a274f9f02","Type":"ContainerStarted","Data":"6962e290d5aecca03e9bbae562b705e0a83aab999422fc7219cd2cc17859742f"} Jan 30 13:20:34 crc kubenswrapper[5039]: I0130 13:20:34.460374 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-np244" podStartSLOduration=2.695373293 podStartE2EDuration="6.460356573s" podCreationTimestamp="2026-01-30 13:20:28 +0000 UTC" firstStartedPulling="2026-01-30 13:20:29.802005413 +0000 UTC m=+994.462686640" lastFinishedPulling="2026-01-30 13:20:33.566988693 +0000 UTC m=+998.227669920" observedRunningTime="2026-01-30 13:20:34.458239556 +0000 UTC m=+999.118920803" watchObservedRunningTime="2026-01-30 13:20:34.460356573 +0000 UTC m=+999.121037800" Jan 30 13:20:35 crc kubenswrapper[5039]: I0130 13:20:35.092214 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ffqhl" Jan 30 13:20:35 crc kubenswrapper[5039]: I0130 13:20:35.092263 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ffqhl" Jan 30 13:20:35 crc kubenswrapper[5039]: I0130 13:20:35.137980 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/community-operators-ffqhl" Jan 30 13:20:35 crc kubenswrapper[5039]: I0130 13:20:35.484139 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ffqhl" Jan 30 13:20:37 crc kubenswrapper[5039]: I0130 13:20:37.548090 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-28b82"] Jan 30 13:20:37 crc kubenswrapper[5039]: I0130 13:20:37.549195 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-28b82" Jan 30 13:20:37 crc kubenswrapper[5039]: I0130 13:20:37.567171 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-28b82"] Jan 30 13:20:37 crc kubenswrapper[5039]: I0130 13:20:37.729413 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/936b34c4-5842-460b-bf36-a3ce510ab879-utilities\") pod \"certified-operators-28b82\" (UID: \"936b34c4-5842-460b-bf36-a3ce510ab879\") " pod="openshift-marketplace/certified-operators-28b82" Jan 30 13:20:37 crc kubenswrapper[5039]: I0130 13:20:37.729719 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/936b34c4-5842-460b-bf36-a3ce510ab879-catalog-content\") pod \"certified-operators-28b82\" (UID: \"936b34c4-5842-460b-bf36-a3ce510ab879\") " pod="openshift-marketplace/certified-operators-28b82" Jan 30 13:20:37 crc kubenswrapper[5039]: I0130 13:20:37.729756 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kswj\" (UniqueName: \"kubernetes.io/projected/936b34c4-5842-460b-bf36-a3ce510ab879-kube-api-access-4kswj\") pod \"certified-operators-28b82\" (UID: \"936b34c4-5842-460b-bf36-a3ce510ab879\") " pod="openshift-marketplace/certified-operators-28b82" Jan 30 13:20:37 crc kubenswrapper[5039]: I0130 13:20:37.742863 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:20:37 crc kubenswrapper[5039]: I0130 13:20:37.742958 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:20:37 crc kubenswrapper[5039]: I0130 13:20:37.830582 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/936b34c4-5842-460b-bf36-a3ce510ab879-utilities\") pod \"certified-operators-28b82\" (UID: \"936b34c4-5842-460b-bf36-a3ce510ab879\") " pod="openshift-marketplace/certified-operators-28b82" Jan 30 13:20:37 crc kubenswrapper[5039]: I0130 13:20:37.830638 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/936b34c4-5842-460b-bf36-a3ce510ab879-catalog-content\") pod \"certified-operators-28b82\" (UID: \"936b34c4-5842-460b-bf36-a3ce510ab879\") " 
pod="openshift-marketplace/certified-operators-28b82" Jan 30 13:20:37 crc kubenswrapper[5039]: I0130 13:20:37.830684 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kswj\" (UniqueName: \"kubernetes.io/projected/936b34c4-5842-460b-bf36-a3ce510ab879-kube-api-access-4kswj\") pod \"certified-operators-28b82\" (UID: \"936b34c4-5842-460b-bf36-a3ce510ab879\") " pod="openshift-marketplace/certified-operators-28b82" Jan 30 13:20:37 crc kubenswrapper[5039]: I0130 13:20:37.831186 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/936b34c4-5842-460b-bf36-a3ce510ab879-catalog-content\") pod \"certified-operators-28b82\" (UID: \"936b34c4-5842-460b-bf36-a3ce510ab879\") " pod="openshift-marketplace/certified-operators-28b82" Jan 30 13:20:37 crc kubenswrapper[5039]: I0130 13:20:37.831410 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/936b34c4-5842-460b-bf36-a3ce510ab879-utilities\") pod \"certified-operators-28b82\" (UID: \"936b34c4-5842-460b-bf36-a3ce510ab879\") " pod="openshift-marketplace/certified-operators-28b82" Jan 30 13:20:37 crc kubenswrapper[5039]: I0130 13:20:37.853174 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kswj\" (UniqueName: \"kubernetes.io/projected/936b34c4-5842-460b-bf36-a3ce510ab879-kube-api-access-4kswj\") pod \"certified-operators-28b82\" (UID: \"936b34c4-5842-460b-bf36-a3ce510ab879\") " pod="openshift-marketplace/certified-operators-28b82" Jan 30 13:20:37 crc kubenswrapper[5039]: I0130 13:20:37.902543 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-28b82" Jan 30 13:20:38 crc kubenswrapper[5039]: I0130 13:20:38.353218 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-28b82"] Jan 30 13:20:38 crc kubenswrapper[5039]: I0130 13:20:38.470886 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-28b82" event={"ID":"936b34c4-5842-460b-bf36-a3ce510ab879","Type":"ContainerStarted","Data":"4cb98fe14a48c09e84a7de456f5afe1b6eff3162b8374486d55b596238fcd728"} Jan 30 13:20:39 crc kubenswrapper[5039]: I0130 13:20:39.067073 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-np244" Jan 30 13:20:39 crc kubenswrapper[5039]: I0130 13:20:39.068100 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-np244" Jan 30 13:20:39 crc kubenswrapper[5039]: I0130 13:20:39.097509 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-np244" Jan 30 13:20:39 crc kubenswrapper[5039]: I0130 13:20:39.479862 5039 generic.go:334] "Generic (PLEG): container finished" podID="936b34c4-5842-460b-bf36-a3ce510ab879" containerID="6cbd0839f4740c365048a44a3ebac97283040dab34481099066e1ebc2bc9d165" exitCode=0 Jan 30 13:20:39 crc kubenswrapper[5039]: I0130 13:20:39.479979 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-28b82" event={"ID":"936b34c4-5842-460b-bf36-a3ce510ab879","Type":"ContainerDied","Data":"6cbd0839f4740c365048a44a3ebac97283040dab34481099066e1ebc2bc9d165"} Jan 30 13:20:39 crc kubenswrapper[5039]: I0130 13:20:39.510552 5039 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-np244" Jan 30 13:20:39 crc kubenswrapper[5039]: I0130 13:20:39.930144 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ffqhl"] Jan 30 13:20:39 crc kubenswrapper[5039]: I0130 13:20:39.930366 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ffqhl" podUID="f37cdf31-440f-4f86-a022-ba3e635cc7c4" containerName="registry-server" containerID="cri-o://a99a2641c5ff80b0c5a32d12bba53caa1a1cce93ab000cff9e900cf6f9c3e279" gracePeriod=2 Jan 30 13:20:41 crc kubenswrapper[5039]: I0130 13:20:41.501036 5039 generic.go:334] "Generic (PLEG): container finished" podID="f37cdf31-440f-4f86-a022-ba3e635cc7c4" containerID="a99a2641c5ff80b0c5a32d12bba53caa1a1cce93ab000cff9e900cf6f9c3e279" exitCode=0 Jan 30 13:20:41 crc kubenswrapper[5039]: I0130 13:20:41.501204 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ffqhl" event={"ID":"f37cdf31-440f-4f86-a022-ba3e635cc7c4","Type":"ContainerDied","Data":"a99a2641c5ff80b0c5a32d12bba53caa1a1cce93ab000cff9e900cf6f9c3e279"} Jan 30 13:20:41 crc kubenswrapper[5039]: I0130 13:20:41.774394 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ffqhl" Jan 30 13:20:41 crc kubenswrapper[5039]: I0130 13:20:41.908978 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njwbh\" (UniqueName: \"kubernetes.io/projected/f37cdf31-440f-4f86-a022-ba3e635cc7c4-kube-api-access-njwbh\") pod \"f37cdf31-440f-4f86-a022-ba3e635cc7c4\" (UID: \"f37cdf31-440f-4f86-a022-ba3e635cc7c4\") " Jan 30 13:20:41 crc kubenswrapper[5039]: I0130 13:20:41.909168 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f37cdf31-440f-4f86-a022-ba3e635cc7c4-catalog-content\") pod \"f37cdf31-440f-4f86-a022-ba3e635cc7c4\" (UID: \"f37cdf31-440f-4f86-a022-ba3e635cc7c4\") " Jan 30 13:20:41 crc kubenswrapper[5039]: I0130 13:20:41.909275 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f37cdf31-440f-4f86-a022-ba3e635cc7c4-utilities\") pod \"f37cdf31-440f-4f86-a022-ba3e635cc7c4\" (UID: \"f37cdf31-440f-4f86-a022-ba3e635cc7c4\") " Jan 30 13:20:41 crc kubenswrapper[5039]: I0130 13:20:41.910429 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f37cdf31-440f-4f86-a022-ba3e635cc7c4-utilities" (OuterVolumeSpecName: "utilities") pod "f37cdf31-440f-4f86-a022-ba3e635cc7c4" (UID: "f37cdf31-440f-4f86-a022-ba3e635cc7c4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:20:41 crc kubenswrapper[5039]: I0130 13:20:41.916977 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f37cdf31-440f-4f86-a022-ba3e635cc7c4-kube-api-access-njwbh" (OuterVolumeSpecName: "kube-api-access-njwbh") pod "f37cdf31-440f-4f86-a022-ba3e635cc7c4" (UID: "f37cdf31-440f-4f86-a022-ba3e635cc7c4"). InnerVolumeSpecName "kube-api-access-njwbh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:20:42 crc kubenswrapper[5039]: I0130 13:20:42.010854 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f37cdf31-440f-4f86-a022-ba3e635cc7c4-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 13:20:42 crc kubenswrapper[5039]: I0130 13:20:42.010895 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njwbh\" (UniqueName: \"kubernetes.io/projected/f37cdf31-440f-4f86-a022-ba3e635cc7c4-kube-api-access-njwbh\") on node \"crc\" DevicePath \"\"" Jan 30 13:20:42 crc kubenswrapper[5039]: I0130 13:20:42.182431 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c"] Jan 30 13:20:42 crc kubenswrapper[5039]: E0130 13:20:42.182791 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f37cdf31-440f-4f86-a022-ba3e635cc7c4" containerName="extract-utilities" Jan 30 13:20:42 crc kubenswrapper[5039]: I0130 13:20:42.182819 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="f37cdf31-440f-4f86-a022-ba3e635cc7c4" containerName="extract-utilities" Jan 30 13:20:42 crc kubenswrapper[5039]: E0130 13:20:42.182850 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f37cdf31-440f-4f86-a022-ba3e635cc7c4" containerName="registry-server" Jan 30 13:20:42 crc kubenswrapper[5039]: I0130 13:20:42.182863 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="f37cdf31-440f-4f86-a022-ba3e635cc7c4" containerName="registry-server" Jan 30 13:20:42 crc kubenswrapper[5039]: E0130 13:20:42.182891 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f37cdf31-440f-4f86-a022-ba3e635cc7c4" containerName="extract-content" Jan 30 13:20:42 crc kubenswrapper[5039]: I0130 13:20:42.182905 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="f37cdf31-440f-4f86-a022-ba3e635cc7c4" containerName="extract-content" Jan 30 13:20:42 crc kubenswrapper[5039]: I0130 13:20:42.183158 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="f37cdf31-440f-4f86-a022-ba3e635cc7c4" containerName="registry-server" Jan 30 13:20:42 crc kubenswrapper[5039]: I0130 13:20:42.184550 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c" Jan 30 13:20:42 crc kubenswrapper[5039]: I0130 13:20:42.193574 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-cznvv" Jan 30 13:20:42 crc kubenswrapper[5039]: I0130 13:20:42.201733 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c"] Jan 30 13:20:42 crc kubenswrapper[5039]: I0130 13:20:42.216502 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f37cdf31-440f-4f86-a022-ba3e635cc7c4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f37cdf31-440f-4f86-a022-ba3e635cc7c4" (UID: "f37cdf31-440f-4f86-a022-ba3e635cc7c4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:20:42 crc kubenswrapper[5039]: I0130 13:20:42.223573 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f37cdf31-440f-4f86-a022-ba3e635cc7c4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:20:42 crc kubenswrapper[5039]: I0130 13:20:42.325257 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bb4062e1-3451-42b4-aaed-3dee60006639-util\") pod \"c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c\" (UID: \"bb4062e1-3451-42b4-aaed-3dee60006639\") " pod="openstack-operators/c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c" Jan 30 13:20:42 crc kubenswrapper[5039]: I0130 13:20:42.325635 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hs2z9\" (UniqueName: \"kubernetes.io/projected/bb4062e1-3451-42b4-aaed-3dee60006639-kube-api-access-hs2z9\") pod \"c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c\" (UID: \"bb4062e1-3451-42b4-aaed-3dee60006639\") " pod="openstack-operators/c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c" Jan 30 13:20:42 crc kubenswrapper[5039]: I0130 13:20:42.325697 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bb4062e1-3451-42b4-aaed-3dee60006639-bundle\") pod \"c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c\" (UID: \"bb4062e1-3451-42b4-aaed-3dee60006639\") " pod="openstack-operators/c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c" Jan 30 13:20:42 crc kubenswrapper[5039]: I0130 13:20:42.426726 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bb4062e1-3451-42b4-aaed-3dee60006639-util\") pod \"c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c\" (UID: \"bb4062e1-3451-42b4-aaed-3dee60006639\") " pod="openstack-operators/c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c" Jan 30 13:20:42 crc kubenswrapper[5039]: I0130 13:20:42.426792 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hs2z9\" (UniqueName: \"kubernetes.io/projected/bb4062e1-3451-42b4-aaed-3dee60006639-kube-api-access-hs2z9\") pod \"c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c\" (UID: \"bb4062e1-3451-42b4-aaed-3dee60006639\") " pod="openstack-operators/c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c" Jan 30 13:20:42 crc kubenswrapper[5039]: I0130 13:20:42.426863 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bb4062e1-3451-42b4-aaed-3dee60006639-bundle\") pod \"c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c\" (UID: \"bb4062e1-3451-42b4-aaed-3dee60006639\") " pod="openstack-operators/c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c" Jan 30 13:20:42 crc kubenswrapper[5039]: I0130 13:20:42.427548 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bb4062e1-3451-42b4-aaed-3dee60006639-util\") pod \"c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c\" (UID: \"bb4062e1-3451-42b4-aaed-3dee60006639\") " 
pod="openstack-operators/c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c" Jan 30 13:20:42 crc kubenswrapper[5039]: I0130 13:20:42.427563 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bb4062e1-3451-42b4-aaed-3dee60006639-bundle\") pod \"c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c\" (UID: \"bb4062e1-3451-42b4-aaed-3dee60006639\") " pod="openstack-operators/c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c" Jan 30 13:20:42 crc kubenswrapper[5039]: I0130 13:20:42.454218 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hs2z9\" (UniqueName: \"kubernetes.io/projected/bb4062e1-3451-42b4-aaed-3dee60006639-kube-api-access-hs2z9\") pod \"c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c\" (UID: \"bb4062e1-3451-42b4-aaed-3dee60006639\") " pod="openstack-operators/c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c" Jan 30 13:20:42 crc kubenswrapper[5039]: I0130 13:20:42.502979 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c" Jan 30 13:20:42 crc kubenswrapper[5039]: I0130 13:20:42.513496 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ffqhl" event={"ID":"f37cdf31-440f-4f86-a022-ba3e635cc7c4","Type":"ContainerDied","Data":"697f98cb1d0856ddbf8fb8218d59ab1ff83628e4f4bf489087cadd43f7d1baf0"} Jan 30 13:20:42 crc kubenswrapper[5039]: I0130 13:20:42.513562 5039 scope.go:117] "RemoveContainer" containerID="a99a2641c5ff80b0c5a32d12bba53caa1a1cce93ab000cff9e900cf6f9c3e279" Jan 30 13:20:42 crc kubenswrapper[5039]: I0130 13:20:42.513638 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ffqhl" Jan 30 13:20:42 crc kubenswrapper[5039]: I0130 13:20:42.564502 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ffqhl"] Jan 30 13:20:42 crc kubenswrapper[5039]: I0130 13:20:42.570151 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ffqhl"] Jan 30 13:20:42 crc kubenswrapper[5039]: I0130 13:20:42.574032 5039 scope.go:117] "RemoveContainer" containerID="e0b21b1519f3a75ae3324d553a631cd02f95cccb3a2414678409820ff9cd332b" Jan 30 13:20:42 crc kubenswrapper[5039]: I0130 13:20:42.885086 5039 scope.go:117] "RemoveContainer" containerID="286e7532bcb8f94af753c0ab4be17c359fa9eb27c0f3a3159d25e7ceea0344ea" Jan 30 13:20:43 crc kubenswrapper[5039]: I0130 13:20:43.535825 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-28b82" event={"ID":"936b34c4-5842-460b-bf36-a3ce510ab879","Type":"ContainerStarted","Data":"dca8b59e888c1f23385c29934aff3feecb8519ab382a57d3e516934f31836467"} Jan 30 13:20:43 crc kubenswrapper[5039]: I0130 13:20:43.812527 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c"] Jan 30 13:20:43 crc kubenswrapper[5039]: W0130 13:20:43.869313 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb4062e1_3451_42b4_aaed_3dee60006639.slice/crio-18561cd931576acd4bf927f1f755f2b2ab60297a5cd2d21a01521433588cddf2 WatchSource:0}: Error finding container 18561cd931576acd4bf927f1f755f2b2ab60297a5cd2d21a01521433588cddf2: Status 404 returned error can't find the container with id 18561cd931576acd4bf927f1f755f2b2ab60297a5cd2d21a01521433588cddf2 Jan 30 13:20:44 crc kubenswrapper[5039]: I0130 13:20:44.104477 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f37cdf31-440f-4f86-a022-ba3e635cc7c4" path="/var/lib/kubelet/pods/f37cdf31-440f-4f86-a022-ba3e635cc7c4/volumes" Jan 30 13:20:44 crc kubenswrapper[5039]: I0130 13:20:44.542543 5039 generic.go:334] "Generic (PLEG): container finished" podID="bb4062e1-3451-42b4-aaed-3dee60006639" containerID="fb603bcc98834c14462f63a27c324ed39597a4342791fb2421b78425ef89601e" exitCode=0 Jan 30 13:20:44 crc kubenswrapper[5039]: I0130 13:20:44.543161 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c" event={"ID":"bb4062e1-3451-42b4-aaed-3dee60006639","Type":"ContainerDied","Data":"fb603bcc98834c14462f63a27c324ed39597a4342791fb2421b78425ef89601e"} Jan 30 13:20:44 crc kubenswrapper[5039]: I0130 13:20:44.543199 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c" event={"ID":"bb4062e1-3451-42b4-aaed-3dee60006639","Type":"ContainerStarted","Data":"18561cd931576acd4bf927f1f755f2b2ab60297a5cd2d21a01521433588cddf2"} Jan 30 13:20:44 crc kubenswrapper[5039]: I0130 13:20:44.545953 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-28b82" event={"ID":"936b34c4-5842-460b-bf36-a3ce510ab879","Type":"ContainerDied","Data":"dca8b59e888c1f23385c29934aff3feecb8519ab382a57d3e516934f31836467"} Jan 30 13:20:44 crc kubenswrapper[5039]: I0130 13:20:44.545830 5039 generic.go:334] "Generic (PLEG): container finished" 
podID="936b34c4-5842-460b-bf36-a3ce510ab879" containerID="dca8b59e888c1f23385c29934aff3feecb8519ab382a57d3e516934f31836467" exitCode=0 Jan 30 13:20:48 crc kubenswrapper[5039]: I0130 13:20:48.542576 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sqmh8"] Jan 30 13:20:48 crc kubenswrapper[5039]: I0130 13:20:48.546056 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sqmh8" Jan 30 13:20:48 crc kubenswrapper[5039]: I0130 13:20:48.549421 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b531b1bc-080d-45d1-a22b-77a257d5f32d-utilities\") pod \"redhat-marketplace-sqmh8\" (UID: \"b531b1bc-080d-45d1-a22b-77a257d5f32d\") " pod="openshift-marketplace/redhat-marketplace-sqmh8" Jan 30 13:20:48 crc kubenswrapper[5039]: I0130 13:20:48.549479 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b531b1bc-080d-45d1-a22b-77a257d5f32d-catalog-content\") pod \"redhat-marketplace-sqmh8\" (UID: \"b531b1bc-080d-45d1-a22b-77a257d5f32d\") " pod="openshift-marketplace/redhat-marketplace-sqmh8" Jan 30 13:20:48 crc kubenswrapper[5039]: I0130 13:20:48.549508 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsnvz\" (UniqueName: \"kubernetes.io/projected/b531b1bc-080d-45d1-a22b-77a257d5f32d-kube-api-access-xsnvz\") pod \"redhat-marketplace-sqmh8\" (UID: \"b531b1bc-080d-45d1-a22b-77a257d5f32d\") " pod="openshift-marketplace/redhat-marketplace-sqmh8" Jan 30 13:20:48 crc kubenswrapper[5039]: I0130 13:20:48.552904 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sqmh8"] Jan 30 13:20:48 crc kubenswrapper[5039]: I0130 13:20:48.588188 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-28b82" event={"ID":"936b34c4-5842-460b-bf36-a3ce510ab879","Type":"ContainerStarted","Data":"826a84a0ff95ee06d2b994b06ecbf9713ea9153856b3d3044ce7a1f4379636fd"} Jan 30 13:20:48 crc kubenswrapper[5039]: I0130 13:20:48.590340 5039 generic.go:334] "Generic (PLEG): container finished" podID="bb4062e1-3451-42b4-aaed-3dee60006639" containerID="f2ad95c89c743ce5ff5903a3373b9ab6565a78725ca7ec7dcb78df1900f5b3e3" exitCode=0 Jan 30 13:20:48 crc kubenswrapper[5039]: I0130 13:20:48.590503 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c" event={"ID":"bb4062e1-3451-42b4-aaed-3dee60006639","Type":"ContainerDied","Data":"f2ad95c89c743ce5ff5903a3373b9ab6565a78725ca7ec7dcb78df1900f5b3e3"} Jan 30 13:20:48 crc kubenswrapper[5039]: I0130 13:20:48.617561 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-28b82" podStartSLOduration=3.786915798 podStartE2EDuration="11.617545131s" podCreationTimestamp="2026-01-30 13:20:37 +0000 UTC" firstStartedPulling="2026-01-30 13:20:39.482938333 +0000 UTC m=+1004.143619600" lastFinishedPulling="2026-01-30 13:20:47.313567696 +0000 UTC m=+1011.974248933" observedRunningTime="2026-01-30 13:20:48.609606248 +0000 UTC m=+1013.270287495" watchObservedRunningTime="2026-01-30 13:20:48.617545131 +0000 UTC m=+1013.278226358" Jan 30 13:20:48 crc kubenswrapper[5039]: I0130 13:20:48.650913 5039 
Jan 30 13:20:48 crc kubenswrapper[5039]: I0130 13:20:48.650947 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsnvz\" (UniqueName: \"kubernetes.io/projected/b531b1bc-080d-45d1-a22b-77a257d5f32d-kube-api-access-xsnvz\") pod \"redhat-marketplace-sqmh8\" (UID: \"b531b1bc-080d-45d1-a22b-77a257d5f32d\") " pod="openshift-marketplace/redhat-marketplace-sqmh8"
Jan 30 13:20:48 crc kubenswrapper[5039]: I0130 13:20:48.651057 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b531b1bc-080d-45d1-a22b-77a257d5f32d-utilities\") pod \"redhat-marketplace-sqmh8\" (UID: \"b531b1bc-080d-45d1-a22b-77a257d5f32d\") " pod="openshift-marketplace/redhat-marketplace-sqmh8"
Jan 30 13:20:48 crc kubenswrapper[5039]: I0130 13:20:48.651406 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b531b1bc-080d-45d1-a22b-77a257d5f32d-utilities\") pod \"redhat-marketplace-sqmh8\" (UID: \"b531b1bc-080d-45d1-a22b-77a257d5f32d\") " pod="openshift-marketplace/redhat-marketplace-sqmh8"
Jan 30 13:20:48 crc kubenswrapper[5039]: I0130 13:20:48.651605 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b531b1bc-080d-45d1-a22b-77a257d5f32d-catalog-content\") pod \"redhat-marketplace-sqmh8\" (UID: \"b531b1bc-080d-45d1-a22b-77a257d5f32d\") " pod="openshift-marketplace/redhat-marketplace-sqmh8"
Jan 30 13:20:48 crc kubenswrapper[5039]: I0130 13:20:48.685473 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsnvz\" (UniqueName: \"kubernetes.io/projected/b531b1bc-080d-45d1-a22b-77a257d5f32d-kube-api-access-xsnvz\") pod \"redhat-marketplace-sqmh8\" (UID: \"b531b1bc-080d-45d1-a22b-77a257d5f32d\") " pod="openshift-marketplace/redhat-marketplace-sqmh8"
Jan 30 13:20:48 crc kubenswrapper[5039]: I0130 13:20:48.899782 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sqmh8"
Jan 30 13:20:49 crc kubenswrapper[5039]: I0130 13:20:49.138985 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sqmh8"]
Jan 30 13:20:49 crc kubenswrapper[5039]: I0130 13:20:49.596495 5039 generic.go:334] "Generic (PLEG): container finished" podID="b531b1bc-080d-45d1-a22b-77a257d5f32d" containerID="bc9b08c1bdcc0170c1633b52a20fcdd40cf41bfc61089e839868505878cca390" exitCode=0
Jan 30 13:20:49 crc kubenswrapper[5039]: I0130 13:20:49.596716 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sqmh8" event={"ID":"b531b1bc-080d-45d1-a22b-77a257d5f32d","Type":"ContainerDied","Data":"bc9b08c1bdcc0170c1633b52a20fcdd40cf41bfc61089e839868505878cca390"}
Jan 30 13:20:49 crc kubenswrapper[5039]: I0130 13:20:49.596777 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sqmh8" event={"ID":"b531b1bc-080d-45d1-a22b-77a257d5f32d","Type":"ContainerStarted","Data":"492833adee0d9137352f7d2954ba6f7de17a6cea50fb87b7f20b7264a0109012"}
Jan 30 13:20:49 crc kubenswrapper[5039]: I0130 13:20:49.599592 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c" event={"ID":"bb4062e1-3451-42b4-aaed-3dee60006639","Type":"ContainerStarted","Data":"eb63e75e6b673742114e62f733f167a0f8d33c1befa8fe33675e06c4700539e3"}
Jan 30 13:20:49 crc kubenswrapper[5039]: I0130 13:20:49.639203 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c" podStartSLOduration=4.871214727 podStartE2EDuration="7.639185995s" podCreationTimestamp="2026-01-30 13:20:42 +0000 UTC" firstStartedPulling="2026-01-30 13:20:44.543759379 +0000 UTC m=+1009.204440606" lastFinishedPulling="2026-01-30 13:20:47.311730617 +0000 UTC m=+1011.972411874" observedRunningTime="2026-01-30 13:20:49.63748944 +0000 UTC m=+1014.298170727" watchObservedRunningTime="2026-01-30 13:20:49.639185995 +0000 UTC m=+1014.299867222"
Jan 30 13:20:50 crc kubenswrapper[5039]: I0130 13:20:50.614361 5039 generic.go:334] "Generic (PLEG): container finished" podID="bb4062e1-3451-42b4-aaed-3dee60006639" containerID="eb63e75e6b673742114e62f733f167a0f8d33c1befa8fe33675e06c4700539e3" exitCode=0
Jan 30 13:20:50 crc kubenswrapper[5039]: I0130 13:20:50.614427 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c" event={"ID":"bb4062e1-3451-42b4-aaed-3dee60006639","Type":"ContainerDied","Data":"eb63e75e6b673742114e62f733f167a0f8d33c1befa8fe33675e06c4700539e3"}
Jan 30 13:20:51 crc kubenswrapper[5039]: I0130 13:20:51.860794 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c"
Jan 30 13:20:51 crc kubenswrapper[5039]: I0130 13:20:51.911733 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bb4062e1-3451-42b4-aaed-3dee60006639-util\") pod \"bb4062e1-3451-42b4-aaed-3dee60006639\" (UID: \"bb4062e1-3451-42b4-aaed-3dee60006639\") "
Jan 30 13:20:51 crc kubenswrapper[5039]: I0130 13:20:51.912154 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hs2z9\" (UniqueName: \"kubernetes.io/projected/bb4062e1-3451-42b4-aaed-3dee60006639-kube-api-access-hs2z9\") pod \"bb4062e1-3451-42b4-aaed-3dee60006639\" (UID: \"bb4062e1-3451-42b4-aaed-3dee60006639\") "
Jan 30 13:20:51 crc kubenswrapper[5039]: I0130 13:20:51.912258 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bb4062e1-3451-42b4-aaed-3dee60006639-bundle\") pod \"bb4062e1-3451-42b4-aaed-3dee60006639\" (UID: \"bb4062e1-3451-42b4-aaed-3dee60006639\") "
Jan 30 13:20:51 crc kubenswrapper[5039]: I0130 13:20:51.913275 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb4062e1-3451-42b4-aaed-3dee60006639-bundle" (OuterVolumeSpecName: "bundle") pod "bb4062e1-3451-42b4-aaed-3dee60006639" (UID: "bb4062e1-3451-42b4-aaed-3dee60006639"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 13:20:51 crc kubenswrapper[5039]: I0130 13:20:51.917065 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb4062e1-3451-42b4-aaed-3dee60006639-kube-api-access-hs2z9" (OuterVolumeSpecName: "kube-api-access-hs2z9") pod "bb4062e1-3451-42b4-aaed-3dee60006639" (UID: "bb4062e1-3451-42b4-aaed-3dee60006639"). InnerVolumeSpecName "kube-api-access-hs2z9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:20:51 crc kubenswrapper[5039]: I0130 13:20:51.922635 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb4062e1-3451-42b4-aaed-3dee60006639-util" (OuterVolumeSpecName: "util") pod "bb4062e1-3451-42b4-aaed-3dee60006639" (UID: "bb4062e1-3451-42b4-aaed-3dee60006639"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 13:20:52 crc kubenswrapper[5039]: I0130 13:20:52.014233 5039 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bb4062e1-3451-42b4-aaed-3dee60006639-util\") on node \"crc\" DevicePath \"\""
Jan 30 13:20:52 crc kubenswrapper[5039]: I0130 13:20:52.014322 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hs2z9\" (UniqueName: \"kubernetes.io/projected/bb4062e1-3451-42b4-aaed-3dee60006639-kube-api-access-hs2z9\") on node \"crc\" DevicePath \"\""
Jan 30 13:20:52 crc kubenswrapper[5039]: I0130 13:20:52.014345 5039 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bb4062e1-3451-42b4-aaed-3dee60006639-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 13:20:52 crc kubenswrapper[5039]: I0130 13:20:52.632698 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c"
Jan 30 13:20:52 crc kubenswrapper[5039]: I0130 13:20:52.632716 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c" event={"ID":"bb4062e1-3451-42b4-aaed-3dee60006639","Type":"ContainerDied","Data":"18561cd931576acd4bf927f1f755f2b2ab60297a5cd2d21a01521433588cddf2"}
Jan 30 13:20:52 crc kubenswrapper[5039]: I0130 13:20:52.633389 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18561cd931576acd4bf927f1f755f2b2ab60297a5cd2d21a01521433588cddf2"
Jan 30 13:20:52 crc kubenswrapper[5039]: I0130 13:20:52.635705 5039 generic.go:334] "Generic (PLEG): container finished" podID="b531b1bc-080d-45d1-a22b-77a257d5f32d" containerID="bddeddb74c56c16b592865fda2f093d7e9ab49938c296508e69c4a77e3d3c581" exitCode=0
Jan 30 13:20:52 crc kubenswrapper[5039]: I0130 13:20:52.635868 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sqmh8" event={"ID":"b531b1bc-080d-45d1-a22b-77a257d5f32d","Type":"ContainerDied","Data":"bddeddb74c56c16b592865fda2f093d7e9ab49938c296508e69c4a77e3d3c581"}
Jan 30 13:20:55 crc kubenswrapper[5039]: I0130 13:20:55.658101 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sqmh8" event={"ID":"b531b1bc-080d-45d1-a22b-77a257d5f32d","Type":"ContainerStarted","Data":"52ba5cdfe494c31e271ce16d337effb46639eae0466cfa1d4f5279475a80d73f"}
Jan 30 13:20:55 crc kubenswrapper[5039]: I0130 13:20:55.673082 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sqmh8" podStartSLOduration=2.616787956 podStartE2EDuration="7.673059582s" podCreationTimestamp="2026-01-30 13:20:48 +0000 UTC" firstStartedPulling="2026-01-30 13:20:49.597905587 +0000 UTC m=+1014.258586814" lastFinishedPulling="2026-01-30 13:20:54.654177213 +0000 UTC m=+1019.314858440" observedRunningTime="2026-01-30 13:20:55.672816465 +0000 UTC m=+1020.333497702" watchObservedRunningTime="2026-01-30 13:20:55.673059582 +0000 UTC m=+1020.333740849"
Jan 30 13:20:57 crc kubenswrapper[5039]: I0130 13:20:57.903478 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-28b82"
Jan 30 13:20:57 crc kubenswrapper[5039]: I0130 13:20:57.903566 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-28b82"
Jan 30 13:20:57 crc kubenswrapper[5039]: I0130 13:20:57.946805 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-28b82"
Jan 30 13:20:58 crc kubenswrapper[5039]: I0130 13:20:58.716846 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-28b82"
Jan 30 13:20:58 crc kubenswrapper[5039]: I0130 13:20:58.900751 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sqmh8"
Jan 30 13:20:58 crc kubenswrapper[5039]: I0130 13:20:58.901049 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sqmh8"
Jan 30 13:20:58 crc kubenswrapper[5039]: I0130 13:20:58.939454 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sqmh8"
Jan
30 13:20:59 crc kubenswrapper[5039]: I0130 13:20:59.251343 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-5bb4fb98bb-fglw8"] Jan 30 13:20:59 crc kubenswrapper[5039]: E0130 13:20:59.251825 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb4062e1-3451-42b4-aaed-3dee60006639" containerName="util" Jan 30 13:20:59 crc kubenswrapper[5039]: I0130 13:20:59.251894 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb4062e1-3451-42b4-aaed-3dee60006639" containerName="util" Jan 30 13:20:59 crc kubenswrapper[5039]: E0130 13:20:59.251966 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb4062e1-3451-42b4-aaed-3dee60006639" containerName="pull" Jan 30 13:20:59 crc kubenswrapper[5039]: I0130 13:20:59.252044 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb4062e1-3451-42b4-aaed-3dee60006639" containerName="pull" Jan 30 13:20:59 crc kubenswrapper[5039]: E0130 13:20:59.252103 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb4062e1-3451-42b4-aaed-3dee60006639" containerName="extract" Jan 30 13:20:59 crc kubenswrapper[5039]: I0130 13:20:59.252159 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb4062e1-3451-42b4-aaed-3dee60006639" containerName="extract" Jan 30 13:20:59 crc kubenswrapper[5039]: I0130 13:20:59.252324 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb4062e1-3451-42b4-aaed-3dee60006639" containerName="extract" Jan 30 13:20:59 crc kubenswrapper[5039]: I0130 13:20:59.252812 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5bb4fb98bb-fglw8" Jan 30 13:20:59 crc kubenswrapper[5039]: I0130 13:20:59.257361 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-4crh4" Jan 30 13:20:59 crc kubenswrapper[5039]: I0130 13:20:59.288763 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5bb4fb98bb-fglw8"] Jan 30 13:20:59 crc kubenswrapper[5039]: I0130 13:20:59.316894 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfsqh\" (UniqueName: \"kubernetes.io/projected/da15d311-1be3-49c8-9283-5f4815b0a42d-kube-api-access-kfsqh\") pod \"openstack-operator-controller-init-5bb4fb98bb-fglw8\" (UID: \"da15d311-1be3-49c8-9283-5f4815b0a42d\") " pod="openstack-operators/openstack-operator-controller-init-5bb4fb98bb-fglw8" Jan 30 13:20:59 crc kubenswrapper[5039]: I0130 13:20:59.417720 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfsqh\" (UniqueName: \"kubernetes.io/projected/da15d311-1be3-49c8-9283-5f4815b0a42d-kube-api-access-kfsqh\") pod \"openstack-operator-controller-init-5bb4fb98bb-fglw8\" (UID: \"da15d311-1be3-49c8-9283-5f4815b0a42d\") " pod="openstack-operators/openstack-operator-controller-init-5bb4fb98bb-fglw8" Jan 30 13:20:59 crc kubenswrapper[5039]: I0130 13:20:59.437760 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfsqh\" (UniqueName: \"kubernetes.io/projected/da15d311-1be3-49c8-9283-5f4815b0a42d-kube-api-access-kfsqh\") pod \"openstack-operator-controller-init-5bb4fb98bb-fglw8\" (UID: \"da15d311-1be3-49c8-9283-5f4815b0a42d\") " pod="openstack-operators/openstack-operator-controller-init-5bb4fb98bb-fglw8" Jan 30 13:20:59 crc 
kubenswrapper[5039]: I0130 13:20:59.571143 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5bb4fb98bb-fglw8" Jan 30 13:20:59 crc kubenswrapper[5039]: I0130 13:20:59.735927 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sqmh8" Jan 30 13:21:00 crc kubenswrapper[5039]: W0130 13:21:00.020109 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda15d311_1be3_49c8_9283_5f4815b0a42d.slice/crio-ca8c25b749a1d4d86816be53f7ce337a46f25b143722686d86e72903681733d4 WatchSource:0}: Error finding container ca8c25b749a1d4d86816be53f7ce337a46f25b143722686d86e72903681733d4: Status 404 returned error can't find the container with id ca8c25b749a1d4d86816be53f7ce337a46f25b143722686d86e72903681733d4 Jan 30 13:21:00 crc kubenswrapper[5039]: I0130 13:21:00.032368 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5bb4fb98bb-fglw8"] Jan 30 13:21:00 crc kubenswrapper[5039]: I0130 13:21:00.692592 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5bb4fb98bb-fglw8" event={"ID":"da15d311-1be3-49c8-9283-5f4815b0a42d","Type":"ContainerStarted","Data":"ca8c25b749a1d4d86816be53f7ce337a46f25b143722686d86e72903681733d4"} Jan 30 13:21:01 crc kubenswrapper[5039]: I0130 13:21:01.132340 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sqmh8"] Jan 30 13:21:01 crc kubenswrapper[5039]: I0130 13:21:01.531184 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-28b82"] Jan 30 13:21:01 crc kubenswrapper[5039]: I0130 13:21:01.531445 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-28b82" podUID="936b34c4-5842-460b-bf36-a3ce510ab879" containerName="registry-server" containerID="cri-o://826a84a0ff95ee06d2b994b06ecbf9713ea9153856b3d3044ce7a1f4379636fd" gracePeriod=2 Jan 30 13:21:02 crc kubenswrapper[5039]: I0130 13:21:02.734306 5039 generic.go:334] "Generic (PLEG): container finished" podID="936b34c4-5842-460b-bf36-a3ce510ab879" containerID="826a84a0ff95ee06d2b994b06ecbf9713ea9153856b3d3044ce7a1f4379636fd" exitCode=0 Jan 30 13:21:02 crc kubenswrapper[5039]: I0130 13:21:02.734709 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sqmh8" podUID="b531b1bc-080d-45d1-a22b-77a257d5f32d" containerName="registry-server" containerID="cri-o://52ba5cdfe494c31e271ce16d337effb46639eae0466cfa1d4f5279475a80d73f" gracePeriod=2 Jan 30 13:21:02 crc kubenswrapper[5039]: I0130 13:21:02.734929 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-28b82" event={"ID":"936b34c4-5842-460b-bf36-a3ce510ab879","Type":"ContainerDied","Data":"826a84a0ff95ee06d2b994b06ecbf9713ea9153856b3d3044ce7a1f4379636fd"} Jan 30 13:21:03 crc kubenswrapper[5039]: I0130 13:21:03.744504 5039 generic.go:334] "Generic (PLEG): container finished" podID="b531b1bc-080d-45d1-a22b-77a257d5f32d" containerID="52ba5cdfe494c31e271ce16d337effb46639eae0466cfa1d4f5279475a80d73f" exitCode=0 Jan 30 13:21:03 crc kubenswrapper[5039]: I0130 13:21:03.744575 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-sqmh8" event={"ID":"b531b1bc-080d-45d1-a22b-77a257d5f32d","Type":"ContainerDied","Data":"52ba5cdfe494c31e271ce16d337effb46639eae0466cfa1d4f5279475a80d73f"} Jan 30 13:21:04 crc kubenswrapper[5039]: I0130 13:21:04.077923 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-28b82" Jan 30 13:21:04 crc kubenswrapper[5039]: I0130 13:21:04.192367 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/936b34c4-5842-460b-bf36-a3ce510ab879-utilities\") pod \"936b34c4-5842-460b-bf36-a3ce510ab879\" (UID: \"936b34c4-5842-460b-bf36-a3ce510ab879\") " Jan 30 13:21:04 crc kubenswrapper[5039]: I0130 13:21:04.192423 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kswj\" (UniqueName: \"kubernetes.io/projected/936b34c4-5842-460b-bf36-a3ce510ab879-kube-api-access-4kswj\") pod \"936b34c4-5842-460b-bf36-a3ce510ab879\" (UID: \"936b34c4-5842-460b-bf36-a3ce510ab879\") " Jan 30 13:21:04 crc kubenswrapper[5039]: I0130 13:21:04.192477 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/936b34c4-5842-460b-bf36-a3ce510ab879-catalog-content\") pod \"936b34c4-5842-460b-bf36-a3ce510ab879\" (UID: \"936b34c4-5842-460b-bf36-a3ce510ab879\") " Jan 30 13:21:04 crc kubenswrapper[5039]: I0130 13:21:04.194186 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/936b34c4-5842-460b-bf36-a3ce510ab879-utilities" (OuterVolumeSpecName: "utilities") pod "936b34c4-5842-460b-bf36-a3ce510ab879" (UID: "936b34c4-5842-460b-bf36-a3ce510ab879"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:21:04 crc kubenswrapper[5039]: I0130 13:21:04.211761 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/936b34c4-5842-460b-bf36-a3ce510ab879-kube-api-access-4kswj" (OuterVolumeSpecName: "kube-api-access-4kswj") pod "936b34c4-5842-460b-bf36-a3ce510ab879" (UID: "936b34c4-5842-460b-bf36-a3ce510ab879"). InnerVolumeSpecName "kube-api-access-4kswj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:21:04 crc kubenswrapper[5039]: I0130 13:21:04.246251 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/936b34c4-5842-460b-bf36-a3ce510ab879-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "936b34c4-5842-460b-bf36-a3ce510ab879" (UID: "936b34c4-5842-460b-bf36-a3ce510ab879"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:21:04 crc kubenswrapper[5039]: I0130 13:21:04.293646 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/936b34c4-5842-460b-bf36-a3ce510ab879-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 13:21:04 crc kubenswrapper[5039]: I0130 13:21:04.293696 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4kswj\" (UniqueName: \"kubernetes.io/projected/936b34c4-5842-460b-bf36-a3ce510ab879-kube-api-access-4kswj\") on node \"crc\" DevicePath \"\"" Jan 30 13:21:04 crc kubenswrapper[5039]: I0130 13:21:04.293713 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/936b34c4-5842-460b-bf36-a3ce510ab879-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:21:04 crc kubenswrapper[5039]: I0130 13:21:04.659859 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sqmh8" Jan 30 13:21:04 crc kubenswrapper[5039]: I0130 13:21:04.698428 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsnvz\" (UniqueName: \"kubernetes.io/projected/b531b1bc-080d-45d1-a22b-77a257d5f32d-kube-api-access-xsnvz\") pod \"b531b1bc-080d-45d1-a22b-77a257d5f32d\" (UID: \"b531b1bc-080d-45d1-a22b-77a257d5f32d\") " Jan 30 13:21:04 crc kubenswrapper[5039]: I0130 13:21:04.698522 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b531b1bc-080d-45d1-a22b-77a257d5f32d-catalog-content\") pod \"b531b1bc-080d-45d1-a22b-77a257d5f32d\" (UID: \"b531b1bc-080d-45d1-a22b-77a257d5f32d\") " Jan 30 13:21:04 crc kubenswrapper[5039]: I0130 13:21:04.698593 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b531b1bc-080d-45d1-a22b-77a257d5f32d-utilities\") pod \"b531b1bc-080d-45d1-a22b-77a257d5f32d\" (UID: \"b531b1bc-080d-45d1-a22b-77a257d5f32d\") " Jan 30 13:21:04 crc kubenswrapper[5039]: I0130 13:21:04.699548 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b531b1bc-080d-45d1-a22b-77a257d5f32d-utilities" (OuterVolumeSpecName: "utilities") pod "b531b1bc-080d-45d1-a22b-77a257d5f32d" (UID: "b531b1bc-080d-45d1-a22b-77a257d5f32d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:21:04 crc kubenswrapper[5039]: I0130 13:21:04.701399 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b531b1bc-080d-45d1-a22b-77a257d5f32d-kube-api-access-xsnvz" (OuterVolumeSpecName: "kube-api-access-xsnvz") pod "b531b1bc-080d-45d1-a22b-77a257d5f32d" (UID: "b531b1bc-080d-45d1-a22b-77a257d5f32d"). InnerVolumeSpecName "kube-api-access-xsnvz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:21:04 crc kubenswrapper[5039]: I0130 13:21:04.735202 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b531b1bc-080d-45d1-a22b-77a257d5f32d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b531b1bc-080d-45d1-a22b-77a257d5f32d" (UID: "b531b1bc-080d-45d1-a22b-77a257d5f32d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:21:04 crc kubenswrapper[5039]: I0130 13:21:04.753492 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-28b82" event={"ID":"936b34c4-5842-460b-bf36-a3ce510ab879","Type":"ContainerDied","Data":"4cb98fe14a48c09e84a7de456f5afe1b6eff3162b8374486d55b596238fcd728"} Jan 30 13:21:04 crc kubenswrapper[5039]: I0130 13:21:04.753539 5039 scope.go:117] "RemoveContainer" containerID="826a84a0ff95ee06d2b994b06ecbf9713ea9153856b3d3044ce7a1f4379636fd" Jan 30 13:21:04 crc kubenswrapper[5039]: I0130 13:21:04.753534 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-28b82" Jan 30 13:21:04 crc kubenswrapper[5039]: I0130 13:21:04.766760 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sqmh8" event={"ID":"b531b1bc-080d-45d1-a22b-77a257d5f32d","Type":"ContainerDied","Data":"492833adee0d9137352f7d2954ba6f7de17a6cea50fb87b7f20b7264a0109012"} Jan 30 13:21:04 crc kubenswrapper[5039]: I0130 13:21:04.766896 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sqmh8" Jan 30 13:21:04 crc kubenswrapper[5039]: I0130 13:21:04.799084 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sqmh8"] Jan 30 13:21:04 crc kubenswrapper[5039]: I0130 13:21:04.799893 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xsnvz\" (UniqueName: \"kubernetes.io/projected/b531b1bc-080d-45d1-a22b-77a257d5f32d-kube-api-access-xsnvz\") on node \"crc\" DevicePath \"\"" Jan 30 13:21:04 crc kubenswrapper[5039]: I0130 13:21:04.799935 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b531b1bc-080d-45d1-a22b-77a257d5f32d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:21:04 crc kubenswrapper[5039]: I0130 13:21:04.799945 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b531b1bc-080d-45d1-a22b-77a257d5f32d-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 13:21:04 crc kubenswrapper[5039]: I0130 13:21:04.809863 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sqmh8"] Jan 30 13:21:04 crc kubenswrapper[5039]: I0130 13:21:04.814278 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-28b82"] Jan 30 13:21:04 crc kubenswrapper[5039]: I0130 13:21:04.819311 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-28b82"] Jan 30 13:21:04 crc kubenswrapper[5039]: I0130 13:21:04.836225 5039 scope.go:117] "RemoveContainer" containerID="dca8b59e888c1f23385c29934aff3feecb8519ab382a57d3e516934f31836467" Jan 30 13:21:04 crc kubenswrapper[5039]: I0130 13:21:04.849577 5039 scope.go:117] "RemoveContainer" containerID="6cbd0839f4740c365048a44a3ebac97283040dab34481099066e1ebc2bc9d165" Jan 30 13:21:04 crc kubenswrapper[5039]: I0130 13:21:04.885248 5039 scope.go:117] "RemoveContainer" containerID="52ba5cdfe494c31e271ce16d337effb46639eae0466cfa1d4f5279475a80d73f" Jan 30 13:21:04 crc kubenswrapper[5039]: I0130 13:21:04.900430 5039 scope.go:117] "RemoveContainer" containerID="bddeddb74c56c16b592865fda2f093d7e9ab49938c296508e69c4a77e3d3c581" Jan 30 13:21:04 crc kubenswrapper[5039]: I0130 13:21:04.916878 
5039 scope.go:117] "RemoveContainer" containerID="bc9b08c1bdcc0170c1633b52a20fcdd40cf41bfc61089e839868505878cca390" Jan 30 13:21:05 crc kubenswrapper[5039]: I0130 13:21:05.775609 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5bb4fb98bb-fglw8" event={"ID":"da15d311-1be3-49c8-9283-5f4815b0a42d","Type":"ContainerStarted","Data":"53ca004a8adcb3c811e5d38d0d4e950623424c2878bd35266db8cd6a1cbd5957"} Jan 30 13:21:05 crc kubenswrapper[5039]: I0130 13:21:05.776946 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-5bb4fb98bb-fglw8" Jan 30 13:21:05 crc kubenswrapper[5039]: I0130 13:21:05.821479 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-5bb4fb98bb-fglw8" podStartSLOduration=1.9360704979999999 podStartE2EDuration="6.821460774s" podCreationTimestamp="2026-01-30 13:20:59 +0000 UTC" firstStartedPulling="2026-01-30 13:21:00.023912684 +0000 UTC m=+1024.684593911" lastFinishedPulling="2026-01-30 13:21:04.90930295 +0000 UTC m=+1029.569984187" observedRunningTime="2026-01-30 13:21:05.816797128 +0000 UTC m=+1030.477478375" watchObservedRunningTime="2026-01-30 13:21:05.821460774 +0000 UTC m=+1030.482142021" Jan 30 13:21:06 crc kubenswrapper[5039]: I0130 13:21:06.102990 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="936b34c4-5842-460b-bf36-a3ce510ab879" path="/var/lib/kubelet/pods/936b34c4-5842-460b-bf36-a3ce510ab879/volumes" Jan 30 13:21:06 crc kubenswrapper[5039]: I0130 13:21:06.103854 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b531b1bc-080d-45d1-a22b-77a257d5f32d" path="/var/lib/kubelet/pods/b531b1bc-080d-45d1-a22b-77a257d5f32d/volumes" Jan 30 13:21:07 crc kubenswrapper[5039]: I0130 13:21:07.742487 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:21:07 crc kubenswrapper[5039]: I0130 13:21:07.742928 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:21:19 crc kubenswrapper[5039]: I0130 13:21:19.575102 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-5bb4fb98bb-fglw8" Jan 30 13:21:37 crc kubenswrapper[5039]: I0130 13:21:37.742677 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:21:37 crc kubenswrapper[5039]: I0130 13:21:37.743359 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 
13:21:37 crc kubenswrapper[5039]: I0130 13:21:37.743399 5039 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" Jan 30 13:21:37 crc kubenswrapper[5039]: I0130 13:21:37.743980 5039 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2ff7f77d739c9482a391687ff7929b8952cb2b486c1569c85a29b6ddbbdffffc"} pod="openshift-machine-config-operator/machine-config-daemon-t2btn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 13:21:37 crc kubenswrapper[5039]: I0130 13:21:37.744052 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" containerID="cri-o://2ff7f77d739c9482a391687ff7929b8952cb2b486c1569c85a29b6ddbbdffffc" gracePeriod=600 Jan 30 13:21:37 crc kubenswrapper[5039]: I0130 13:21:37.994377 5039 generic.go:334] "Generic (PLEG): container finished" podID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerID="2ff7f77d739c9482a391687ff7929b8952cb2b486c1569c85a29b6ddbbdffffc" exitCode=0 Jan 30 13:21:37 crc kubenswrapper[5039]: I0130 13:21:37.994415 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerDied","Data":"2ff7f77d739c9482a391687ff7929b8952cb2b486c1569c85a29b6ddbbdffffc"} Jan 30 13:21:37 crc kubenswrapper[5039]: I0130 13:21:37.994810 5039 scope.go:117] "RemoveContainer" containerID="dedbd81127092d3084480626ab10e6f0037d218190f1d21a46aaffac18d8903c" Jan 30 13:21:39 crc kubenswrapper[5039]: I0130 13:21:39.001934 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerStarted","Data":"119b1bd0e0bf998c735e7f9b382fd07971ec4cf601e1a066f9ce6f8c22b79521"} Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.183372 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-566c8844c5-7b7vn"] Jan 30 13:21:45 crc kubenswrapper[5039]: E0130 13:21:45.184195 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="936b34c4-5842-460b-bf36-a3ce510ab879" containerName="extract-content" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.184212 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="936b34c4-5842-460b-bf36-a3ce510ab879" containerName="extract-content" Jan 30 13:21:45 crc kubenswrapper[5039]: E0130 13:21:45.184230 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b531b1bc-080d-45d1-a22b-77a257d5f32d" containerName="extract-utilities" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.184236 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="b531b1bc-080d-45d1-a22b-77a257d5f32d" containerName="extract-utilities" Jan 30 13:21:45 crc kubenswrapper[5039]: E0130 13:21:45.184243 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="936b34c4-5842-460b-bf36-a3ce510ab879" containerName="extract-utilities" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.184249 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="936b34c4-5842-460b-bf36-a3ce510ab879" containerName="extract-utilities" Jan 30 13:21:45 crc 
kubenswrapper[5039]: E0130 13:21:45.184266 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b531b1bc-080d-45d1-a22b-77a257d5f32d" containerName="extract-content" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.184272 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="b531b1bc-080d-45d1-a22b-77a257d5f32d" containerName="extract-content" Jan 30 13:21:45 crc kubenswrapper[5039]: E0130 13:21:45.184282 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b531b1bc-080d-45d1-a22b-77a257d5f32d" containerName="registry-server" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.184287 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="b531b1bc-080d-45d1-a22b-77a257d5f32d" containerName="registry-server" Jan 30 13:21:45 crc kubenswrapper[5039]: E0130 13:21:45.184296 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="936b34c4-5842-460b-bf36-a3ce510ab879" containerName="registry-server" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.184302 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="936b34c4-5842-460b-bf36-a3ce510ab879" containerName="registry-server" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.184395 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="b531b1bc-080d-45d1-a22b-77a257d5f32d" containerName="registry-server" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.184415 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="936b34c4-5842-460b-bf36-a3ce510ab879" containerName="registry-server" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.184864 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-566c8844c5-7b7vn" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.186623 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-dh998" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.190114 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5f9bbdc844-hfv9l"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.191289 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5f9bbdc844-hfv9l" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.193894 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-725vq" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.198229 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-566c8844c5-7b7vn"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.200614 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5f9bbdc844-hfv9l"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.210001 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-784f59d4f4-mgfpl"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.210837 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-784f59d4f4-mgfpl" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.216666 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-t4whl" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.229269 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-8f4c5cb64-zc7fk"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.230103 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-zc7fk" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.235292 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-cftnt" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.247922 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-784f59d4f4-mgfpl"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.258353 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-8f4c5cb64-zc7fk"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.264576 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-54985f5875-tn8jh"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.265358 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-54985f5875-tn8jh" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.274320 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-54985f5875-tn8jh"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.279536 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-gb8b7"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.280937 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-gb8b7" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.281563 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-hdd26" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.285245 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-gb8b7"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.300362 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-b59wl" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.351643 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx2p8\" (UniqueName: \"kubernetes.io/projected/119bb853-2462-447e-bedc-54a2d5e2ba7f-kube-api-access-wx2p8\") pod \"glance-operator-controller-manager-784f59d4f4-mgfpl\" (UID: \"119bb853-2462-447e-bedc-54a2d5e2ba7f\") " pod="openstack-operators/glance-operator-controller-manager-784f59d4f4-mgfpl" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.351698 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7pqr\" (UniqueName: \"kubernetes.io/projected/dfdf7ab1-0b00-4ec6-96e3-e0e0b7abfee5-kube-api-access-t7pqr\") pod \"designate-operator-controller-manager-8f4c5cb64-zc7fk\" (UID: \"dfdf7ab1-0b00-4ec6-96e3-e0e0b7abfee5\") " pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-zc7fk" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.351720 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7swwq\" (UniqueName: \"kubernetes.io/projected/e0e4cf6d-c270-4781-b68c-be66be87eda0-kube-api-access-7swwq\") pod \"barbican-operator-controller-manager-566c8844c5-7b7vn\" (UID: \"e0e4cf6d-c270-4781-b68c-be66be87eda0\") " pod="openstack-operators/barbican-operator-controller-manager-566c8844c5-7b7vn" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.351738 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss4t2\" (UniqueName: \"kubernetes.io/projected/8ad0072a-71a8-4fd8-9f4d-39ffd8a63530-kube-api-access-ss4t2\") pod \"heat-operator-controller-manager-54985f5875-tn8jh\" (UID: \"8ad0072a-71a8-4fd8-9f4d-39ffd8a63530\") " pod="openstack-operators/heat-operator-controller-manager-54985f5875-tn8jh" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.351765 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lslfq\" (UniqueName: \"kubernetes.io/projected/46f5b983-ce89-42e5-8fc0-7145badf07df-kube-api-access-lslfq\") pod \"cinder-operator-controller-manager-5f9bbdc844-hfv9l\" (UID: \"46f5b983-ce89-42e5-8fc0-7145badf07df\") " pod="openstack-operators/cinder-operator-controller-manager-5f9bbdc844-hfv9l" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.366678 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-xg48r"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.367463 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-xg48r" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.371121 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-rjm9f" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.371135 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.381521 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-8vmk2"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.382318 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-8vmk2" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.386197 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-6c9d56f9bd-l7jpj"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.386891 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-6c9d56f9bd-l7jpj" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.387367 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-qzk7m" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.394475 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-9mggs" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.403084 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-xg48r"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.429889 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-8vmk2"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.445581 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-6c9d56f9bd-l7jpj"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.453712 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xc4q\" (UniqueName: \"kubernetes.io/projected/a7002b43-9266-4930-8baa-d60085738bbf-kube-api-access-9xc4q\") pod \"horizon-operator-controller-manager-5fb775575f-gb8b7\" (UID: \"a7002b43-9266-4930-8baa-d60085738bbf\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-gb8b7" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.453793 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wx2p8\" (UniqueName: \"kubernetes.io/projected/119bb853-2462-447e-bedc-54a2d5e2ba7f-kube-api-access-wx2p8\") pod \"glance-operator-controller-manager-784f59d4f4-mgfpl\" (UID: \"119bb853-2462-447e-bedc-54a2d5e2ba7f\") " pod="openstack-operators/glance-operator-controller-manager-784f59d4f4-mgfpl" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.453823 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k85r2\" (UniqueName: \"kubernetes.io/projected/a0e32430-f729-40dc-a6a9-307f01744381-kube-api-access-k85r2\") 
pod \"infra-operator-controller-manager-79955696d6-xg48r\" (UID: \"a0e32430-f729-40dc-a6a9-307f01744381\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-xg48r" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.453843 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a0e32430-f729-40dc-a6a9-307f01744381-cert\") pod \"infra-operator-controller-manager-79955696d6-xg48r\" (UID: \"a0e32430-f729-40dc-a6a9-307f01744381\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-xg48r" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.453861 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7pqr\" (UniqueName: \"kubernetes.io/projected/dfdf7ab1-0b00-4ec6-96e3-e0e0b7abfee5-kube-api-access-t7pqr\") pod \"designate-operator-controller-manager-8f4c5cb64-zc7fk\" (UID: \"dfdf7ab1-0b00-4ec6-96e3-e0e0b7abfee5\") " pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-zc7fk" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.453879 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7swwq\" (UniqueName: \"kubernetes.io/projected/e0e4cf6d-c270-4781-b68c-be66be87eda0-kube-api-access-7swwq\") pod \"barbican-operator-controller-manager-566c8844c5-7b7vn\" (UID: \"e0e4cf6d-c270-4781-b68c-be66be87eda0\") " pod="openstack-operators/barbican-operator-controller-manager-566c8844c5-7b7vn" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.453899 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ss4t2\" (UniqueName: \"kubernetes.io/projected/8ad0072a-71a8-4fd8-9f4d-39ffd8a63530-kube-api-access-ss4t2\") pod \"heat-operator-controller-manager-54985f5875-tn8jh\" (UID: \"8ad0072a-71a8-4fd8-9f4d-39ffd8a63530\") " pod="openstack-operators/heat-operator-controller-manager-54985f5875-tn8jh" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.453936 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lslfq\" (UniqueName: \"kubernetes.io/projected/46f5b983-ce89-42e5-8fc0-7145badf07df-kube-api-access-lslfq\") pod \"cinder-operator-controller-manager-5f9bbdc844-hfv9l\" (UID: \"46f5b983-ce89-42e5-8fc0-7145badf07df\") " pod="openstack-operators/cinder-operator-controller-manager-5f9bbdc844-hfv9l" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.457590 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-74954f9f78-2rz8j"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.458583 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-74954f9f78-2rz8j" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.467470 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-74954f9f78-2rz8j"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.473441 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-bkrs5" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.474631 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-ncf2p"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.475397 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-ncf2p" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.491390 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-nqm6z" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.502020 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wx2p8\" (UniqueName: \"kubernetes.io/projected/119bb853-2462-447e-bedc-54a2d5e2ba7f-kube-api-access-wx2p8\") pod \"glance-operator-controller-manager-784f59d4f4-mgfpl\" (UID: \"119bb853-2462-447e-bedc-54a2d5e2ba7f\") " pod="openstack-operators/glance-operator-controller-manager-784f59d4f4-mgfpl" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.502099 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-ncf2p"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.510663 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lslfq\" (UniqueName: \"kubernetes.io/projected/46f5b983-ce89-42e5-8fc0-7145badf07df-kube-api-access-lslfq\") pod \"cinder-operator-controller-manager-5f9bbdc844-hfv9l\" (UID: \"46f5b983-ce89-42e5-8fc0-7145badf07df\") " pod="openstack-operators/cinder-operator-controller-manager-5f9bbdc844-hfv9l" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.511641 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7pqr\" (UniqueName: \"kubernetes.io/projected/dfdf7ab1-0b00-4ec6-96e3-e0e0b7abfee5-kube-api-access-t7pqr\") pod \"designate-operator-controller-manager-8f4c5cb64-zc7fk\" (UID: \"dfdf7ab1-0b00-4ec6-96e3-e0e0b7abfee5\") " pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-zc7fk" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.520656 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7swwq\" (UniqueName: \"kubernetes.io/projected/e0e4cf6d-c270-4781-b68c-be66be87eda0-kube-api-access-7swwq\") pod \"barbican-operator-controller-manager-566c8844c5-7b7vn\" (UID: \"e0e4cf6d-c270-4781-b68c-be66be87eda0\") " pod="openstack-operators/barbican-operator-controller-manager-566c8844c5-7b7vn" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.520932 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5f9bbdc844-hfv9l" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.530633 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ss4t2\" (UniqueName: \"kubernetes.io/projected/8ad0072a-71a8-4fd8-9f4d-39ffd8a63530-kube-api-access-ss4t2\") pod \"heat-operator-controller-manager-54985f5875-tn8jh\" (UID: \"8ad0072a-71a8-4fd8-9f4d-39ffd8a63530\") " pod="openstack-operators/heat-operator-controller-manager-54985f5875-tn8jh" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.536281 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-784f59d4f4-mgfpl" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.543003 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6cfc4f6754-b4d54"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.543800 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-6cfc4f6754-b4d54" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.554841 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-rpzqr" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.558383 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dll4k\" (UniqueName: \"kubernetes.io/projected/393972fe-41f4-41b3-b5e9-c2183a2a506c-kube-api-access-dll4k\") pod \"keystone-operator-controller-manager-6c9d56f9bd-l7jpj\" (UID: \"393972fe-41f4-41b3-b5e9-c2183a2a506c\") " pod="openstack-operators/keystone-operator-controller-manager-6c9d56f9bd-l7jpj" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.559491 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xc4q\" (UniqueName: \"kubernetes.io/projected/a7002b43-9266-4930-8baa-d60085738bbf-kube-api-access-9xc4q\") pod \"horizon-operator-controller-manager-5fb775575f-gb8b7\" (UID: \"a7002b43-9266-4930-8baa-d60085738bbf\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-gb8b7" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.561183 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnkzr\" (UniqueName: \"kubernetes.io/projected/f88d8b4c-e64a-46de-8566-c17112f9379d-kube-api-access-dnkzr\") pod \"ironic-operator-controller-manager-6fd9bbb6f6-8vmk2\" (UID: \"f88d8b4c-e64a-46de-8566-c17112f9379d\") " pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-8vmk2" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.561618 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k85r2\" (UniqueName: \"kubernetes.io/projected/a0e32430-f729-40dc-a6a9-307f01744381-kube-api-access-k85r2\") pod \"infra-operator-controller-manager-79955696d6-xg48r\" (UID: \"a0e32430-f729-40dc-a6a9-307f01744381\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-xg48r" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.562059 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a0e32430-f729-40dc-a6a9-307f01744381-cert\") pod 
\"infra-operator-controller-manager-79955696d6-xg48r\" (UID: \"a0e32430-f729-40dc-a6a9-307f01744381\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-xg48r" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.562096 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmcxq\" (UniqueName: \"kubernetes.io/projected/be0f8b45-595e-434a-afd7-bc054252c589-kube-api-access-jmcxq\") pod \"manila-operator-controller-manager-74954f9f78-2rz8j\" (UID: \"be0f8b45-595e-434a-afd7-bc054252c589\") " pod="openstack-operators/manila-operator-controller-manager-74954f9f78-2rz8j" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.562152 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdd5n\" (UniqueName: \"kubernetes.io/projected/a84f3cb3-ab4e-4780-bfac-295411bfca5f-kube-api-access-hdd5n\") pod \"mariadb-operator-controller-manager-67bf948998-ncf2p\" (UID: \"a84f3cb3-ab4e-4780-bfac-295411bfca5f\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-ncf2p" Jan 30 13:21:45 crc kubenswrapper[5039]: E0130 13:21:45.562475 5039 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 13:21:45 crc kubenswrapper[5039]: E0130 13:21:45.562534 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0e32430-f729-40dc-a6a9-307f01744381-cert podName:a0e32430-f729-40dc-a6a9-307f01744381 nodeName:}" failed. No retries permitted until 2026-01-30 13:21:46.062514561 +0000 UTC m=+1070.723195788 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a0e32430-f729-40dc-a6a9-307f01744381-cert") pod "infra-operator-controller-manager-79955696d6-xg48r" (UID: "a0e32430-f729-40dc-a6a9-307f01744381") : secret "infra-operator-webhook-server-cert" not found Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.563731 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-694c6dcf95-n5fbd"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.567381 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-zc7fk" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.576731 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-n5fbd" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.585312 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-54985f5875-tn8jh" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.589080 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-9pm2s" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.599494 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6cfc4f6754-b4d54"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.623573 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k85r2\" (UniqueName: \"kubernetes.io/projected/a0e32430-f729-40dc-a6a9-307f01744381-kube-api-access-k85r2\") pod \"infra-operator-controller-manager-79955696d6-xg48r\" (UID: \"a0e32430-f729-40dc-a6a9-307f01744381\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-xg48r" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.624043 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xc4q\" (UniqueName: \"kubernetes.io/projected/a7002b43-9266-4930-8baa-d60085738bbf-kube-api-access-9xc4q\") pod \"horizon-operator-controller-manager-5fb775575f-gb8b7\" (UID: \"a7002b43-9266-4930-8baa-d60085738bbf\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-gb8b7" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.625764 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-gb8b7" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.662159 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-67f5956bc9-k6k9g"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.664037 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-67f5956bc9-k6k9g" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.665516 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmcxq\" (UniqueName: \"kubernetes.io/projected/be0f8b45-595e-434a-afd7-bc054252c589-kube-api-access-jmcxq\") pod \"manila-operator-controller-manager-74954f9f78-2rz8j\" (UID: \"be0f8b45-595e-434a-afd7-bc054252c589\") " pod="openstack-operators/manila-operator-controller-manager-74954f9f78-2rz8j" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.665542 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdd5n\" (UniqueName: \"kubernetes.io/projected/a84f3cb3-ab4e-4780-bfac-295411bfca5f-kube-api-access-hdd5n\") pod \"mariadb-operator-controller-manager-67bf948998-ncf2p\" (UID: \"a84f3cb3-ab4e-4780-bfac-295411bfca5f\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-ncf2p" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.665581 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dll4k\" (UniqueName: \"kubernetes.io/projected/393972fe-41f4-41b3-b5e9-c2183a2a506c-kube-api-access-dll4k\") pod \"keystone-operator-controller-manager-6c9d56f9bd-l7jpj\" (UID: \"393972fe-41f4-41b3-b5e9-c2183a2a506c\") " pod="openstack-operators/keystone-operator-controller-manager-6c9d56f9bd-l7jpj" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.665608 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twlx5\" (UniqueName: \"kubernetes.io/projected/aea15f55-ce7e-4253-9a45-a6a9657ebf04-kube-api-access-twlx5\") pod \"octavia-operator-controller-manager-694c6dcf95-n5fbd\" (UID: \"aea15f55-ce7e-4253-9a45-a6a9657ebf04\") " pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-n5fbd" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.665638 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf6wr\" (UniqueName: \"kubernetes.io/projected/5b341b5c-d0a9-4e32-bc5a-7e669840a358-kube-api-access-rf6wr\") pod \"neutron-operator-controller-manager-6cfc4f6754-b4d54\" (UID: \"5b341b5c-d0a9-4e32-bc5a-7e669840a358\") " pod="openstack-operators/neutron-operator-controller-manager-6cfc4f6754-b4d54" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.665671 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnkzr\" (UniqueName: \"kubernetes.io/projected/f88d8b4c-e64a-46de-8566-c17112f9379d-kube-api-access-dnkzr\") pod \"ironic-operator-controller-manager-6fd9bbb6f6-8vmk2\" (UID: \"f88d8b4c-e64a-46de-8566-c17112f9379d\") " pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-8vmk2" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.676772 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-6jbz6" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.696915 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-67f5956bc9-k6k9g"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.730252 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmcxq\" (UniqueName: 
\"kubernetes.io/projected/be0f8b45-595e-434a-afd7-bc054252c589-kube-api-access-jmcxq\") pod \"manila-operator-controller-manager-74954f9f78-2rz8j\" (UID: \"be0f8b45-595e-434a-afd7-bc054252c589\") " pod="openstack-operators/manila-operator-controller-manager-74954f9f78-2rz8j" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.737083 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-694c6dcf95-n5fbd"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.737674 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnkzr\" (UniqueName: \"kubernetes.io/projected/f88d8b4c-e64a-46de-8566-c17112f9379d-kube-api-access-dnkzr\") pod \"ironic-operator-controller-manager-6fd9bbb6f6-8vmk2\" (UID: \"f88d8b4c-e64a-46de-8566-c17112f9379d\") " pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-8vmk2" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.748248 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdd5n\" (UniqueName: \"kubernetes.io/projected/a84f3cb3-ab4e-4780-bfac-295411bfca5f-kube-api-access-hdd5n\") pod \"mariadb-operator-controller-manager-67bf948998-ncf2p\" (UID: \"a84f3cb3-ab4e-4780-bfac-295411bfca5f\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-ncf2p" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.748877 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dll4k\" (UniqueName: \"kubernetes.io/projected/393972fe-41f4-41b3-b5e9-c2183a2a506c-kube-api-access-dll4k\") pod \"keystone-operator-controller-manager-6c9d56f9bd-l7jpj\" (UID: \"393972fe-41f4-41b3-b5e9-c2183a2a506c\") " pod="openstack-operators/keystone-operator-controller-manager-6c9d56f9bd-l7jpj" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.759758 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-qf8zq"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.760821 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-qf8zq" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.767577 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-zsbmv" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.768433 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.769444 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.770604 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twlx5\" (UniqueName: \"kubernetes.io/projected/aea15f55-ce7e-4253-9a45-a6a9657ebf04-kube-api-access-twlx5\") pod \"octavia-operator-controller-manager-694c6dcf95-n5fbd\" (UID: \"aea15f55-ce7e-4253-9a45-a6a9657ebf04\") " pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-n5fbd" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.775369 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rf6wr\" (UniqueName: \"kubernetes.io/projected/5b341b5c-d0a9-4e32-bc5a-7e669840a358-kube-api-access-rf6wr\") pod \"neutron-operator-controller-manager-6cfc4f6754-b4d54\" (UID: \"5b341b5c-d0a9-4e32-bc5a-7e669840a358\") " pod="openstack-operators/neutron-operator-controller-manager-6cfc4f6754-b4d54" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.772084 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.775551 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjhj8\" (UniqueName: \"kubernetes.io/projected/d2b8a86d-d798-4591-8f13-70f20fbe944d-kube-api-access-hjhj8\") pod \"nova-operator-controller-manager-67f5956bc9-k6k9g\" (UID: \"d2b8a86d-d798-4591-8f13-70f20fbe944d\") " pod="openstack-operators/nova-operator-controller-manager-67f5956bc9-k6k9g" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.772516 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-phk2r" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.792818 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-74954f9f78-2rz8j" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.802132 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-qf8zq"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.805540 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-566c8844c5-7b7vn" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.818419 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rf6wr\" (UniqueName: \"kubernetes.io/projected/5b341b5c-d0a9-4e32-bc5a-7e669840a358-kube-api-access-rf6wr\") pod \"neutron-operator-controller-manager-6cfc4f6754-b4d54\" (UID: \"5b341b5c-d0a9-4e32-bc5a-7e669840a358\") " pod="openstack-operators/neutron-operator-controller-manager-6cfc4f6754-b4d54" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.820576 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twlx5\" (UniqueName: \"kubernetes.io/projected/aea15f55-ce7e-4253-9a45-a6a9657ebf04-kube-api-access-twlx5\") pod \"octavia-operator-controller-manager-694c6dcf95-n5fbd\" (UID: \"aea15f55-ce7e-4253-9a45-a6a9657ebf04\") " pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-n5fbd" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.845129 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-sg45v"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.850630 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-sg45v" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.856571 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-57h89" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.866742 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.884592 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs6x7\" (UniqueName: \"kubernetes.io/projected/4240d443-bebd-4831-aaf2-0548c4d30a60-kube-api-access-vs6x7\") pod \"ovn-operator-controller-manager-788c46999f-qf8zq\" (UID: \"4240d443-bebd-4831-aaf2-0548c4d30a60\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-qf8zq" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.884629 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjhj8\" (UniqueName: \"kubernetes.io/projected/d2b8a86d-d798-4591-8f13-70f20fbe944d-kube-api-access-hjhj8\") pod \"nova-operator-controller-manager-67f5956bc9-k6k9g\" (UID: \"d2b8a86d-d798-4591-8f13-70f20fbe944d\") " pod="openstack-operators/nova-operator-controller-manager-67f5956bc9-k6k9g" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.884702 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr6w6\" (UniqueName: \"kubernetes.io/projected/bb900788-5fb4-4e83-8eec-f99dba093c60-kube-api-access-pr6w6\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57\" (UID: \"bb900788-5fb4-4e83-8eec-f99dba093c60\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.884736 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bb900788-5fb4-4e83-8eec-f99dba093c60-cert\") pod 
\"openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57\" (UID: \"bb900788-5fb4-4e83-8eec-f99dba093c60\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.898483 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-sg45v"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.922079 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-7d4f9d9c9b-j5l2r"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.922941 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-7d4f9d9c9b-j5l2r" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.928347 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-dgp2m" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.930220 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-ncf2p" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.942750 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjhj8\" (UniqueName: \"kubernetes.io/projected/d2b8a86d-d798-4591-8f13-70f20fbe944d-kube-api-access-hjhj8\") pod \"nova-operator-controller-manager-67f5956bc9-k6k9g\" (UID: \"d2b8a86d-d798-4591-8f13-70f20fbe944d\") " pod="openstack-operators/nova-operator-controller-manager-67f5956bc9-k6k9g" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.960945 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-7d4f9d9c9b-j5l2r"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.973356 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cd99594-2gs8r"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.977193 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cd99594-2gs8r" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.979468 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-6cfc4f6754-b4d54" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.982997 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-ffkb6" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.987175 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9j8q\" (UniqueName: \"kubernetes.io/projected/7792d72c-9fec-4de1-aaff-90764148b8d1-kube-api-access-c9j8q\") pod \"placement-operator-controller-manager-5b964cf4cd-sg45v\" (UID: \"7792d72c-9fec-4de1-aaff-90764148b8d1\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-sg45v" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.987269 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vs6x7\" (UniqueName: \"kubernetes.io/projected/4240d443-bebd-4831-aaf2-0548c4d30a60-kube-api-access-vs6x7\") pod \"ovn-operator-controller-manager-788c46999f-qf8zq\" (UID: \"4240d443-bebd-4831-aaf2-0548c4d30a60\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-qf8zq" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.987331 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pr6w6\" (UniqueName: \"kubernetes.io/projected/bb900788-5fb4-4e83-8eec-f99dba093c60-kube-api-access-pr6w6\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57\" (UID: \"bb900788-5fb4-4e83-8eec-f99dba093c60\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.987363 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llvtn\" (UniqueName: \"kubernetes.io/projected/4af84b30-6340-4e2a-b4fc-79268b9cb491-kube-api-access-llvtn\") pod \"swift-operator-controller-manager-7d4f9d9c9b-j5l2r\" (UID: \"4af84b30-6340-4e2a-b4fc-79268b9cb491\") " pod="openstack-operators/swift-operator-controller-manager-7d4f9d9c9b-j5l2r" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.987390 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bb900788-5fb4-4e83-8eec-f99dba093c60-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57\" (UID: \"bb900788-5fb4-4e83-8eec-f99dba093c60\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57" Jan 30 13:21:45 crc kubenswrapper[5039]: E0130 13:21:45.987553 5039 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 13:21:45 crc kubenswrapper[5039]: E0130 13:21:45.987604 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb900788-5fb4-4e83-8eec-f99dba093c60-cert podName:bb900788-5fb4-4e83-8eec-f99dba093c60 nodeName:}" failed. No retries permitted until 2026-01-30 13:21:46.487587299 +0000 UTC m=+1071.148268526 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bb900788-5fb4-4e83-8eec-f99dba093c60-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57" (UID: "bb900788-5fb4-4e83-8eec-f99dba093c60") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.994379 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cd99594-2gs8r"] Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:45.997840 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-n5fbd" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.011861 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pr6w6\" (UniqueName: \"kubernetes.io/projected/bb900788-5fb4-4e83-8eec-f99dba093c60-kube-api-access-pr6w6\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57\" (UID: \"bb900788-5fb4-4e83-8eec-f99dba093c60\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.012825 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-zxtd4"] Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.013671 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-zxtd4" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.015930 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vs6x7\" (UniqueName: \"kubernetes.io/projected/4240d443-bebd-4831-aaf2-0548c4d30a60-kube-api-access-vs6x7\") pod \"ovn-operator-controller-manager-788c46999f-qf8zq\" (UID: \"4240d443-bebd-4831-aaf2-0548c4d30a60\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-qf8zq" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.018845 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-jvkkj" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.025409 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-zxtd4"] Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.028803 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-8vmk2" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.032845 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5bf648c946-vwwqt"] Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.033642 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5bf648c946-vwwqt" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.042053 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-67f5956bc9-k6k9g" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.048832 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5bf648c946-vwwqt"] Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.049115 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-jsfgk" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.049482 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-6c9d56f9bd-l7jpj" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.084575 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-557bcbc6d9-5qlfl"] Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.085666 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-557bcbc6d9-5qlfl" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.089087 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.089267 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.089323 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9j8q\" (UniqueName: \"kubernetes.io/projected/7792d72c-9fec-4de1-aaff-90764148b8d1-kube-api-access-c9j8q\") pod \"placement-operator-controller-manager-5b964cf4cd-sg45v\" (UID: \"7792d72c-9fec-4de1-aaff-90764148b8d1\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-sg45v" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.089345 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-xn55h" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.089407 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llvtn\" (UniqueName: \"kubernetes.io/projected/4af84b30-6340-4e2a-b4fc-79268b9cb491-kube-api-access-llvtn\") pod \"swift-operator-controller-manager-7d4f9d9c9b-j5l2r\" (UID: \"4af84b30-6340-4e2a-b4fc-79268b9cb491\") " pod="openstack-operators/swift-operator-controller-manager-7d4f9d9c9b-j5l2r" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.089480 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9xrw\" (UniqueName: \"kubernetes.io/projected/030095cc-213a-4228-a2d5-62e91816f44e-kube-api-access-x9xrw\") pod \"telemetry-operator-controller-manager-76cd99594-2gs8r\" (UID: \"030095cc-213a-4228-a2d5-62e91816f44e\") " pod="openstack-operators/telemetry-operator-controller-manager-76cd99594-2gs8r" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.089518 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-557bcbc6d9-5qlfl"] Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.094334 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-qf8zq" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.098472 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svbnx\" (UniqueName: \"kubernetes.io/projected/35170745-facc-414b-9c48-649af86aeeb6-kube-api-access-svbnx\") pod \"test-operator-controller-manager-56f8bfcd9f-zxtd4\" (UID: \"35170745-facc-414b-9c48-649af86aeeb6\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-zxtd4" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.098561 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a0e32430-f729-40dc-a6a9-307f01744381-cert\") pod \"infra-operator-controller-manager-79955696d6-xg48r\" (UID: \"a0e32430-f729-40dc-a6a9-307f01744381\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-xg48r" Jan 30 13:21:46 crc kubenswrapper[5039]: E0130 13:21:46.098807 5039 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 13:21:46 crc kubenswrapper[5039]: E0130 13:21:46.098861 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0e32430-f729-40dc-a6a9-307f01744381-cert podName:a0e32430-f729-40dc-a6a9-307f01744381 nodeName:}" failed. No retries permitted until 2026-01-30 13:21:47.09884301 +0000 UTC m=+1071.759524237 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a0e32430-f729-40dc-a6a9-307f01744381-cert") pod "infra-operator-controller-manager-79955696d6-xg48r" (UID: "a0e32430-f729-40dc-a6a9-307f01744381") : secret "infra-operator-webhook-server-cert" not found Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.123585 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llvtn\" (UniqueName: \"kubernetes.io/projected/4af84b30-6340-4e2a-b4fc-79268b9cb491-kube-api-access-llvtn\") pod \"swift-operator-controller-manager-7d4f9d9c9b-j5l2r\" (UID: \"4af84b30-6340-4e2a-b4fc-79268b9cb491\") " pod="openstack-operators/swift-operator-controller-manager-7d4f9d9c9b-j5l2r" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.140739 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9j8q\" (UniqueName: \"kubernetes.io/projected/7792d72c-9fec-4de1-aaff-90764148b8d1-kube-api-access-c9j8q\") pod \"placement-operator-controller-manager-5b964cf4cd-sg45v\" (UID: \"7792d72c-9fec-4de1-aaff-90764148b8d1\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-sg45v" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.180955 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-78q8w"] Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.181661 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-78q8w"] Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.181728 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-78q8w" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.184603 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-sg45v" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.202308 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-t842d" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.226272 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-webhook-certs\") pod \"openstack-operator-controller-manager-557bcbc6d9-5qlfl\" (UID: \"cc0a21f9-046e-450a-bed9-4de7483415f3\") " pod="openstack-operators/openstack-operator-controller-manager-557bcbc6d9-5qlfl" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.226471 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9xrw\" (UniqueName: \"kubernetes.io/projected/030095cc-213a-4228-a2d5-62e91816f44e-kube-api-access-x9xrw\") pod \"telemetry-operator-controller-manager-76cd99594-2gs8r\" (UID: \"030095cc-213a-4228-a2d5-62e91816f44e\") " pod="openstack-operators/telemetry-operator-controller-manager-76cd99594-2gs8r" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.230371 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svbnx\" (UniqueName: \"kubernetes.io/projected/35170745-facc-414b-9c48-649af86aeeb6-kube-api-access-svbnx\") pod \"test-operator-controller-manager-56f8bfcd9f-zxtd4\" (UID: \"35170745-facc-414b-9c48-649af86aeeb6\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-zxtd4" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.230432 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-metrics-certs\") pod \"openstack-operator-controller-manager-557bcbc6d9-5qlfl\" (UID: \"cc0a21f9-046e-450a-bed9-4de7483415f3\") " pod="openstack-operators/openstack-operator-controller-manager-557bcbc6d9-5qlfl" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.230569 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7fsv\" (UniqueName: \"kubernetes.io/projected/cc0a21f9-046e-450a-bed9-4de7483415f3-kube-api-access-l7fsv\") pod \"openstack-operator-controller-manager-557bcbc6d9-5qlfl\" (UID: \"cc0a21f9-046e-450a-bed9-4de7483415f3\") " pod="openstack-operators/openstack-operator-controller-manager-557bcbc6d9-5qlfl" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.230629 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzx86\" (UniqueName: \"kubernetes.io/projected/b74de1a1-6d53-416d-a626-3307e43fb1a9-kube-api-access-vzx86\") pod \"watcher-operator-controller-manager-5bf648c946-vwwqt\" (UID: \"b74de1a1-6d53-416d-a626-3307e43fb1a9\") " pod="openstack-operators/watcher-operator-controller-manager-5bf648c946-vwwqt" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.262079 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-7d4f9d9c9b-j5l2r" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.264606 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svbnx\" (UniqueName: \"kubernetes.io/projected/35170745-facc-414b-9c48-649af86aeeb6-kube-api-access-svbnx\") pod \"test-operator-controller-manager-56f8bfcd9f-zxtd4\" (UID: \"35170745-facc-414b-9c48-649af86aeeb6\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-zxtd4" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.266498 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9xrw\" (UniqueName: \"kubernetes.io/projected/030095cc-213a-4228-a2d5-62e91816f44e-kube-api-access-x9xrw\") pod \"telemetry-operator-controller-manager-76cd99594-2gs8r\" (UID: \"030095cc-213a-4228-a2d5-62e91816f44e\") " pod="openstack-operators/telemetry-operator-controller-manager-76cd99594-2gs8r" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.278964 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5f9bbdc844-hfv9l"] Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.300893 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cd99594-2gs8r" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.331676 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-metrics-certs\") pod \"openstack-operator-controller-manager-557bcbc6d9-5qlfl\" (UID: \"cc0a21f9-046e-450a-bed9-4de7483415f3\") " pod="openstack-operators/openstack-operator-controller-manager-557bcbc6d9-5qlfl" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.331777 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjqxf\" (UniqueName: \"kubernetes.io/projected/d523ce30-8e42-407b-bb30-2e8aedb76c0c-kube-api-access-hjqxf\") pod \"rabbitmq-cluster-operator-manager-668c99d594-78q8w\" (UID: \"d523ce30-8e42-407b-bb30-2e8aedb76c0c\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-78q8w" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.331811 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7fsv\" (UniqueName: \"kubernetes.io/projected/cc0a21f9-046e-450a-bed9-4de7483415f3-kube-api-access-l7fsv\") pod \"openstack-operator-controller-manager-557bcbc6d9-5qlfl\" (UID: \"cc0a21f9-046e-450a-bed9-4de7483415f3\") " pod="openstack-operators/openstack-operator-controller-manager-557bcbc6d9-5qlfl" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.331836 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzx86\" (UniqueName: \"kubernetes.io/projected/b74de1a1-6d53-416d-a626-3307e43fb1a9-kube-api-access-vzx86\") pod \"watcher-operator-controller-manager-5bf648c946-vwwqt\" (UID: \"b74de1a1-6d53-416d-a626-3307e43fb1a9\") " pod="openstack-operators/watcher-operator-controller-manager-5bf648c946-vwwqt" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.331889 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-webhook-certs\") pod 
\"openstack-operator-controller-manager-557bcbc6d9-5qlfl\" (UID: \"cc0a21f9-046e-450a-bed9-4de7483415f3\") " pod="openstack-operators/openstack-operator-controller-manager-557bcbc6d9-5qlfl" Jan 30 13:21:46 crc kubenswrapper[5039]: E0130 13:21:46.332111 5039 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 13:21:46 crc kubenswrapper[5039]: E0130 13:21:46.332168 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-webhook-certs podName:cc0a21f9-046e-450a-bed9-4de7483415f3 nodeName:}" failed. No retries permitted until 2026-01-30 13:21:46.832150476 +0000 UTC m=+1071.492831703 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-webhook-certs") pod "openstack-operator-controller-manager-557bcbc6d9-5qlfl" (UID: "cc0a21f9-046e-450a-bed9-4de7483415f3") : secret "webhook-server-cert" not found Jan 30 13:21:46 crc kubenswrapper[5039]: E0130 13:21:46.332619 5039 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 13:21:46 crc kubenswrapper[5039]: E0130 13:21:46.332712 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-metrics-certs podName:cc0a21f9-046e-450a-bed9-4de7483415f3 nodeName:}" failed. No retries permitted until 2026-01-30 13:21:46.83269119 +0000 UTC m=+1071.493372427 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-metrics-certs") pod "openstack-operator-controller-manager-557bcbc6d9-5qlfl" (UID: "cc0a21f9-046e-450a-bed9-4de7483415f3") : secret "metrics-server-cert" not found Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.358594 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzx86\" (UniqueName: \"kubernetes.io/projected/b74de1a1-6d53-416d-a626-3307e43fb1a9-kube-api-access-vzx86\") pod \"watcher-operator-controller-manager-5bf648c946-vwwqt\" (UID: \"b74de1a1-6d53-416d-a626-3307e43fb1a9\") " pod="openstack-operators/watcher-operator-controller-manager-5bf648c946-vwwqt" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.365798 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7fsv\" (UniqueName: \"kubernetes.io/projected/cc0a21f9-046e-450a-bed9-4de7483415f3-kube-api-access-l7fsv\") pod \"openstack-operator-controller-manager-557bcbc6d9-5qlfl\" (UID: \"cc0a21f9-046e-450a-bed9-4de7483415f3\") " pod="openstack-operators/openstack-operator-controller-manager-557bcbc6d9-5qlfl" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.381867 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-zxtd4" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.399973 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5bf648c946-vwwqt" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.436220 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjqxf\" (UniqueName: \"kubernetes.io/projected/d523ce30-8e42-407b-bb30-2e8aedb76c0c-kube-api-access-hjqxf\") pod \"rabbitmq-cluster-operator-manager-668c99d594-78q8w\" (UID: \"d523ce30-8e42-407b-bb30-2e8aedb76c0c\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-78q8w" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.457666 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjqxf\" (UniqueName: \"kubernetes.io/projected/d523ce30-8e42-407b-bb30-2e8aedb76c0c-kube-api-access-hjqxf\") pod \"rabbitmq-cluster-operator-manager-668c99d594-78q8w\" (UID: \"d523ce30-8e42-407b-bb30-2e8aedb76c0c\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-78q8w" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.529343 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-78q8w" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.541552 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bb900788-5fb4-4e83-8eec-f99dba093c60-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57\" (UID: \"bb900788-5fb4-4e83-8eec-f99dba093c60\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57" Jan 30 13:21:46 crc kubenswrapper[5039]: E0130 13:21:46.541728 5039 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 13:21:46 crc kubenswrapper[5039]: E0130 13:21:46.541778 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb900788-5fb4-4e83-8eec-f99dba093c60-cert podName:bb900788-5fb4-4e83-8eec-f99dba093c60 nodeName:}" failed. No retries permitted until 2026-01-30 13:21:47.541764418 +0000 UTC m=+1072.202445645 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bb900788-5fb4-4e83-8eec-f99dba093c60-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57" (UID: "bb900788-5fb4-4e83-8eec-f99dba093c60") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.732409 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-8f4c5cb64-zc7fk"] Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.754638 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-784f59d4f4-mgfpl"] Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.810452 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-6c9d56f9bd-l7jpj"] Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.819637 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-gb8b7"] Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.824352 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-566c8844c5-7b7vn"] Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.829249 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-54985f5875-tn8jh"] Jan 30 13:21:46 crc kubenswrapper[5039]: W0130 13:21:46.836450 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0e4cf6d_c270_4781_b68c_be66be87eda0.slice/crio-fb39dbf7f2a63c7c4b3dc3ec9689083779fd1765f0fd06a3b57c6490d8c5290a WatchSource:0}: Error finding container fb39dbf7f2a63c7c4b3dc3ec9689083779fd1765f0fd06a3b57c6490d8c5290a: Status 404 returned error can't find the container with id fb39dbf7f2a63c7c4b3dc3ec9689083779fd1765f0fd06a3b57c6490d8c5290a Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.838409 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-ncf2p"] Jan 30 13:21:46 crc kubenswrapper[5039]: W0130 13:21:46.843007 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ad0072a_71a8_4fd8_9f4d_39ffd8a63530.slice/crio-9b396f2cb0ec6d86cbbc17b428b4c477f61c1dc1d75f187fee615fad03f01632 WatchSource:0}: Error finding container 9b396f2cb0ec6d86cbbc17b428b4c477f61c1dc1d75f187fee615fad03f01632: Status 404 returned error can't find the container with id 9b396f2cb0ec6d86cbbc17b428b4c477f61c1dc1d75f187fee615fad03f01632 Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.843677 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-74954f9f78-2rz8j"] Jan 30 13:21:46 crc kubenswrapper[5039]: W0130 13:21:46.846515 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod393972fe_41f4_41b3_b5e9_c2183a2a506c.slice/crio-95affb6dfb1007964277675f7180c8d232b81ca26988aa91db3af1f23666a7dc WatchSource:0}: Error finding container 95affb6dfb1007964277675f7180c8d232b81ca26988aa91db3af1f23666a7dc: Status 404 returned error can't find the container with id 95affb6dfb1007964277675f7180c8d232b81ca26988aa91db3af1f23666a7dc Jan 30 13:21:46 crc kubenswrapper[5039]: 
I0130 13:21:46.846775 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-metrics-certs\") pod \"openstack-operator-controller-manager-557bcbc6d9-5qlfl\" (UID: \"cc0a21f9-046e-450a-bed9-4de7483415f3\") " pod="openstack-operators/openstack-operator-controller-manager-557bcbc6d9-5qlfl" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.846894 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-webhook-certs\") pod \"openstack-operator-controller-manager-557bcbc6d9-5qlfl\" (UID: \"cc0a21f9-046e-450a-bed9-4de7483415f3\") " pod="openstack-operators/openstack-operator-controller-manager-557bcbc6d9-5qlfl" Jan 30 13:21:46 crc kubenswrapper[5039]: E0130 13:21:46.847079 5039 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 13:21:46 crc kubenswrapper[5039]: E0130 13:21:46.847101 5039 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 13:21:46 crc kubenswrapper[5039]: E0130 13:21:46.847142 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-metrics-certs podName:cc0a21f9-046e-450a-bed9-4de7483415f3 nodeName:}" failed. No retries permitted until 2026-01-30 13:21:47.847122253 +0000 UTC m=+1072.507803480 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-metrics-certs") pod "openstack-operator-controller-manager-557bcbc6d9-5qlfl" (UID: "cc0a21f9-046e-450a-bed9-4de7483415f3") : secret "metrics-server-cert" not found Jan 30 13:21:46 crc kubenswrapper[5039]: E0130 13:21:46.847162 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-webhook-certs podName:cc0a21f9-046e-450a-bed9-4de7483415f3 nodeName:}" failed. No retries permitted until 2026-01-30 13:21:47.847153963 +0000 UTC m=+1072.507835290 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-webhook-certs") pod "openstack-operator-controller-manager-557bcbc6d9-5qlfl" (UID: "cc0a21f9-046e-450a-bed9-4de7483415f3") : secret "webhook-server-cert" not found Jan 30 13:21:46 crc kubenswrapper[5039]: W0130 13:21:46.849614 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda84f3cb3_ab4e_4780_bfac_295411bfca5f.slice/crio-0fb200ec2d17ac2a9ceb2726d34bc0f002dd64b5645fb3c3d4668cba50c19a57 WatchSource:0}: Error finding container 0fb200ec2d17ac2a9ceb2726d34bc0f002dd64b5645fb3c3d4668cba50c19a57: Status 404 returned error can't find the container with id 0fb200ec2d17ac2a9ceb2726d34bc0f002dd64b5645fb3c3d4668cba50c19a57 Jan 30 13:21:46 crc kubenswrapper[5039]: W0130 13:21:46.856537 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe0f8b45_595e_434a_afd7_bc054252c589.slice/crio-c67b9cf70734983cf9bf1884689d7e8b51e7aeb33d7592b54de99b49371e5559 WatchSource:0}: Error finding container c67b9cf70734983cf9bf1884689d7e8b51e7aeb33d7592b54de99b49371e5559: Status 404 returned error can't find the container with id c67b9cf70734983cf9bf1884689d7e8b51e7aeb33d7592b54de99b49371e5559 Jan 30 13:21:47 crc kubenswrapper[5039]: I0130 13:21:47.096045 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-784f59d4f4-mgfpl" event={"ID":"119bb853-2462-447e-bedc-54a2d5e2ba7f","Type":"ContainerStarted","Data":"1dac49d32e16d79ca05cc3bbc5011bb4a77a61a7d0a522eca0c8cf1f59ecbd60"} Jan 30 13:21:47 crc kubenswrapper[5039]: I0130 13:21:47.099780 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-ncf2p" event={"ID":"a84f3cb3-ab4e-4780-bfac-295411bfca5f","Type":"ContainerStarted","Data":"0fb200ec2d17ac2a9ceb2726d34bc0f002dd64b5645fb3c3d4668cba50c19a57"} Jan 30 13:21:47 crc kubenswrapper[5039]: I0130 13:21:47.100807 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-566c8844c5-7b7vn" event={"ID":"e0e4cf6d-c270-4781-b68c-be66be87eda0","Type":"ContainerStarted","Data":"fb39dbf7f2a63c7c4b3dc3ec9689083779fd1765f0fd06a3b57c6490d8c5290a"} Jan 30 13:21:47 crc kubenswrapper[5039]: I0130 13:21:47.102461 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-74954f9f78-2rz8j" event={"ID":"be0f8b45-595e-434a-afd7-bc054252c589","Type":"ContainerStarted","Data":"c67b9cf70734983cf9bf1884689d7e8b51e7aeb33d7592b54de99b49371e5559"} Jan 30 13:21:47 crc kubenswrapper[5039]: I0130 13:21:47.103861 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5f9bbdc844-hfv9l" event={"ID":"46f5b983-ce89-42e5-8fc0-7145badf07df","Type":"ContainerStarted","Data":"f454b57906080fa381bef0548e68dc128e69ba30d629d7b624822df3f5713aef"} Jan 30 13:21:47 crc kubenswrapper[5039]: I0130 13:21:47.105155 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-6c9d56f9bd-l7jpj" event={"ID":"393972fe-41f4-41b3-b5e9-c2183a2a506c","Type":"ContainerStarted","Data":"95affb6dfb1007964277675f7180c8d232b81ca26988aa91db3af1f23666a7dc"} Jan 30 13:21:47 crc kubenswrapper[5039]: I0130 13:21:47.106060 5039 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-54985f5875-tn8jh" event={"ID":"8ad0072a-71a8-4fd8-9f4d-39ffd8a63530","Type":"ContainerStarted","Data":"9b396f2cb0ec6d86cbbc17b428b4c477f61c1dc1d75f187fee615fad03f01632"} Jan 30 13:21:47 crc kubenswrapper[5039]: I0130 13:21:47.107361 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-zc7fk" event={"ID":"dfdf7ab1-0b00-4ec6-96e3-e0e0b7abfee5","Type":"ContainerStarted","Data":"450001e1153715f094b5059a633e3b8d625f497bbb542f90e08ad979c0bbd69e"} Jan 30 13:21:47 crc kubenswrapper[5039]: I0130 13:21:47.108979 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-gb8b7" event={"ID":"a7002b43-9266-4930-8baa-d60085738bbf","Type":"ContainerStarted","Data":"145b92f8eee61e4583d81ce3900c6f3ebcb81b4597bcde1dc528e1afd8b7553b"} Jan 30 13:21:47 crc kubenswrapper[5039]: I0130 13:21:47.151133 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a0e32430-f729-40dc-a6a9-307f01744381-cert\") pod \"infra-operator-controller-manager-79955696d6-xg48r\" (UID: \"a0e32430-f729-40dc-a6a9-307f01744381\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-xg48r" Jan 30 13:21:47 crc kubenswrapper[5039]: E0130 13:21:47.151347 5039 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 13:21:47 crc kubenswrapper[5039]: E0130 13:21:47.151437 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0e32430-f729-40dc-a6a9-307f01744381-cert podName:a0e32430-f729-40dc-a6a9-307f01744381 nodeName:}" failed. No retries permitted until 2026-01-30 13:21:49.151418329 +0000 UTC m=+1073.812099556 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a0e32430-f729-40dc-a6a9-307f01744381-cert") pod "infra-operator-controller-manager-79955696d6-xg48r" (UID: "a0e32430-f729-40dc-a6a9-307f01744381") : secret "infra-operator-webhook-server-cert" not found Jan 30 13:21:47 crc kubenswrapper[5039]: I0130 13:21:47.247059 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-67f5956bc9-k6k9g"] Jan 30 13:21:47 crc kubenswrapper[5039]: I0130 13:21:47.264846 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-sg45v"] Jan 30 13:21:47 crc kubenswrapper[5039]: I0130 13:21:47.276083 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-7d4f9d9c9b-j5l2r"] Jan 30 13:21:47 crc kubenswrapper[5039]: E0130 13:21:47.286719 5039 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/ironic-operator@sha256:74003fd2a9f947d617376a74b886a209ab9d37aea0989e4d955f95cd06d6f59b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dnkzr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-6fd9bbb6f6-8vmk2_openstack-operators(f88d8b4c-e64a-46de-8566-c17112f9379d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 13:21:47 crc kubenswrapper[5039]: E0130 13:21:47.287905 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-8vmk2" podUID="f88d8b4c-e64a-46de-8566-c17112f9379d" Jan 30 13:21:47 crc kubenswrapper[5039]: I0130 13:21:47.289223 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-8vmk2"] Jan 30 13:21:47 crc kubenswrapper[5039]: W0130 13:21:47.292239 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb74de1a1_6d53_416d_a626_3307e43fb1a9.slice/crio-e1a7a849cd4050e8ed27dbdc5fca21c09ef07557d447dee73ca39e1d1e73de52 WatchSource:0}: Error finding container e1a7a849cd4050e8ed27dbdc5fca21c09ef07557d447dee73ca39e1d1e73de52: Status 404 returned error can't find the container with id e1a7a849cd4050e8ed27dbdc5fca21c09ef07557d447dee73ca39e1d1e73de52 Jan 30 13:21:47 crc kubenswrapper[5039]: E0130 13:21:47.294060 5039 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/swift-operator@sha256:4078c752af437b651592f5964e58a3e9f59fb0771ec3aeab26fc98fa38f54d55,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-llvtn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-7d4f9d9c9b-j5l2r_openstack-operators(4af84b30-6340-4e2a-b4fc-79268b9cb491): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 13:21:47 crc kubenswrapper[5039]: E0130 13:21:47.294528 5039 
kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/watcher-operator@sha256:8049d4d17f301838dfbc3740629d57f9b29c08e779affbf96c4197dc4d1fe19b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vzx86,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5bf648c946-vwwqt_openstack-operators(b74de1a1-6d53-416d-a626-3307e43fb1a9): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 13:21:47 crc kubenswrapper[5039]: E0130 13:21:47.295141 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-7d4f9d9c9b-j5l2r" podUID="4af84b30-6340-4e2a-b4fc-79268b9cb491" Jan 30 13:21:47 crc kubenswrapper[5039]: E0130 13:21:47.295776 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-5bf648c946-vwwqt" podUID="b74de1a1-6d53-416d-a626-3307e43fb1a9" Jan 30 13:21:47 crc kubenswrapper[5039]: E0130 13:21:47.298784 5039 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/telemetry-operator@sha256:7316ef2da8e4d8df06b150058249eaed2aa4719491716a4422a8ee5d6a0c352f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x9xrw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-76cd99594-2gs8r_openstack-operators(030095cc-213a-4228-a2d5-62e91816f44e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 13:21:47 crc kubenswrapper[5039]: W0130 13:21:47.299413 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35170745_facc_414b_9c48_649af86aeeb6.slice/crio-8ed95103cddbcec41712e710887ae1a93f49dd4249b7632a979ae77cd24059d9 WatchSource:0}: Error finding container 8ed95103cddbcec41712e710887ae1a93f49dd4249b7632a979ae77cd24059d9: Status 404 returned error can't find the container with id 8ed95103cddbcec41712e710887ae1a93f49dd4249b7632a979ae77cd24059d9 Jan 30 13:21:47 crc kubenswrapper[5039]: E0130 13:21:47.300150 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-76cd99594-2gs8r" podUID="030095cc-213a-4228-a2d5-62e91816f44e" Jan 30 13:21:47 crc kubenswrapper[5039]: I0130 13:21:47.300361 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-694c6dcf95-n5fbd"] Jan 30 13:21:47 crc kubenswrapper[5039]: W0130 13:21:47.301139 5039 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4240d443_bebd_4831_aaf2_0548c4d30a60.slice/crio-f5e801468a45273673d0d7e25d78c320666419d24400a92d5ce00fb8f6b56c9d WatchSource:0}: Error finding container f5e801468a45273673d0d7e25d78c320666419d24400a92d5ce00fb8f6b56c9d: Status 404 returned error can't find the container with id f5e801468a45273673d0d7e25d78c320666419d24400a92d5ce00fb8f6b56c9d Jan 30 13:21:47 crc kubenswrapper[5039]: E0130 13:21:47.303241 5039 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-svbnx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-zxtd4_openstack-operators(35170745-facc-414b-9c48-649af86aeeb6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 13:21:47 crc kubenswrapper[5039]: E0130 13:21:47.304321 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-zxtd4" podUID="35170745-facc-414b-9c48-649af86aeeb6" Jan 30 13:21:47 crc kubenswrapper[5039]: E0130 13:21:47.305035 5039 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vs6x7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-788c46999f-qf8zq_openstack-operators(4240d443-bebd-4831-aaf2-0548c4d30a60): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 13:21:47 crc kubenswrapper[5039]: E0130 13:21:47.306921 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-qf8zq" podUID="4240d443-bebd-4831-aaf2-0548c4d30a60" Jan 30 13:21:47 crc kubenswrapper[5039]: I0130 13:21:47.312113 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6cfc4f6754-b4d54"] Jan 30 13:21:47 crc kubenswrapper[5039]: E0130 13:21:47.316884 5039 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hjqxf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-78q8w_openstack-operators(d523ce30-8e42-407b-bb30-2e8aedb76c0c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 13:21:47 crc kubenswrapper[5039]: E0130 13:21:47.317985 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-78q8w" podUID="d523ce30-8e42-407b-bb30-2e8aedb76c0c" Jan 30 13:21:47 crc kubenswrapper[5039]: I0130 13:21:47.320600 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-qf8zq"] Jan 30 13:21:47 crc kubenswrapper[5039]: I0130 13:21:47.327966 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-78q8w"] Jan 30 13:21:47 crc kubenswrapper[5039]: I0130 13:21:47.335840 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cd99594-2gs8r"] Jan 30 13:21:47 crc kubenswrapper[5039]: I0130 13:21:47.343839 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5bf648c946-vwwqt"] Jan 30 13:21:47 crc kubenswrapper[5039]: I0130 13:21:47.349770 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-zxtd4"] Jan 30 13:21:47 crc kubenswrapper[5039]: I0130 13:21:47.563006 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/bb900788-5fb4-4e83-8eec-f99dba093c60-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57\" (UID: \"bb900788-5fb4-4e83-8eec-f99dba093c60\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57" Jan 30 13:21:47 crc kubenswrapper[5039]: E0130 13:21:47.563265 5039 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 13:21:47 crc kubenswrapper[5039]: E0130 13:21:47.563552 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb900788-5fb4-4e83-8eec-f99dba093c60-cert podName:bb900788-5fb4-4e83-8eec-f99dba093c60 nodeName:}" failed. No retries permitted until 2026-01-30 13:21:49.563526386 +0000 UTC m=+1074.224207693 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bb900788-5fb4-4e83-8eec-f99dba093c60-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57" (UID: "bb900788-5fb4-4e83-8eec-f99dba093c60") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 13:21:47 crc kubenswrapper[5039]: I0130 13:21:47.867463 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-webhook-certs\") pod \"openstack-operator-controller-manager-557bcbc6d9-5qlfl\" (UID: \"cc0a21f9-046e-450a-bed9-4de7483415f3\") " pod="openstack-operators/openstack-operator-controller-manager-557bcbc6d9-5qlfl" Jan 30 13:21:47 crc kubenswrapper[5039]: I0130 13:21:47.867578 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-metrics-certs\") pod \"openstack-operator-controller-manager-557bcbc6d9-5qlfl\" (UID: \"cc0a21f9-046e-450a-bed9-4de7483415f3\") " pod="openstack-operators/openstack-operator-controller-manager-557bcbc6d9-5qlfl" Jan 30 13:21:47 crc kubenswrapper[5039]: E0130 13:21:47.867743 5039 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 13:21:47 crc kubenswrapper[5039]: E0130 13:21:47.867809 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-metrics-certs podName:cc0a21f9-046e-450a-bed9-4de7483415f3 nodeName:}" failed. No retries permitted until 2026-01-30 13:21:49.867790571 +0000 UTC m=+1074.528471788 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-metrics-certs") pod "openstack-operator-controller-manager-557bcbc6d9-5qlfl" (UID: "cc0a21f9-046e-450a-bed9-4de7483415f3") : secret "metrics-server-cert" not found Jan 30 13:21:47 crc kubenswrapper[5039]: E0130 13:21:47.867877 5039 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 13:21:47 crc kubenswrapper[5039]: E0130 13:21:47.867909 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-webhook-certs podName:cc0a21f9-046e-450a-bed9-4de7483415f3 nodeName:}" failed. No retries permitted until 2026-01-30 13:21:49.867900274 +0000 UTC m=+1074.528581501 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-webhook-certs") pod "openstack-operator-controller-manager-557bcbc6d9-5qlfl" (UID: "cc0a21f9-046e-450a-bed9-4de7483415f3") : secret "webhook-server-cert" not found Jan 30 13:21:48 crc kubenswrapper[5039]: I0130 13:21:48.135964 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cd99594-2gs8r" event={"ID":"030095cc-213a-4228-a2d5-62e91816f44e","Type":"ContainerStarted","Data":"7fb12fa4ec06883fabc0012bd5e15637c4fdbe58142f7928200276fb3192728c"} Jan 30 13:21:48 crc kubenswrapper[5039]: I0130 13:21:48.140390 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-sg45v" event={"ID":"7792d72c-9fec-4de1-aaff-90764148b8d1","Type":"ContainerStarted","Data":"f6da4863a759cbc758f048d43a42369f67c1a2ef5a3748260d0ee2a03a294d98"} Jan 30 13:21:48 crc kubenswrapper[5039]: I0130 13:21:48.142551 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5bf648c946-vwwqt" event={"ID":"b74de1a1-6d53-416d-a626-3307e43fb1a9","Type":"ContainerStarted","Data":"e1a7a849cd4050e8ed27dbdc5fca21c09ef07557d447dee73ca39e1d1e73de52"} Jan 30 13:21:48 crc kubenswrapper[5039]: I0130 13:21:48.157223 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-8vmk2" event={"ID":"f88d8b4c-e64a-46de-8566-c17112f9379d","Type":"ContainerStarted","Data":"eb911030bf71e47de05c6b0c36a3a28b676202ffae7ecf1138cf7012ae103646"} Jan 30 13:21:48 crc kubenswrapper[5039]: I0130 13:21:48.168120 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-67f5956bc9-k6k9g" event={"ID":"d2b8a86d-d798-4591-8f13-70f20fbe944d","Type":"ContainerStarted","Data":"c3b5b5a40364342153a6848fa4d8f9da020d8cb26e9e0f5d7644a435e14c369d"} Jan 30 13:21:48 crc kubenswrapper[5039]: I0130 13:21:48.173819 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6cfc4f6754-b4d54" event={"ID":"5b341b5c-d0a9-4e32-bc5a-7e669840a358","Type":"ContainerStarted","Data":"b304733d7a51745e1ec37d075e29bac93057f040700529de7f0b6e6b6cfa47d5"} Jan 30 13:21:48 crc kubenswrapper[5039]: E0130 13:21:48.176085 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/ironic-operator@sha256:74003fd2a9f947d617376a74b886a209ab9d37aea0989e4d955f95cd06d6f59b\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-8vmk2" podUID="f88d8b4c-e64a-46de-8566-c17112f9379d" Jan 30 13:21:48 crc kubenswrapper[5039]: E0130 13:21:48.176383 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/telemetry-operator@sha256:7316ef2da8e4d8df06b150058249eaed2aa4719491716a4422a8ee5d6a0c352f\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-76cd99594-2gs8r" podUID="030095cc-213a-4228-a2d5-62e91816f44e" Jan 30 13:21:48 crc kubenswrapper[5039]: E0130 13:21:48.176448 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/lmiccini/watcher-operator@sha256:8049d4d17f301838dfbc3740629d57f9b29c08e779affbf96c4197dc4d1fe19b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5bf648c946-vwwqt" podUID="b74de1a1-6d53-416d-a626-3307e43fb1a9" Jan 30 13:21:48 crc kubenswrapper[5039]: I0130 13:21:48.179359 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-n5fbd" event={"ID":"aea15f55-ce7e-4253-9a45-a6a9657ebf04","Type":"ContainerStarted","Data":"4579756a83a65d751f23fdbed3e453299538dc3e14131fc22ca1d999d621ae8d"} Jan 30 13:21:48 crc kubenswrapper[5039]: I0130 13:21:48.180968 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-78q8w" event={"ID":"d523ce30-8e42-407b-bb30-2e8aedb76c0c","Type":"ContainerStarted","Data":"e49818f417ad971b24d2d5cc29368aa64cba19198b3bcac920d5bd80ae15c3b9"} Jan 30 13:21:48 crc kubenswrapper[5039]: E0130 13:21:48.186221 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-78q8w" podUID="d523ce30-8e42-407b-bb30-2e8aedb76c0c" Jan 30 13:21:48 crc kubenswrapper[5039]: I0130 13:21:48.189389 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-7d4f9d9c9b-j5l2r" event={"ID":"4af84b30-6340-4e2a-b4fc-79268b9cb491","Type":"ContainerStarted","Data":"b4fe4c0510b4bd6cbfb5f30ddda523f8d705eee6839dd4b84c16912b2c630dcf"} Jan 30 13:21:48 crc kubenswrapper[5039]: E0130 13:21:48.190772 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/swift-operator@sha256:4078c752af437b651592f5964e58a3e9f59fb0771ec3aeab26fc98fa38f54d55\\\"\"" pod="openstack-operators/swift-operator-controller-manager-7d4f9d9c9b-j5l2r" podUID="4af84b30-6340-4e2a-b4fc-79268b9cb491" Jan 30 13:21:48 crc kubenswrapper[5039]: I0130 13:21:48.193199 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-zxtd4" event={"ID":"35170745-facc-414b-9c48-649af86aeeb6","Type":"ContainerStarted","Data":"8ed95103cddbcec41712e710887ae1a93f49dd4249b7632a979ae77cd24059d9"} Jan 30 13:21:48 crc kubenswrapper[5039]: E0130 13:21:48.194376 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-zxtd4" podUID="35170745-facc-414b-9c48-649af86aeeb6" Jan 30 13:21:48 crc kubenswrapper[5039]: I0130 13:21:48.194486 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-qf8zq" event={"ID":"4240d443-bebd-4831-aaf2-0548c4d30a60","Type":"ContainerStarted","Data":"f5e801468a45273673d0d7e25d78c320666419d24400a92d5ce00fb8f6b56c9d"} Jan 30 13:21:48 crc kubenswrapper[5039]: E0130 13:21:48.195589 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-qf8zq" podUID="4240d443-bebd-4831-aaf2-0548c4d30a60" Jan 30 13:21:49 crc kubenswrapper[5039]: I0130 13:21:49.187937 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a0e32430-f729-40dc-a6a9-307f01744381-cert\") pod \"infra-operator-controller-manager-79955696d6-xg48r\" (UID: \"a0e32430-f729-40dc-a6a9-307f01744381\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-xg48r" Jan 30 13:21:49 crc kubenswrapper[5039]: E0130 13:21:49.188144 5039 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 13:21:49 crc kubenswrapper[5039]: E0130 13:21:49.188217 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0e32430-f729-40dc-a6a9-307f01744381-cert podName:a0e32430-f729-40dc-a6a9-307f01744381 nodeName:}" failed. No retries permitted until 2026-01-30 13:21:53.188200595 +0000 UTC m=+1077.848881822 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a0e32430-f729-40dc-a6a9-307f01744381-cert") pod "infra-operator-controller-manager-79955696d6-xg48r" (UID: "a0e32430-f729-40dc-a6a9-307f01744381") : secret "infra-operator-webhook-server-cert" not found Jan 30 13:21:49 crc kubenswrapper[5039]: E0130 13:21:49.205236 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/watcher-operator@sha256:8049d4d17f301838dfbc3740629d57f9b29c08e779affbf96c4197dc4d1fe19b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5bf648c946-vwwqt" podUID="b74de1a1-6d53-416d-a626-3307e43fb1a9" Jan 30 13:21:49 crc kubenswrapper[5039]: E0130 13:21:49.205243 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-78q8w" podUID="d523ce30-8e42-407b-bb30-2e8aedb76c0c" Jan 30 13:21:49 crc kubenswrapper[5039]: E0130 13:21:49.205285 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/ironic-operator@sha256:74003fd2a9f947d617376a74b886a209ab9d37aea0989e4d955f95cd06d6f59b\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-8vmk2" podUID="f88d8b4c-e64a-46de-8566-c17112f9379d" Jan 30 13:21:49 crc kubenswrapper[5039]: E0130 13:21:49.205292 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-qf8zq" podUID="4240d443-bebd-4831-aaf2-0548c4d30a60" Jan 30 13:21:49 crc kubenswrapper[5039]: E0130 13:21:49.205306 5039 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/swift-operator@sha256:4078c752af437b651592f5964e58a3e9f59fb0771ec3aeab26fc98fa38f54d55\\\"\"" pod="openstack-operators/swift-operator-controller-manager-7d4f9d9c9b-j5l2r" podUID="4af84b30-6340-4e2a-b4fc-79268b9cb491" Jan 30 13:21:49 crc kubenswrapper[5039]: E0130 13:21:49.210779 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/telemetry-operator@sha256:7316ef2da8e4d8df06b150058249eaed2aa4719491716a4422a8ee5d6a0c352f\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-76cd99594-2gs8r" podUID="030095cc-213a-4228-a2d5-62e91816f44e" Jan 30 13:21:49 crc kubenswrapper[5039]: E0130 13:21:49.215668 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-zxtd4" podUID="35170745-facc-414b-9c48-649af86aeeb6" Jan 30 13:21:49 crc kubenswrapper[5039]: I0130 13:21:49.600743 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bb900788-5fb4-4e83-8eec-f99dba093c60-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57\" (UID: \"bb900788-5fb4-4e83-8eec-f99dba093c60\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57" Jan 30 13:21:49 crc kubenswrapper[5039]: E0130 13:21:49.601073 5039 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 13:21:49 crc kubenswrapper[5039]: E0130 13:21:49.601127 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb900788-5fb4-4e83-8eec-f99dba093c60-cert podName:bb900788-5fb4-4e83-8eec-f99dba093c60 nodeName:}" failed. No retries permitted until 2026-01-30 13:21:53.601110113 +0000 UTC m=+1078.261791350 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bb900788-5fb4-4e83-8eec-f99dba093c60-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57" (UID: "bb900788-5fb4-4e83-8eec-f99dba093c60") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 13:21:49 crc kubenswrapper[5039]: I0130 13:21:49.910705 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-webhook-certs\") pod \"openstack-operator-controller-manager-557bcbc6d9-5qlfl\" (UID: \"cc0a21f9-046e-450a-bed9-4de7483415f3\") " pod="openstack-operators/openstack-operator-controller-manager-557bcbc6d9-5qlfl" Jan 30 13:21:49 crc kubenswrapper[5039]: I0130 13:21:49.910808 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-metrics-certs\") pod \"openstack-operator-controller-manager-557bcbc6d9-5qlfl\" (UID: \"cc0a21f9-046e-450a-bed9-4de7483415f3\") " pod="openstack-operators/openstack-operator-controller-manager-557bcbc6d9-5qlfl" Jan 30 13:21:49 crc kubenswrapper[5039]: E0130 13:21:49.910986 5039 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 13:21:49 crc kubenswrapper[5039]: E0130 13:21:49.911069 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-metrics-certs podName:cc0a21f9-046e-450a-bed9-4de7483415f3 nodeName:}" failed. No retries permitted until 2026-01-30 13:21:53.911049228 +0000 UTC m=+1078.571730445 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-metrics-certs") pod "openstack-operator-controller-manager-557bcbc6d9-5qlfl" (UID: "cc0a21f9-046e-450a-bed9-4de7483415f3") : secret "metrics-server-cert" not found Jan 30 13:21:49 crc kubenswrapper[5039]: E0130 13:21:49.911155 5039 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 13:21:49 crc kubenswrapper[5039]: E0130 13:21:49.911255 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-webhook-certs podName:cc0a21f9-046e-450a-bed9-4de7483415f3 nodeName:}" failed. No retries permitted until 2026-01-30 13:21:53.911231413 +0000 UTC m=+1078.571912690 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-webhook-certs") pod "openstack-operator-controller-manager-557bcbc6d9-5qlfl" (UID: "cc0a21f9-046e-450a-bed9-4de7483415f3") : secret "webhook-server-cert" not found Jan 30 13:21:53 crc kubenswrapper[5039]: I0130 13:21:53.266984 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a0e32430-f729-40dc-a6a9-307f01744381-cert\") pod \"infra-operator-controller-manager-79955696d6-xg48r\" (UID: \"a0e32430-f729-40dc-a6a9-307f01744381\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-xg48r" Jan 30 13:21:53 crc kubenswrapper[5039]: E0130 13:21:53.267214 5039 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 13:21:53 crc kubenswrapper[5039]: E0130 13:21:53.267595 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0e32430-f729-40dc-a6a9-307f01744381-cert podName:a0e32430-f729-40dc-a6a9-307f01744381 nodeName:}" failed. No retries permitted until 2026-01-30 13:22:01.267572912 +0000 UTC m=+1085.928254149 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a0e32430-f729-40dc-a6a9-307f01744381-cert") pod "infra-operator-controller-manager-79955696d6-xg48r" (UID: "a0e32430-f729-40dc-a6a9-307f01744381") : secret "infra-operator-webhook-server-cert" not found Jan 30 13:21:53 crc kubenswrapper[5039]: I0130 13:21:53.673669 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bb900788-5fb4-4e83-8eec-f99dba093c60-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57\" (UID: \"bb900788-5fb4-4e83-8eec-f99dba093c60\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57" Jan 30 13:21:53 crc kubenswrapper[5039]: E0130 13:21:53.673876 5039 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 13:21:53 crc kubenswrapper[5039]: E0130 13:21:53.673962 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb900788-5fb4-4e83-8eec-f99dba093c60-cert podName:bb900788-5fb4-4e83-8eec-f99dba093c60 nodeName:}" failed. No retries permitted until 2026-01-30 13:22:01.673939567 +0000 UTC m=+1086.334620874 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bb900788-5fb4-4e83-8eec-f99dba093c60-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57" (UID: "bb900788-5fb4-4e83-8eec-f99dba093c60") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 13:21:53 crc kubenswrapper[5039]: I0130 13:21:53.977649 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-webhook-certs\") pod \"openstack-operator-controller-manager-557bcbc6d9-5qlfl\" (UID: \"cc0a21f9-046e-450a-bed9-4de7483415f3\") " pod="openstack-operators/openstack-operator-controller-manager-557bcbc6d9-5qlfl" Jan 30 13:21:53 crc kubenswrapper[5039]: I0130 13:21:53.977835 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-metrics-certs\") pod \"openstack-operator-controller-manager-557bcbc6d9-5qlfl\" (UID: \"cc0a21f9-046e-450a-bed9-4de7483415f3\") " pod="openstack-operators/openstack-operator-controller-manager-557bcbc6d9-5qlfl" Jan 30 13:21:53 crc kubenswrapper[5039]: E0130 13:21:53.978066 5039 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 13:21:53 crc kubenswrapper[5039]: E0130 13:21:53.978221 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-webhook-certs podName:cc0a21f9-046e-450a-bed9-4de7483415f3 nodeName:}" failed. No retries permitted until 2026-01-30 13:22:01.978184712 +0000 UTC m=+1086.638866029 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-webhook-certs") pod "openstack-operator-controller-manager-557bcbc6d9-5qlfl" (UID: "cc0a21f9-046e-450a-bed9-4de7483415f3") : secret "webhook-server-cert" not found Jan 30 13:21:53 crc kubenswrapper[5039]: E0130 13:21:53.978084 5039 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 13:21:53 crc kubenswrapper[5039]: E0130 13:21:53.978394 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-metrics-certs podName:cc0a21f9-046e-450a-bed9-4de7483415f3 nodeName:}" failed. No retries permitted until 2026-01-30 13:22:01.978352107 +0000 UTC m=+1086.639033424 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-metrics-certs") pod "openstack-operator-controller-manager-557bcbc6d9-5qlfl" (UID: "cc0a21f9-046e-450a-bed9-4de7483415f3") : secret "metrics-server-cert" not found Jan 30 13:21:58 crc kubenswrapper[5039]: E0130 13:21:58.897201 5039 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/octavia-operator@sha256:2633ea07b6c1859f0e7aa07e94f46473e5a3732e68cb0150012c2f7705f9320c" Jan 30 13:21:58 crc kubenswrapper[5039]: E0130 13:21:58.897754 5039 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/octavia-operator@sha256:2633ea07b6c1859f0e7aa07e94f46473e5a3732e68cb0150012c2f7705f9320c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-twlx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-694c6dcf95-n5fbd_openstack-operators(aea15f55-ce7e-4253-9a45-a6a9657ebf04): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 13:21:58 crc kubenswrapper[5039]: E0130 13:21:58.899033 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-n5fbd" 
podUID="aea15f55-ce7e-4253-9a45-a6a9657ebf04" Jan 30 13:21:59 crc kubenswrapper[5039]: I0130 13:21:59.290155 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5f9bbdc844-hfv9l" event={"ID":"46f5b983-ce89-42e5-8fc0-7145badf07df","Type":"ContainerStarted","Data":"1fd7027609f8be83771c7836abef86e282c26d2ca1fd3a6590de1077bf2cf917"} Jan 30 13:21:59 crc kubenswrapper[5039]: I0130 13:21:59.290446 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5f9bbdc844-hfv9l" Jan 30 13:21:59 crc kubenswrapper[5039]: I0130 13:21:59.292827 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-gb8b7" event={"ID":"a7002b43-9266-4930-8baa-d60085738bbf","Type":"ContainerStarted","Data":"066bc46eea5ab968519419790a727d3cabb1dce6bf70e562de2cb706d4f13c85"} Jan 30 13:21:59 crc kubenswrapper[5039]: E0130 13:21:59.294446 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/octavia-operator@sha256:2633ea07b6c1859f0e7aa07e94f46473e5a3732e68cb0150012c2f7705f9320c\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-n5fbd" podUID="aea15f55-ce7e-4253-9a45-a6a9657ebf04" Jan 30 13:21:59 crc kubenswrapper[5039]: I0130 13:21:59.317260 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5f9bbdc844-hfv9l" podStartSLOduration=1.654003924 podStartE2EDuration="14.317239363s" podCreationTimestamp="2026-01-30 13:21:45 +0000 UTC" firstStartedPulling="2026-01-30 13:21:46.35772404 +0000 UTC m=+1071.018405257" lastFinishedPulling="2026-01-30 13:21:59.020959429 +0000 UTC m=+1083.681640696" observedRunningTime="2026-01-30 13:21:59.305656468 +0000 UTC m=+1083.966337685" watchObservedRunningTime="2026-01-30 13:21:59.317239363 +0000 UTC m=+1083.977920600" Jan 30 13:21:59 crc kubenswrapper[5039]: I0130 13:21:59.328556 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-gb8b7" podStartSLOduration=2.134681738 podStartE2EDuration="14.328533941s" podCreationTimestamp="2026-01-30 13:21:45 +0000 UTC" firstStartedPulling="2026-01-30 13:21:46.831376698 +0000 UTC m=+1071.492057935" lastFinishedPulling="2026-01-30 13:21:59.025228901 +0000 UTC m=+1083.685910138" observedRunningTime="2026-01-30 13:21:59.324155685 +0000 UTC m=+1083.984836942" watchObservedRunningTime="2026-01-30 13:21:59.328533941 +0000 UTC m=+1083.989215168" Jan 30 13:22:00 crc kubenswrapper[5039]: I0130 13:22:00.303034 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-6c9d56f9bd-l7jpj" event={"ID":"393972fe-41f4-41b3-b5e9-c2183a2a506c","Type":"ContainerStarted","Data":"24d5ee1a8e3020e56a8f78556e3794750b49d67c4518ff0fec94a34b089bce9b"} Jan 30 13:22:00 crc kubenswrapper[5039]: I0130 13:22:00.303722 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-6c9d56f9bd-l7jpj" Jan 30 13:22:00 crc kubenswrapper[5039]: I0130 13:22:00.304886 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-54985f5875-tn8jh" 
event={"ID":"8ad0072a-71a8-4fd8-9f4d-39ffd8a63530","Type":"ContainerStarted","Data":"957b671cc257adbad409710805273aab390f5bdea16e4c6afca707b923b42801"} Jan 30 13:22:00 crc kubenswrapper[5039]: I0130 13:22:00.305446 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-54985f5875-tn8jh" Jan 30 13:22:00 crc kubenswrapper[5039]: I0130 13:22:00.307056 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-zc7fk" event={"ID":"dfdf7ab1-0b00-4ec6-96e3-e0e0b7abfee5","Type":"ContainerStarted","Data":"358b4c7c989526d600f0b3216d2c777ef1daa615e38ee4db8208da645d41d7c6"} Jan 30 13:22:00 crc kubenswrapper[5039]: I0130 13:22:00.307560 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-zc7fk" Jan 30 13:22:00 crc kubenswrapper[5039]: I0130 13:22:00.309197 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-566c8844c5-7b7vn" event={"ID":"e0e4cf6d-c270-4781-b68c-be66be87eda0","Type":"ContainerStarted","Data":"dbcd676d596a2c8cdf8d85b65c8fa26c52b6fefb500e6f240963e206baa61d18"} Jan 30 13:22:00 crc kubenswrapper[5039]: I0130 13:22:00.309682 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-566c8844c5-7b7vn" Jan 30 13:22:00 crc kubenswrapper[5039]: I0130 13:22:00.312602 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-74954f9f78-2rz8j" event={"ID":"be0f8b45-595e-434a-afd7-bc054252c589","Type":"ContainerStarted","Data":"8c97f7aec5cf0a8d56e18ff2990110105666c6dbb3cc4d9ba593ebacbef379ec"} Jan 30 13:22:00 crc kubenswrapper[5039]: I0130 13:22:00.314331 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-sg45v" event={"ID":"7792d72c-9fec-4de1-aaff-90764148b8d1","Type":"ContainerStarted","Data":"67de7301e18294f4045cf0316f52b5863428a29d35ed0e85c28a23578601948b"} Jan 30 13:22:00 crc kubenswrapper[5039]: I0130 13:22:00.314884 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-sg45v" Jan 30 13:22:00 crc kubenswrapper[5039]: I0130 13:22:00.316670 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-67f5956bc9-k6k9g" event={"ID":"d2b8a86d-d798-4591-8f13-70f20fbe944d","Type":"ContainerStarted","Data":"78d3a5fa671e9a9ce9c7000e728c25f3262448de4c80d201772eeca65d2b186e"} Jan 30 13:22:00 crc kubenswrapper[5039]: I0130 13:22:00.316829 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-67f5956bc9-k6k9g" Jan 30 13:22:00 crc kubenswrapper[5039]: I0130 13:22:00.321709 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6cfc4f6754-b4d54" event={"ID":"5b341b5c-d0a9-4e32-bc5a-7e669840a358","Type":"ContainerStarted","Data":"8e7dbda1a74e21f37c1d511753fd44349b667f4771810b81544670f4f08bae3e"} Jan 30 13:22:00 crc kubenswrapper[5039]: I0130 13:22:00.322025 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-6cfc4f6754-b4d54" Jan 30 13:22:00 crc kubenswrapper[5039]: 
I0130 13:22:00.323380 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-784f59d4f4-mgfpl" event={"ID":"119bb853-2462-447e-bedc-54a2d5e2ba7f","Type":"ContainerStarted","Data":"d805d4319e01bdbf983319f174ab3b615f5e03bdec005e2fa31d987fe74ff5be"} Jan 30 13:22:00 crc kubenswrapper[5039]: I0130 13:22:00.323419 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-784f59d4f4-mgfpl" Jan 30 13:22:00 crc kubenswrapper[5039]: I0130 13:22:00.325390 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-ncf2p" event={"ID":"a84f3cb3-ab4e-4780-bfac-295411bfca5f","Type":"ContainerStarted","Data":"e92946b918720549c9a7e35adf57c29a20b341c6a7a474b1f61a3b6f399a0d9a"} Jan 30 13:22:00 crc kubenswrapper[5039]: I0130 13:22:00.325418 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-gb8b7" Jan 30 13:22:00 crc kubenswrapper[5039]: I0130 13:22:00.325428 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-ncf2p" Jan 30 13:22:00 crc kubenswrapper[5039]: I0130 13:22:00.332666 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-6c9d56f9bd-l7jpj" podStartSLOduration=3.158227752 podStartE2EDuration="15.332650303s" podCreationTimestamp="2026-01-30 13:21:45 +0000 UTC" firstStartedPulling="2026-01-30 13:21:46.85080172 +0000 UTC m=+1071.511482947" lastFinishedPulling="2026-01-30 13:21:59.025224231 +0000 UTC m=+1083.685905498" observedRunningTime="2026-01-30 13:22:00.326170192 +0000 UTC m=+1084.986851419" watchObservedRunningTime="2026-01-30 13:22:00.332650303 +0000 UTC m=+1084.993331530" Jan 30 13:22:00 crc kubenswrapper[5039]: I0130 13:22:00.357458 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-6cfc4f6754-b4d54" podStartSLOduration=3.618636261 podStartE2EDuration="15.357436316s" podCreationTimestamp="2026-01-30 13:21:45 +0000 UTC" firstStartedPulling="2026-01-30 13:21:47.28392643 +0000 UTC m=+1071.944607657" lastFinishedPulling="2026-01-30 13:21:59.022726445 +0000 UTC m=+1083.683407712" observedRunningTime="2026-01-30 13:22:00.356726617 +0000 UTC m=+1085.017407864" watchObservedRunningTime="2026-01-30 13:22:00.357436316 +0000 UTC m=+1085.018117543" Jan 30 13:22:00 crc kubenswrapper[5039]: I0130 13:22:00.423798 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-zc7fk" podStartSLOduration=3.133281475 podStartE2EDuration="15.423776684s" podCreationTimestamp="2026-01-30 13:21:45 +0000 UTC" firstStartedPulling="2026-01-30 13:21:46.739049856 +0000 UTC m=+1071.399731083" lastFinishedPulling="2026-01-30 13:21:59.029545045 +0000 UTC m=+1083.690226292" observedRunningTime="2026-01-30 13:22:00.403625703 +0000 UTC m=+1085.064306950" watchObservedRunningTime="2026-01-30 13:22:00.423776684 +0000 UTC m=+1085.084457921" Jan 30 13:22:00 crc kubenswrapper[5039]: I0130 13:22:00.429376 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-566c8844c5-7b7vn" podStartSLOduration=3.243880939 podStartE2EDuration="15.429354261s" 
podCreationTimestamp="2026-01-30 13:21:45 +0000 UTC" firstStartedPulling="2026-01-30 13:21:46.838776543 +0000 UTC m=+1071.499457770" lastFinishedPulling="2026-01-30 13:21:59.024249825 +0000 UTC m=+1083.684931092" observedRunningTime="2026-01-30 13:22:00.420924619 +0000 UTC m=+1085.081605856" watchObservedRunningTime="2026-01-30 13:22:00.429354261 +0000 UTC m=+1085.090035488" Jan 30 13:22:00 crc kubenswrapper[5039]: I0130 13:22:00.449974 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-74954f9f78-2rz8j" podStartSLOduration=3.28803398 podStartE2EDuration="15.449949133s" podCreationTimestamp="2026-01-30 13:21:45 +0000 UTC" firstStartedPulling="2026-01-30 13:21:46.862261301 +0000 UTC m=+1071.522942528" lastFinishedPulling="2026-01-30 13:21:59.024176424 +0000 UTC m=+1083.684857681" observedRunningTime="2026-01-30 13:22:00.440537045 +0000 UTC m=+1085.101218272" watchObservedRunningTime="2026-01-30 13:22:00.449949133 +0000 UTC m=+1085.110630360" Jan 30 13:22:00 crc kubenswrapper[5039]: I0130 13:22:00.464732 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-ncf2p" podStartSLOduration=3.2940071189999998 podStartE2EDuration="15.464710242s" podCreationTimestamp="2026-01-30 13:21:45 +0000 UTC" firstStartedPulling="2026-01-30 13:21:46.855240747 +0000 UTC m=+1071.515921974" lastFinishedPulling="2026-01-30 13:21:59.02594386 +0000 UTC m=+1083.686625097" observedRunningTime="2026-01-30 13:22:00.459269799 +0000 UTC m=+1085.119951026" watchObservedRunningTime="2026-01-30 13:22:00.464710242 +0000 UTC m=+1085.125391479" Jan 30 13:22:00 crc kubenswrapper[5039]: I0130 13:22:00.482073 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-67f5956bc9-k6k9g" podStartSLOduration=3.726937344 podStartE2EDuration="15.482054399s" podCreationTimestamp="2026-01-30 13:21:45 +0000 UTC" firstStartedPulling="2026-01-30 13:21:47.270931938 +0000 UTC m=+1071.931613165" lastFinishedPulling="2026-01-30 13:21:59.026048953 +0000 UTC m=+1083.686730220" observedRunningTime="2026-01-30 13:22:00.480493528 +0000 UTC m=+1085.141174755" watchObservedRunningTime="2026-01-30 13:22:00.482054399 +0000 UTC m=+1085.142735646" Jan 30 13:22:00 crc kubenswrapper[5039]: I0130 13:22:00.504466 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-54985f5875-tn8jh" podStartSLOduration=3.329447302 podStartE2EDuration="15.504445619s" podCreationTimestamp="2026-01-30 13:21:45 +0000 UTC" firstStartedPulling="2026-01-30 13:21:46.851119178 +0000 UTC m=+1071.511800405" lastFinishedPulling="2026-01-30 13:21:59.026117455 +0000 UTC m=+1083.686798722" observedRunningTime="2026-01-30 13:22:00.503871704 +0000 UTC m=+1085.164552951" watchObservedRunningTime="2026-01-30 13:22:00.504445619 +0000 UTC m=+1085.165126866" Jan 30 13:22:00 crc kubenswrapper[5039]: I0130 13:22:00.526503 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-sg45v" podStartSLOduration=3.782718063 podStartE2EDuration="15.526468679s" podCreationTimestamp="2026-01-30 13:21:45 +0000 UTC" firstStartedPulling="2026-01-30 13:21:47.28507947 +0000 UTC m=+1071.945760697" lastFinishedPulling="2026-01-30 13:21:59.028830066 +0000 UTC m=+1083.689511313" observedRunningTime="2026-01-30 
13:22:00.524949859 +0000 UTC m=+1085.185631116" watchObservedRunningTime="2026-01-30 13:22:00.526468679 +0000 UTC m=+1085.187149906" Jan 30 13:22:00 crc kubenswrapper[5039]: I0130 13:22:00.557652 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-784f59d4f4-mgfpl" podStartSLOduration=3.273096388 podStartE2EDuration="15.55762973s" podCreationTimestamp="2026-01-30 13:21:45 +0000 UTC" firstStartedPulling="2026-01-30 13:21:46.740911105 +0000 UTC m=+1071.401592332" lastFinishedPulling="2026-01-30 13:21:59.025444407 +0000 UTC m=+1083.686125674" observedRunningTime="2026-01-30 13:22:00.555265498 +0000 UTC m=+1085.215946725" watchObservedRunningTime="2026-01-30 13:22:00.55762973 +0000 UTC m=+1085.218310967" Jan 30 13:22:01 crc kubenswrapper[5039]: I0130 13:22:01.282368 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a0e32430-f729-40dc-a6a9-307f01744381-cert\") pod \"infra-operator-controller-manager-79955696d6-xg48r\" (UID: \"a0e32430-f729-40dc-a6a9-307f01744381\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-xg48r" Jan 30 13:22:01 crc kubenswrapper[5039]: I0130 13:22:01.287573 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a0e32430-f729-40dc-a6a9-307f01744381-cert\") pod \"infra-operator-controller-manager-79955696d6-xg48r\" (UID: \"a0e32430-f729-40dc-a6a9-307f01744381\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-xg48r" Jan 30 13:22:01 crc kubenswrapper[5039]: I0130 13:22:01.301832 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-rjm9f" Jan 30 13:22:01 crc kubenswrapper[5039]: I0130 13:22:01.309623 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-xg48r" Jan 30 13:22:01 crc kubenswrapper[5039]: I0130 13:22:01.336836 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-74954f9f78-2rz8j" Jan 30 13:22:01 crc kubenswrapper[5039]: I0130 13:22:01.689069 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bb900788-5fb4-4e83-8eec-f99dba093c60-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57\" (UID: \"bb900788-5fb4-4e83-8eec-f99dba093c60\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57" Jan 30 13:22:01 crc kubenswrapper[5039]: E0130 13:22:01.689259 5039 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 13:22:01 crc kubenswrapper[5039]: E0130 13:22:01.689338 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb900788-5fb4-4e83-8eec-f99dba093c60-cert podName:bb900788-5fb4-4e83-8eec-f99dba093c60 nodeName:}" failed. No retries permitted until 2026-01-30 13:22:17.689314653 +0000 UTC m=+1102.349995880 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bb900788-5fb4-4e83-8eec-f99dba093c60-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57" (UID: "bb900788-5fb4-4e83-8eec-f99dba093c60") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 13:22:01 crc kubenswrapper[5039]: I0130 13:22:01.807758 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-xg48r"] Jan 30 13:22:01 crc kubenswrapper[5039]: I0130 13:22:01.993607 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-metrics-certs\") pod \"openstack-operator-controller-manager-557bcbc6d9-5qlfl\" (UID: \"cc0a21f9-046e-450a-bed9-4de7483415f3\") " pod="openstack-operators/openstack-operator-controller-manager-557bcbc6d9-5qlfl" Jan 30 13:22:01 crc kubenswrapper[5039]: I0130 13:22:01.993710 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-webhook-certs\") pod \"openstack-operator-controller-manager-557bcbc6d9-5qlfl\" (UID: \"cc0a21f9-046e-450a-bed9-4de7483415f3\") " pod="openstack-operators/openstack-operator-controller-manager-557bcbc6d9-5qlfl" Jan 30 13:22:01 crc kubenswrapper[5039]: E0130 13:22:01.993781 5039 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 13:22:01 crc kubenswrapper[5039]: E0130 13:22:01.993840 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-metrics-certs podName:cc0a21f9-046e-450a-bed9-4de7483415f3 nodeName:}" failed. No retries permitted until 2026-01-30 13:22:17.993822885 +0000 UTC m=+1102.654504112 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-metrics-certs") pod "openstack-operator-controller-manager-557bcbc6d9-5qlfl" (UID: "cc0a21f9-046e-450a-bed9-4de7483415f3") : secret "metrics-server-cert" not found Jan 30 13:22:01 crc kubenswrapper[5039]: E0130 13:22:01.993910 5039 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 13:22:01 crc kubenswrapper[5039]: E0130 13:22:01.994020 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-webhook-certs podName:cc0a21f9-046e-450a-bed9-4de7483415f3 nodeName:}" failed. No retries permitted until 2026-01-30 13:22:17.993971099 +0000 UTC m=+1102.654652396 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-webhook-certs") pod "openstack-operator-controller-manager-557bcbc6d9-5qlfl" (UID: "cc0a21f9-046e-450a-bed9-4de7483415f3") : secret "webhook-server-cert" not found Jan 30 13:22:03 crc kubenswrapper[5039]: I0130 13:22:03.354124 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-xg48r" event={"ID":"a0e32430-f729-40dc-a6a9-307f01744381","Type":"ContainerStarted","Data":"a84c25e85642a684fe221c3b43dcb426bda2fc7075d76ab735fc689788e06398"} Jan 30 13:22:03 crc kubenswrapper[5039]: I0130 13:22:03.355764 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5bf648c946-vwwqt" event={"ID":"b74de1a1-6d53-416d-a626-3307e43fb1a9","Type":"ContainerStarted","Data":"32781a488b5e20c1940df73d559b8deb82cb5a5e9c9dee56e98bd6dc1237bbbb"} Jan 30 13:22:03 crc kubenswrapper[5039]: I0130 13:22:03.355987 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5bf648c946-vwwqt" Jan 30 13:22:03 crc kubenswrapper[5039]: I0130 13:22:03.357661 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-8vmk2" event={"ID":"f88d8b4c-e64a-46de-8566-c17112f9379d","Type":"ContainerStarted","Data":"9e84ff9bfdf64701c33cac72b46632b81ad470105e921fca8962a2c6b41e5e2f"} Jan 30 13:22:03 crc kubenswrapper[5039]: I0130 13:22:03.357889 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-8vmk2" Jan 30 13:22:03 crc kubenswrapper[5039]: I0130 13:22:03.378076 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5bf648c946-vwwqt" podStartSLOduration=2.560511337 podStartE2EDuration="18.378057851s" podCreationTimestamp="2026-01-30 13:21:45 +0000 UTC" firstStartedPulling="2026-01-30 13:21:47.294439157 +0000 UTC m=+1071.955120384" lastFinishedPulling="2026-01-30 13:22:03.111985671 +0000 UTC m=+1087.772666898" observedRunningTime="2026-01-30 13:22:03.368834488 +0000 UTC m=+1088.029515735" watchObservedRunningTime="2026-01-30 13:22:03.378057851 +0000 UTC m=+1088.038739098" Jan 30 13:22:03 crc kubenswrapper[5039]: I0130 13:22:03.390121 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-8vmk2" podStartSLOduration=2.556752627 podStartE2EDuration="18.390104178s" podCreationTimestamp="2026-01-30 13:21:45 +0000 UTC" firstStartedPulling="2026-01-30 13:21:47.286560399 +0000 UTC m=+1071.947241626" lastFinishedPulling="2026-01-30 13:22:03.11991195 +0000 UTC m=+1087.780593177" observedRunningTime="2026-01-30 13:22:03.385128917 +0000 UTC m=+1088.045810164" watchObservedRunningTime="2026-01-30 13:22:03.390104178 +0000 UTC m=+1088.050785415" Jan 30 13:22:05 crc kubenswrapper[5039]: I0130 13:22:05.523487 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5f9bbdc844-hfv9l" Jan 30 13:22:05 crc kubenswrapper[5039]: I0130 13:22:05.547346 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-784f59d4f4-mgfpl" Jan 30 13:22:05 crc kubenswrapper[5039]: 
I0130 13:22:05.576615 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-zc7fk" Jan 30 13:22:05 crc kubenswrapper[5039]: I0130 13:22:05.589967 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-54985f5875-tn8jh" Jan 30 13:22:05 crc kubenswrapper[5039]: I0130 13:22:05.628500 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-gb8b7" Jan 30 13:22:05 crc kubenswrapper[5039]: I0130 13:22:05.795820 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-74954f9f78-2rz8j" Jan 30 13:22:05 crc kubenswrapper[5039]: I0130 13:22:05.808504 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-566c8844c5-7b7vn" Jan 30 13:22:05 crc kubenswrapper[5039]: I0130 13:22:05.934359 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-ncf2p" Jan 30 13:22:05 crc kubenswrapper[5039]: I0130 13:22:05.982784 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-6cfc4f6754-b4d54" Jan 30 13:22:06 crc kubenswrapper[5039]: I0130 13:22:06.044736 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-67f5956bc9-k6k9g" Jan 30 13:22:06 crc kubenswrapper[5039]: I0130 13:22:06.051907 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-6c9d56f9bd-l7jpj" Jan 30 13:22:06 crc kubenswrapper[5039]: I0130 13:22:06.188271 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-sg45v" Jan 30 13:22:16 crc kubenswrapper[5039]: I0130 13:22:16.032053 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-8vmk2" Jan 30 13:22:16 crc kubenswrapper[5039]: I0130 13:22:16.402213 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5bf648c946-vwwqt" Jan 30 13:22:16 crc kubenswrapper[5039]: I0130 13:22:16.449961 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-xg48r" event={"ID":"a0e32430-f729-40dc-a6a9-307f01744381","Type":"ContainerStarted","Data":"1d926a5e150aad4475833a63de09e3f327abda84f27c24ced7e9f5a24640d328"} Jan 30 13:22:16 crc kubenswrapper[5039]: I0130 13:22:16.450035 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-xg48r" Jan 30 13:22:16 crc kubenswrapper[5039]: I0130 13:22:16.451730 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-zxtd4" event={"ID":"35170745-facc-414b-9c48-649af86aeeb6","Type":"ContainerStarted","Data":"2d17d3fda7045c31e6292442823f5749b8df16054a373a77056e56856be92680"} Jan 30 13:22:16 crc kubenswrapper[5039]: I0130 13:22:16.451983 5039 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-zxtd4" Jan 30 13:22:16 crc kubenswrapper[5039]: I0130 13:22:16.453335 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-qf8zq" event={"ID":"4240d443-bebd-4831-aaf2-0548c4d30a60","Type":"ContainerStarted","Data":"fe13690ed761dc3281a866b74cd4ece9cc7fd7ab34d269a3eea898a7d12e67c6"} Jan 30 13:22:16 crc kubenswrapper[5039]: I0130 13:22:16.453602 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-qf8zq" Jan 30 13:22:16 crc kubenswrapper[5039]: I0130 13:22:16.454653 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cd99594-2gs8r" event={"ID":"030095cc-213a-4228-a2d5-62e91816f44e","Type":"ContainerStarted","Data":"a5790f1c767db6f0d7b98c5e178ec23431068e6a7a803a3c21fcd3528daa65fd"} Jan 30 13:22:16 crc kubenswrapper[5039]: I0130 13:22:16.455050 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-76cd99594-2gs8r" Jan 30 13:22:16 crc kubenswrapper[5039]: I0130 13:22:16.457557 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-7d4f9d9c9b-j5l2r" event={"ID":"4af84b30-6340-4e2a-b4fc-79268b9cb491","Type":"ContainerStarted","Data":"7062cd26e4185c94e47f388d0c92c180e6f90358cf12d5d2e60845c2074c643e"} Jan 30 13:22:16 crc kubenswrapper[5039]: I0130 13:22:16.458114 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-7d4f9d9c9b-j5l2r" Jan 30 13:22:16 crc kubenswrapper[5039]: I0130 13:22:16.465852 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-xg48r" podStartSLOduration=18.486475257 podStartE2EDuration="31.465836523s" podCreationTimestamp="2026-01-30 13:21:45 +0000 UTC" firstStartedPulling="2026-01-30 13:22:02.3827179 +0000 UTC m=+1087.043399127" lastFinishedPulling="2026-01-30 13:22:15.362079156 +0000 UTC m=+1100.022760393" observedRunningTime="2026-01-30 13:22:16.465602427 +0000 UTC m=+1101.126283664" watchObservedRunningTime="2026-01-30 13:22:16.465836523 +0000 UTC m=+1101.126517760" Jan 30 13:22:16 crc kubenswrapper[5039]: I0130 13:22:16.489575 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-76cd99594-2gs8r" podStartSLOduration=2.835955512 podStartE2EDuration="31.489558628s" podCreationTimestamp="2026-01-30 13:21:45 +0000 UTC" firstStartedPulling="2026-01-30 13:21:47.298669108 +0000 UTC m=+1071.959350335" lastFinishedPulling="2026-01-30 13:22:15.952272224 +0000 UTC m=+1100.612953451" observedRunningTime="2026-01-30 13:22:16.484389602 +0000 UTC m=+1101.145070839" watchObservedRunningTime="2026-01-30 13:22:16.489558628 +0000 UTC m=+1101.150239865" Jan 30 13:22:16 crc kubenswrapper[5039]: I0130 13:22:16.502279 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-zxtd4" podStartSLOduration=2.853430093 podStartE2EDuration="31.502256243s" podCreationTimestamp="2026-01-30 13:21:45 +0000 UTC" firstStartedPulling="2026-01-30 13:21:47.302793587 +0000 UTC m=+1071.963474814" lastFinishedPulling="2026-01-30 
13:22:15.951619737 +0000 UTC m=+1100.612300964" observedRunningTime="2026-01-30 13:22:16.496111501 +0000 UTC m=+1101.156792758" watchObservedRunningTime="2026-01-30 13:22:16.502256243 +0000 UTC m=+1101.162937480" Jan 30 13:22:16 crc kubenswrapper[5039]: I0130 13:22:16.518122 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-7d4f9d9c9b-j5l2r" podStartSLOduration=2.847831855 podStartE2EDuration="31.51809661s" podCreationTimestamp="2026-01-30 13:21:45 +0000 UTC" firstStartedPulling="2026-01-30 13:21:47.293915933 +0000 UTC m=+1071.954597160" lastFinishedPulling="2026-01-30 13:22:15.964180698 +0000 UTC m=+1100.624861915" observedRunningTime="2026-01-30 13:22:16.512997136 +0000 UTC m=+1101.173678373" watchObservedRunningTime="2026-01-30 13:22:16.51809661 +0000 UTC m=+1101.178777857" Jan 30 13:22:16 crc kubenswrapper[5039]: I0130 13:22:16.540973 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-qf8zq" podStartSLOduration=3.483643845 podStartE2EDuration="31.540952042s" podCreationTimestamp="2026-01-30 13:21:45 +0000 UTC" firstStartedPulling="2026-01-30 13:21:47.304772909 +0000 UTC m=+1071.965454136" lastFinishedPulling="2026-01-30 13:22:15.362081086 +0000 UTC m=+1100.022762333" observedRunningTime="2026-01-30 13:22:16.539021231 +0000 UTC m=+1101.199702458" watchObservedRunningTime="2026-01-30 13:22:16.540952042 +0000 UTC m=+1101.201633279" Jan 30 13:22:17 crc kubenswrapper[5039]: I0130 13:22:17.464998 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-78q8w" event={"ID":"d523ce30-8e42-407b-bb30-2e8aedb76c0c","Type":"ContainerStarted","Data":"59aeb85108dbd9ecbb7c2387736f363c105307a6b0e93670815287fb11619a9d"} Jan 30 13:22:17 crc kubenswrapper[5039]: I0130 13:22:17.467271 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-n5fbd" event={"ID":"aea15f55-ce7e-4253-9a45-a6a9657ebf04","Type":"ContainerStarted","Data":"9cd3aa05ba79e565b915be2e1d4dc5ab5e8a01a4a98d8edeca11513181f29a71"} Jan 30 13:22:17 crc kubenswrapper[5039]: I0130 13:22:17.467564 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-n5fbd" Jan 30 13:22:17 crc kubenswrapper[5039]: I0130 13:22:17.480221 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-78q8w" podStartSLOduration=2.257142977 podStartE2EDuration="31.480207955s" podCreationTimestamp="2026-01-30 13:21:46 +0000 UTC" firstStartedPulling="2026-01-30 13:21:47.316770405 +0000 UTC m=+1071.977451632" lastFinishedPulling="2026-01-30 13:22:16.539835393 +0000 UTC m=+1101.200516610" observedRunningTime="2026-01-30 13:22:17.478128081 +0000 UTC m=+1102.138809308" watchObservedRunningTime="2026-01-30 13:22:17.480207955 +0000 UTC m=+1102.140889182" Jan 30 13:22:17 crc kubenswrapper[5039]: I0130 13:22:17.767998 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bb900788-5fb4-4e83-8eec-f99dba093c60-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57\" (UID: \"bb900788-5fb4-4e83-8eec-f99dba093c60\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57" Jan 30 13:22:17 
crc kubenswrapper[5039]: I0130 13:22:17.779686 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bb900788-5fb4-4e83-8eec-f99dba093c60-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57\" (UID: \"bb900788-5fb4-4e83-8eec-f99dba093c60\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57" Jan 30 13:22:17 crc kubenswrapper[5039]: I0130 13:22:17.924356 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-phk2r" Jan 30 13:22:17 crc kubenswrapper[5039]: I0130 13:22:17.932150 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57" Jan 30 13:22:18 crc kubenswrapper[5039]: I0130 13:22:18.071448 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-metrics-certs\") pod \"openstack-operator-controller-manager-557bcbc6d9-5qlfl\" (UID: \"cc0a21f9-046e-450a-bed9-4de7483415f3\") " pod="openstack-operators/openstack-operator-controller-manager-557bcbc6d9-5qlfl" Jan 30 13:22:18 crc kubenswrapper[5039]: I0130 13:22:18.071549 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-webhook-certs\") pod \"openstack-operator-controller-manager-557bcbc6d9-5qlfl\" (UID: \"cc0a21f9-046e-450a-bed9-4de7483415f3\") " pod="openstack-operators/openstack-operator-controller-manager-557bcbc6d9-5qlfl" Jan 30 13:22:18 crc kubenswrapper[5039]: I0130 13:22:18.082340 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-metrics-certs\") pod \"openstack-operator-controller-manager-557bcbc6d9-5qlfl\" (UID: \"cc0a21f9-046e-450a-bed9-4de7483415f3\") " pod="openstack-operators/openstack-operator-controller-manager-557bcbc6d9-5qlfl" Jan 30 13:22:18 crc kubenswrapper[5039]: I0130 13:22:18.090251 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-webhook-certs\") pod \"openstack-operator-controller-manager-557bcbc6d9-5qlfl\" (UID: \"cc0a21f9-046e-450a-bed9-4de7483415f3\") " pod="openstack-operators/openstack-operator-controller-manager-557bcbc6d9-5qlfl" Jan 30 13:22:18 crc kubenswrapper[5039]: I0130 13:22:18.227762 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-xn55h" Jan 30 13:22:18 crc kubenswrapper[5039]: I0130 13:22:18.238581 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-557bcbc6d9-5qlfl" Jan 30 13:22:18 crc kubenswrapper[5039]: I0130 13:22:18.395946 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-n5fbd" podStartSLOduration=4.146869847 podStartE2EDuration="33.395929639s" podCreationTimestamp="2026-01-30 13:21:45 +0000 UTC" firstStartedPulling="2026-01-30 13:21:47.28392346 +0000 UTC m=+1071.944604687" lastFinishedPulling="2026-01-30 13:22:16.532983252 +0000 UTC m=+1101.193664479" observedRunningTime="2026-01-30 13:22:17.497289806 +0000 UTC m=+1102.157971033" watchObservedRunningTime="2026-01-30 13:22:18.395929639 +0000 UTC m=+1103.056610866" Jan 30 13:22:18 crc kubenswrapper[5039]: I0130 13:22:18.401894 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57"] Jan 30 13:22:18 crc kubenswrapper[5039]: I0130 13:22:18.474134 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57" event={"ID":"bb900788-5fb4-4e83-8eec-f99dba093c60","Type":"ContainerStarted","Data":"04a11801d133642ad4c2ba051996b5f84c2e2591d259ed7b8ae1fdb672bd15aa"} Jan 30 13:22:18 crc kubenswrapper[5039]: I0130 13:22:18.693997 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-557bcbc6d9-5qlfl"] Jan 30 13:22:18 crc kubenswrapper[5039]: W0130 13:22:18.709427 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc0a21f9_046e_450a_bed9_4de7483415f3.slice/crio-8342af854cea21246db1bc599fa0cb3fd7accba107715b6d25c92263ba176816 WatchSource:0}: Error finding container 8342af854cea21246db1bc599fa0cb3fd7accba107715b6d25c92263ba176816: Status 404 returned error can't find the container with id 8342af854cea21246db1bc599fa0cb3fd7accba107715b6d25c92263ba176816 Jan 30 13:22:19 crc kubenswrapper[5039]: I0130 13:22:19.482869 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-557bcbc6d9-5qlfl" event={"ID":"cc0a21f9-046e-450a-bed9-4de7483415f3","Type":"ContainerStarted","Data":"0597872c490e9106b0faf3358003f3b771e7f65f58f160c9d8dc9ac658706768"} Jan 30 13:22:19 crc kubenswrapper[5039]: I0130 13:22:19.483242 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-557bcbc6d9-5qlfl" event={"ID":"cc0a21f9-046e-450a-bed9-4de7483415f3","Type":"ContainerStarted","Data":"8342af854cea21246db1bc599fa0cb3fd7accba107715b6d25c92263ba176816"} Jan 30 13:22:19 crc kubenswrapper[5039]: I0130 13:22:19.483632 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-557bcbc6d9-5qlfl" Jan 30 13:22:19 crc kubenswrapper[5039]: I0130 13:22:19.517284 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-557bcbc6d9-5qlfl" podStartSLOduration=34.51726321 podStartE2EDuration="34.51726321s" podCreationTimestamp="2026-01-30 13:21:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:22:19.51119838 +0000 UTC m=+1104.171879637" watchObservedRunningTime="2026-01-30 
13:22:19.51726321 +0000 UTC m=+1104.177944437" Jan 30 13:22:20 crc kubenswrapper[5039]: I0130 13:22:20.490766 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57" event={"ID":"bb900788-5fb4-4e83-8eec-f99dba093c60","Type":"ContainerStarted","Data":"9999bf161ae85ce205c32b678379429eb70ec26d9c8ea5ab21fb5a97f7d95f12"} Jan 30 13:22:20 crc kubenswrapper[5039]: I0130 13:22:20.550806 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57" podStartSLOduration=33.906554991 podStartE2EDuration="35.550783037s" podCreationTimestamp="2026-01-30 13:21:45 +0000 UTC" firstStartedPulling="2026-01-30 13:22:18.407465333 +0000 UTC m=+1103.068146550" lastFinishedPulling="2026-01-30 13:22:20.051693369 +0000 UTC m=+1104.712374596" observedRunningTime="2026-01-30 13:22:20.542722155 +0000 UTC m=+1105.203403402" watchObservedRunningTime="2026-01-30 13:22:20.550783037 +0000 UTC m=+1105.211464264" Jan 30 13:22:21 crc kubenswrapper[5039]: I0130 13:22:21.315500 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-xg48r" Jan 30 13:22:21 crc kubenswrapper[5039]: I0130 13:22:21.499616 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57" Jan 30 13:22:26 crc kubenswrapper[5039]: I0130 13:22:26.001655 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-n5fbd" Jan 30 13:22:26 crc kubenswrapper[5039]: I0130 13:22:26.102128 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-qf8zq" Jan 30 13:22:26 crc kubenswrapper[5039]: I0130 13:22:26.265680 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-7d4f9d9c9b-j5l2r" Jan 30 13:22:26 crc kubenswrapper[5039]: I0130 13:22:26.303767 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-76cd99594-2gs8r" Jan 30 13:22:26 crc kubenswrapper[5039]: I0130 13:22:26.385601 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-zxtd4" Jan 30 13:22:27 crc kubenswrapper[5039]: I0130 13:22:27.943311 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57" Jan 30 13:22:28 crc kubenswrapper[5039]: I0130 13:22:28.248880 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-557bcbc6d9-5qlfl" Jan 30 13:22:41 crc kubenswrapper[5039]: I0130 13:22:41.282536 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-jtkm9"] Jan 30 13:22:41 crc kubenswrapper[5039]: I0130 13:22:41.284638 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-jtkm9" Jan 30 13:22:41 crc kubenswrapper[5039]: I0130 13:22:41.287088 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 30 13:22:41 crc kubenswrapper[5039]: I0130 13:22:41.287414 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-tz2zn" Jan 30 13:22:41 crc kubenswrapper[5039]: I0130 13:22:41.287569 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 30 13:22:41 crc kubenswrapper[5039]: I0130 13:22:41.288187 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 30 13:22:41 crc kubenswrapper[5039]: I0130 13:22:41.292786 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-jtkm9"] Jan 30 13:22:41 crc kubenswrapper[5039]: I0130 13:22:41.341697 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-9w7m2"] Jan 30 13:22:41 crc kubenswrapper[5039]: I0130 13:22:41.343910 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e84731f4-eb22-429a-9712-7d5f9504ae03-config\") pod \"dnsmasq-dns-675f4bcbfc-jtkm9\" (UID: \"e84731f4-eb22-429a-9712-7d5f9504ae03\") " pod="openstack/dnsmasq-dns-675f4bcbfc-jtkm9" Jan 30 13:22:41 crc kubenswrapper[5039]: I0130 13:22:41.348170 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kr4r\" (UniqueName: \"kubernetes.io/projected/e84731f4-eb22-429a-9712-7d5f9504ae03-kube-api-access-7kr4r\") pod \"dnsmasq-dns-675f4bcbfc-jtkm9\" (UID: \"e84731f4-eb22-429a-9712-7d5f9504ae03\") " pod="openstack/dnsmasq-dns-675f4bcbfc-jtkm9" Jan 30 13:22:41 crc kubenswrapper[5039]: I0130 13:22:41.348788 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-9w7m2" Jan 30 13:22:41 crc kubenswrapper[5039]: I0130 13:22:41.354574 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 30 13:22:41 crc kubenswrapper[5039]: I0130 13:22:41.371445 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-9w7m2"] Jan 30 13:22:41 crc kubenswrapper[5039]: I0130 13:22:41.449359 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6eec043b-32d8-4528-9369-405ae0b99e7e-config\") pod \"dnsmasq-dns-78dd6ddcc-9w7m2\" (UID: \"6eec043b-32d8-4528-9369-405ae0b99e7e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9w7m2" Jan 30 13:22:41 crc kubenswrapper[5039]: I0130 13:22:41.449400 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lsf2\" (UniqueName: \"kubernetes.io/projected/6eec043b-32d8-4528-9369-405ae0b99e7e-kube-api-access-6lsf2\") pod \"dnsmasq-dns-78dd6ddcc-9w7m2\" (UID: \"6eec043b-32d8-4528-9369-405ae0b99e7e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9w7m2" Jan 30 13:22:41 crc kubenswrapper[5039]: I0130 13:22:41.449432 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e84731f4-eb22-429a-9712-7d5f9504ae03-config\") pod \"dnsmasq-dns-675f4bcbfc-jtkm9\" (UID: \"e84731f4-eb22-429a-9712-7d5f9504ae03\") " pod="openstack/dnsmasq-dns-675f4bcbfc-jtkm9" Jan 30 13:22:41 crc kubenswrapper[5039]: I0130 13:22:41.449458 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6eec043b-32d8-4528-9369-405ae0b99e7e-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-9w7m2\" (UID: \"6eec043b-32d8-4528-9369-405ae0b99e7e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9w7m2" Jan 30 13:22:41 crc kubenswrapper[5039]: I0130 13:22:41.449490 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kr4r\" (UniqueName: \"kubernetes.io/projected/e84731f4-eb22-429a-9712-7d5f9504ae03-kube-api-access-7kr4r\") pod \"dnsmasq-dns-675f4bcbfc-jtkm9\" (UID: \"e84731f4-eb22-429a-9712-7d5f9504ae03\") " pod="openstack/dnsmasq-dns-675f4bcbfc-jtkm9" Jan 30 13:22:41 crc kubenswrapper[5039]: I0130 13:22:41.450508 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e84731f4-eb22-429a-9712-7d5f9504ae03-config\") pod \"dnsmasq-dns-675f4bcbfc-jtkm9\" (UID: \"e84731f4-eb22-429a-9712-7d5f9504ae03\") " pod="openstack/dnsmasq-dns-675f4bcbfc-jtkm9" Jan 30 13:22:41 crc kubenswrapper[5039]: I0130 13:22:41.469107 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kr4r\" (UniqueName: \"kubernetes.io/projected/e84731f4-eb22-429a-9712-7d5f9504ae03-kube-api-access-7kr4r\") pod \"dnsmasq-dns-675f4bcbfc-jtkm9\" (UID: \"e84731f4-eb22-429a-9712-7d5f9504ae03\") " pod="openstack/dnsmasq-dns-675f4bcbfc-jtkm9" Jan 30 13:22:41 crc kubenswrapper[5039]: I0130 13:22:41.551128 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6eec043b-32d8-4528-9369-405ae0b99e7e-config\") pod \"dnsmasq-dns-78dd6ddcc-9w7m2\" (UID: \"6eec043b-32d8-4528-9369-405ae0b99e7e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9w7m2" Jan 30 13:22:41 crc kubenswrapper[5039]: I0130 
13:22:41.551205 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lsf2\" (UniqueName: \"kubernetes.io/projected/6eec043b-32d8-4528-9369-405ae0b99e7e-kube-api-access-6lsf2\") pod \"dnsmasq-dns-78dd6ddcc-9w7m2\" (UID: \"6eec043b-32d8-4528-9369-405ae0b99e7e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9w7m2" Jan 30 13:22:41 crc kubenswrapper[5039]: I0130 13:22:41.551685 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6eec043b-32d8-4528-9369-405ae0b99e7e-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-9w7m2\" (UID: \"6eec043b-32d8-4528-9369-405ae0b99e7e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9w7m2" Jan 30 13:22:41 crc kubenswrapper[5039]: I0130 13:22:41.552145 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6eec043b-32d8-4528-9369-405ae0b99e7e-config\") pod \"dnsmasq-dns-78dd6ddcc-9w7m2\" (UID: \"6eec043b-32d8-4528-9369-405ae0b99e7e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9w7m2" Jan 30 13:22:41 crc kubenswrapper[5039]: I0130 13:22:41.552554 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6eec043b-32d8-4528-9369-405ae0b99e7e-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-9w7m2\" (UID: \"6eec043b-32d8-4528-9369-405ae0b99e7e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9w7m2" Jan 30 13:22:41 crc kubenswrapper[5039]: I0130 13:22:41.567041 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lsf2\" (UniqueName: \"kubernetes.io/projected/6eec043b-32d8-4528-9369-405ae0b99e7e-kube-api-access-6lsf2\") pod \"dnsmasq-dns-78dd6ddcc-9w7m2\" (UID: \"6eec043b-32d8-4528-9369-405ae0b99e7e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9w7m2" Jan 30 13:22:41 crc kubenswrapper[5039]: I0130 13:22:41.600033 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-jtkm9" Jan 30 13:22:41 crc kubenswrapper[5039]: I0130 13:22:41.672787 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-9w7m2" Jan 30 13:22:42 crc kubenswrapper[5039]: I0130 13:22:42.047763 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-jtkm9"] Jan 30 13:22:42 crc kubenswrapper[5039]: W0130 13:22:42.055668 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode84731f4_eb22_429a_9712_7d5f9504ae03.slice/crio-d7067efeea966393ec1314af34e694b1769c50addfd2df6d0712711463413ceb WatchSource:0}: Error finding container d7067efeea966393ec1314af34e694b1769c50addfd2df6d0712711463413ceb: Status 404 returned error can't find the container with id d7067efeea966393ec1314af34e694b1769c50addfd2df6d0712711463413ceb Jan 30 13:22:42 crc kubenswrapper[5039]: I0130 13:22:42.057807 5039 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 13:22:42 crc kubenswrapper[5039]: W0130 13:22:42.132214 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6eec043b_32d8_4528_9369_405ae0b99e7e.slice/crio-bdc4d9f675659d6e5ba5a7b6ba6f8b09eff70555eae67e0199cf9dc6a998520a WatchSource:0}: Error finding container bdc4d9f675659d6e5ba5a7b6ba6f8b09eff70555eae67e0199cf9dc6a998520a: Status 404 returned error can't find the container with id bdc4d9f675659d6e5ba5a7b6ba6f8b09eff70555eae67e0199cf9dc6a998520a Jan 30 13:22:42 crc kubenswrapper[5039]: I0130 13:22:42.132462 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-9w7m2"] Jan 30 13:22:42 crc kubenswrapper[5039]: I0130 13:22:42.646908 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-9w7m2" event={"ID":"6eec043b-32d8-4528-9369-405ae0b99e7e","Type":"ContainerStarted","Data":"bdc4d9f675659d6e5ba5a7b6ba6f8b09eff70555eae67e0199cf9dc6a998520a"} Jan 30 13:22:42 crc kubenswrapper[5039]: I0130 13:22:42.650557 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-jtkm9" event={"ID":"e84731f4-eb22-429a-9712-7d5f9504ae03","Type":"ContainerStarted","Data":"d7067efeea966393ec1314af34e694b1769c50addfd2df6d0712711463413ceb"} Jan 30 13:22:43 crc kubenswrapper[5039]: I0130 13:22:43.867824 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-jtkm9"] Jan 30 13:22:43 crc kubenswrapper[5039]: I0130 13:22:43.893879 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-rg6mc"] Jan 30 13:22:43 crc kubenswrapper[5039]: I0130 13:22:43.894983 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-rg6mc" Jan 30 13:22:43 crc kubenswrapper[5039]: I0130 13:22:43.921088 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-rg6mc"] Jan 30 13:22:43 crc kubenswrapper[5039]: I0130 13:22:43.995579 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvpl9\" (UniqueName: \"kubernetes.io/projected/a7a82611-9333-424b-9772-93de691cc191-kube-api-access-zvpl9\") pod \"dnsmasq-dns-666b6646f7-rg6mc\" (UID: \"a7a82611-9333-424b-9772-93de691cc191\") " pod="openstack/dnsmasq-dns-666b6646f7-rg6mc" Jan 30 13:22:43 crc kubenswrapper[5039]: I0130 13:22:43.995661 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7a82611-9333-424b-9772-93de691cc191-config\") pod \"dnsmasq-dns-666b6646f7-rg6mc\" (UID: \"a7a82611-9333-424b-9772-93de691cc191\") " pod="openstack/dnsmasq-dns-666b6646f7-rg6mc" Jan 30 13:22:43 crc kubenswrapper[5039]: I0130 13:22:43.995707 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7a82611-9333-424b-9772-93de691cc191-dns-svc\") pod \"dnsmasq-dns-666b6646f7-rg6mc\" (UID: \"a7a82611-9333-424b-9772-93de691cc191\") " pod="openstack/dnsmasq-dns-666b6646f7-rg6mc" Jan 30 13:22:44 crc kubenswrapper[5039]: I0130 13:22:44.096570 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7a82611-9333-424b-9772-93de691cc191-dns-svc\") pod \"dnsmasq-dns-666b6646f7-rg6mc\" (UID: \"a7a82611-9333-424b-9772-93de691cc191\") " pod="openstack/dnsmasq-dns-666b6646f7-rg6mc" Jan 30 13:22:44 crc kubenswrapper[5039]: I0130 13:22:44.096642 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvpl9\" (UniqueName: \"kubernetes.io/projected/a7a82611-9333-424b-9772-93de691cc191-kube-api-access-zvpl9\") pod \"dnsmasq-dns-666b6646f7-rg6mc\" (UID: \"a7a82611-9333-424b-9772-93de691cc191\") " pod="openstack/dnsmasq-dns-666b6646f7-rg6mc" Jan 30 13:22:44 crc kubenswrapper[5039]: I0130 13:22:44.097511 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7a82611-9333-424b-9772-93de691cc191-config\") pod \"dnsmasq-dns-666b6646f7-rg6mc\" (UID: \"a7a82611-9333-424b-9772-93de691cc191\") " pod="openstack/dnsmasq-dns-666b6646f7-rg6mc" Jan 30 13:22:44 crc kubenswrapper[5039]: I0130 13:22:44.097950 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7a82611-9333-424b-9772-93de691cc191-dns-svc\") pod \"dnsmasq-dns-666b6646f7-rg6mc\" (UID: \"a7a82611-9333-424b-9772-93de691cc191\") " pod="openstack/dnsmasq-dns-666b6646f7-rg6mc" Jan 30 13:22:44 crc kubenswrapper[5039]: I0130 13:22:44.098752 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7a82611-9333-424b-9772-93de691cc191-config\") pod \"dnsmasq-dns-666b6646f7-rg6mc\" (UID: \"a7a82611-9333-424b-9772-93de691cc191\") " pod="openstack/dnsmasq-dns-666b6646f7-rg6mc" Jan 30 13:22:44 crc kubenswrapper[5039]: I0130 13:22:44.118472 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvpl9\" (UniqueName: 
\"kubernetes.io/projected/a7a82611-9333-424b-9772-93de691cc191-kube-api-access-zvpl9\") pod \"dnsmasq-dns-666b6646f7-rg6mc\" (UID: \"a7a82611-9333-424b-9772-93de691cc191\") " pod="openstack/dnsmasq-dns-666b6646f7-rg6mc" Jan 30 13:22:44 crc kubenswrapper[5039]: I0130 13:22:44.173833 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-9w7m2"] Jan 30 13:22:44 crc kubenswrapper[5039]: I0130 13:22:44.187200 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-mw7gw"] Jan 30 13:22:44 crc kubenswrapper[5039]: I0130 13:22:44.188259 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-mw7gw" Jan 30 13:22:44 crc kubenswrapper[5039]: I0130 13:22:44.204722 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-mw7gw"] Jan 30 13:22:44 crc kubenswrapper[5039]: I0130 13:22:44.215692 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-rg6mc" Jan 30 13:22:44 crc kubenswrapper[5039]: I0130 13:22:44.302658 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5cc8ebd-9337-4caa-89f3-546dd8bc31de-config\") pod \"dnsmasq-dns-57d769cc4f-mw7gw\" (UID: \"f5cc8ebd-9337-4caa-89f3-546dd8bc31de\") " pod="openstack/dnsmasq-dns-57d769cc4f-mw7gw" Jan 30 13:22:44 crc kubenswrapper[5039]: I0130 13:22:44.306194 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5cc8ebd-9337-4caa-89f3-546dd8bc31de-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-mw7gw\" (UID: \"f5cc8ebd-9337-4caa-89f3-546dd8bc31de\") " pod="openstack/dnsmasq-dns-57d769cc4f-mw7gw" Jan 30 13:22:44 crc kubenswrapper[5039]: I0130 13:22:44.306257 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqprj\" (UniqueName: \"kubernetes.io/projected/f5cc8ebd-9337-4caa-89f3-546dd8bc31de-kube-api-access-hqprj\") pod \"dnsmasq-dns-57d769cc4f-mw7gw\" (UID: \"f5cc8ebd-9337-4caa-89f3-546dd8bc31de\") " pod="openstack/dnsmasq-dns-57d769cc4f-mw7gw" Jan 30 13:22:44 crc kubenswrapper[5039]: I0130 13:22:44.409874 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5cc8ebd-9337-4caa-89f3-546dd8bc31de-config\") pod \"dnsmasq-dns-57d769cc4f-mw7gw\" (UID: \"f5cc8ebd-9337-4caa-89f3-546dd8bc31de\") " pod="openstack/dnsmasq-dns-57d769cc4f-mw7gw" Jan 30 13:22:44 crc kubenswrapper[5039]: I0130 13:22:44.410069 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5cc8ebd-9337-4caa-89f3-546dd8bc31de-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-mw7gw\" (UID: \"f5cc8ebd-9337-4caa-89f3-546dd8bc31de\") " pod="openstack/dnsmasq-dns-57d769cc4f-mw7gw" Jan 30 13:22:44 crc kubenswrapper[5039]: I0130 13:22:44.410117 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqprj\" (UniqueName: \"kubernetes.io/projected/f5cc8ebd-9337-4caa-89f3-546dd8bc31de-kube-api-access-hqprj\") pod \"dnsmasq-dns-57d769cc4f-mw7gw\" (UID: \"f5cc8ebd-9337-4caa-89f3-546dd8bc31de\") " pod="openstack/dnsmasq-dns-57d769cc4f-mw7gw" Jan 30 13:22:44 crc kubenswrapper[5039]: I0130 13:22:44.410841 5039 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5cc8ebd-9337-4caa-89f3-546dd8bc31de-config\") pod \"dnsmasq-dns-57d769cc4f-mw7gw\" (UID: \"f5cc8ebd-9337-4caa-89f3-546dd8bc31de\") " pod="openstack/dnsmasq-dns-57d769cc4f-mw7gw" Jan 30 13:22:44 crc kubenswrapper[5039]: I0130 13:22:44.411493 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5cc8ebd-9337-4caa-89f3-546dd8bc31de-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-mw7gw\" (UID: \"f5cc8ebd-9337-4caa-89f3-546dd8bc31de\") " pod="openstack/dnsmasq-dns-57d769cc4f-mw7gw" Jan 30 13:22:44 crc kubenswrapper[5039]: I0130 13:22:44.434575 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqprj\" (UniqueName: \"kubernetes.io/projected/f5cc8ebd-9337-4caa-89f3-546dd8bc31de-kube-api-access-hqprj\") pod \"dnsmasq-dns-57d769cc4f-mw7gw\" (UID: \"f5cc8ebd-9337-4caa-89f3-546dd8bc31de\") " pod="openstack/dnsmasq-dns-57d769cc4f-mw7gw" Jan 30 13:22:44 crc kubenswrapper[5039]: I0130 13:22:44.507825 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-rg6mc"] Jan 30 13:22:44 crc kubenswrapper[5039]: I0130 13:22:44.517351 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-mw7gw" Jan 30 13:22:44 crc kubenswrapper[5039]: W0130 13:22:44.538722 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7a82611_9333_424b_9772_93de691cc191.slice/crio-ec28fc053759e3435832b6d3a98324fe0a14f3b97ec66e5e78b475bb42e38962 WatchSource:0}: Error finding container ec28fc053759e3435832b6d3a98324fe0a14f3b97ec66e5e78b475bb42e38962: Status 404 returned error can't find the container with id ec28fc053759e3435832b6d3a98324fe0a14f3b97ec66e5e78b475bb42e38962 Jan 30 13:22:44 crc kubenswrapper[5039]: I0130 13:22:44.670279 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-rg6mc" event={"ID":"a7a82611-9333-424b-9772-93de691cc191","Type":"ContainerStarted","Data":"ec28fc053759e3435832b6d3a98324fe0a14f3b97ec66e5e78b475bb42e38962"} Jan 30 13:22:44 crc kubenswrapper[5039]: I0130 13:22:44.958939 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-mw7gw"] Jan 30 13:22:44 crc kubenswrapper[5039]: W0130 13:22:44.960823 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5cc8ebd_9337_4caa_89f3_546dd8bc31de.slice/crio-cbee84a8a8c31e3f1c7c486a0883633fe00d06e8b7c84d404fcfa13ba6ce91b2 WatchSource:0}: Error finding container cbee84a8a8c31e3f1c7c486a0883633fe00d06e8b7c84d404fcfa13ba6ce91b2: Status 404 returned error can't find the container with id cbee84a8a8c31e3f1c7c486a0883633fe00d06e8b7c84d404fcfa13ba6ce91b2 Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.031688 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.033202 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.041891 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.042124 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.042241 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.043243 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.043890 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-6qqhf" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.044050 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.044231 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.047131 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.123376 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/31674257-f143-40ab-97b9-dbf3153277c3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.123460 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.123487 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/31674257-f143-40ab-97b9-dbf3153277c3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.123510 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/31674257-f143-40ab-97b9-dbf3153277c3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.123540 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/31674257-f143-40ab-97b9-dbf3153277c3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.123583 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/31674257-f143-40ab-97b9-dbf3153277c3-config-data\") pod \"rabbitmq-server-0\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.123634 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/31674257-f143-40ab-97b9-dbf3153277c3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.123658 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/31674257-f143-40ab-97b9-dbf3153277c3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.123711 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/31674257-f143-40ab-97b9-dbf3153277c3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.123748 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/31674257-f143-40ab-97b9-dbf3153277c3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.123821 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pg6zc\" (UniqueName: \"kubernetes.io/projected/31674257-f143-40ab-97b9-dbf3153277c3-kube-api-access-pg6zc\") pod \"rabbitmq-server-0\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.224962 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/31674257-f143-40ab-97b9-dbf3153277c3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.225311 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/31674257-f143-40ab-97b9-dbf3153277c3-config-data\") pod \"rabbitmq-server-0\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.225353 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/31674257-f143-40ab-97b9-dbf3153277c3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.225384 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/31674257-f143-40ab-97b9-dbf3153277c3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " 
pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.225421 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/31674257-f143-40ab-97b9-dbf3153277c3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.225444 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/31674257-f143-40ab-97b9-dbf3153277c3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.225501 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pg6zc\" (UniqueName: \"kubernetes.io/projected/31674257-f143-40ab-97b9-dbf3153277c3-kube-api-access-pg6zc\") pod \"rabbitmq-server-0\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.225543 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/31674257-f143-40ab-97b9-dbf3153277c3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.225583 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.225602 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/31674257-f143-40ab-97b9-dbf3153277c3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.225625 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/31674257-f143-40ab-97b9-dbf3153277c3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.225899 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/31674257-f143-40ab-97b9-dbf3153277c3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.226225 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/31674257-f143-40ab-97b9-dbf3153277c3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.228190 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/31674257-f143-40ab-97b9-dbf3153277c3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.228449 5039 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.228736 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/31674257-f143-40ab-97b9-dbf3153277c3-config-data\") pod \"rabbitmq-server-0\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.229112 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/31674257-f143-40ab-97b9-dbf3153277c3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.233273 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/31674257-f143-40ab-97b9-dbf3153277c3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.233450 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/31674257-f143-40ab-97b9-dbf3153277c3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.233683 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/31674257-f143-40ab-97b9-dbf3153277c3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.234444 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/31674257-f143-40ab-97b9-dbf3153277c3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.243326 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pg6zc\" (UniqueName: \"kubernetes.io/projected/31674257-f143-40ab-97b9-dbf3153277c3-kube-api-access-pg6zc\") pod \"rabbitmq-server-0\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.261972 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.305814 5039 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.307857 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.311590 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.311800 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.312622 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.313159 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.313419 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.313421 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-ppg7v" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.313546 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.321805 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.329562 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/106954f5-3ea7-4564-8479-407ef02320b7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.329606 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/106954f5-3ea7-4564-8479-407ef02320b7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.329624 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29m46\" (UniqueName: \"kubernetes.io/projected/106954f5-3ea7-4564-8479-407ef02320b7-kube-api-access-29m46\") pod \"rabbitmq-cell1-server-0\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.329648 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/106954f5-3ea7-4564-8479-407ef02320b7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.329667 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/106954f5-3ea7-4564-8479-407ef02320b7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"106954f5-3ea7-4564-8479-407ef02320b7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.329697 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/106954f5-3ea7-4564-8479-407ef02320b7-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.329736 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/106954f5-3ea7-4564-8479-407ef02320b7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.329763 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.329781 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/106954f5-3ea7-4564-8479-407ef02320b7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.329796 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/106954f5-3ea7-4564-8479-407ef02320b7-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.329832 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/106954f5-3ea7-4564-8479-407ef02320b7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.352810 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.431238 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/106954f5-3ea7-4564-8479-407ef02320b7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.431507 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/106954f5-3ea7-4564-8479-407ef02320b7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.431531 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29m46\" (UniqueName: \"kubernetes.io/projected/106954f5-3ea7-4564-8479-407ef02320b7-kube-api-access-29m46\") pod \"rabbitmq-cell1-server-0\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.431558 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/106954f5-3ea7-4564-8479-407ef02320b7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.431579 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/106954f5-3ea7-4564-8479-407ef02320b7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.431608 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/106954f5-3ea7-4564-8479-407ef02320b7-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.431622 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/106954f5-3ea7-4564-8479-407ef02320b7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.431650 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.431668 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/106954f5-3ea7-4564-8479-407ef02320b7-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.431685 5039 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/106954f5-3ea7-4564-8479-407ef02320b7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.431724 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/106954f5-3ea7-4564-8479-407ef02320b7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.431958 5039 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.432828 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/106954f5-3ea7-4564-8479-407ef02320b7-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.438353 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/106954f5-3ea7-4564-8479-407ef02320b7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.439099 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/106954f5-3ea7-4564-8479-407ef02320b7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.439306 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/106954f5-3ea7-4564-8479-407ef02320b7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.440447 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/106954f5-3ea7-4564-8479-407ef02320b7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.441364 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/106954f5-3ea7-4564-8479-407ef02320b7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.445954 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/106954f5-3ea7-4564-8479-407ef02320b7-rabbitmq-confd\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.449575 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/106954f5-3ea7-4564-8479-407ef02320b7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.450509 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/106954f5-3ea7-4564-8479-407ef02320b7-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.472909 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29m46\" (UniqueName: \"kubernetes.io/projected/106954f5-3ea7-4564-8479-407ef02320b7-kube-api-access-29m46\") pod \"rabbitmq-cell1-server-0\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.483664 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.635121 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.677873 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-mw7gw" event={"ID":"f5cc8ebd-9337-4caa-89f3-546dd8bc31de","Type":"ContainerStarted","Data":"cbee84a8a8c31e3f1c7c486a0883633fe00d06e8b7c84d404fcfa13ba6ce91b2"} Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.847232 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 13:22:45 crc kubenswrapper[5039]: W0130 13:22:45.856795 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31674257_f143_40ab_97b9_dbf3153277c3.slice/crio-0455cb70a68fa31fb520f1784b3fb65cb703702fa90929d1c8b1ccfdae2a0976 WatchSource:0}: Error finding container 0455cb70a68fa31fb520f1784b3fb65cb703702fa90929d1c8b1ccfdae2a0976: Status 404 returned error can't find the container with id 0455cb70a68fa31fb520f1784b3fb65cb703702fa90929d1c8b1ccfdae2a0976 Jan 30 13:22:46 crc kubenswrapper[5039]: W0130 13:22:46.046225 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod106954f5_3ea7_4564_8479_407ef02320b7.slice/crio-20e38f91b95ff4f185e07d12d627c36dd1c6ecc82a40927b2c84c3195312ed0d WatchSource:0}: Error finding container 20e38f91b95ff4f185e07d12d627c36dd1c6ecc82a40927b2c84c3195312ed0d: Status 404 returned error can't find the container with id 20e38f91b95ff4f185e07d12d627c36dd1c6ecc82a40927b2c84c3195312ed0d Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.047332 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.549246 5039 
Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.549246 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.551121 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.556060 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.556526 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.556694 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-vp98d"
Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.559790 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.566584 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.573451 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.656325 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ffe59186-82c9-4825-98af-a345318afc40-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ffe59186-82c9-4825-98af-a345318afc40\") " pod="openstack/openstack-galera-0"
Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.656387 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmb2c\" (UniqueName: \"kubernetes.io/projected/ffe59186-82c9-4825-98af-a345318afc40-kube-api-access-kmb2c\") pod \"openstack-galera-0\" (UID: \"ffe59186-82c9-4825-98af-a345318afc40\") " pod="openstack/openstack-galera-0"
Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.656417 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ffe59186-82c9-4825-98af-a345318afc40-config-data-default\") pod \"openstack-galera-0\" (UID: \"ffe59186-82c9-4825-98af-a345318afc40\") " pod="openstack/openstack-galera-0"
Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.656496 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"ffe59186-82c9-4825-98af-a345318afc40\") " pod="openstack/openstack-galera-0"
Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.656651 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ffe59186-82c9-4825-98af-a345318afc40-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ffe59186-82c9-4825-98af-a345318afc40\") " pod="openstack/openstack-galera-0"
Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.656894 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffe59186-82c9-4825-98af-a345318afc40-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"ffe59186-82c9-4825-98af-a345318afc40\") " pod="openstack/openstack-galera-0"
Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.656995 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffe59186-82c9-4825-98af-a345318afc40-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"ffe59186-82c9-4825-98af-a345318afc40\") " pod="openstack/openstack-galera-0"
Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.657071 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ffe59186-82c9-4825-98af-a345318afc40-kolla-config\") pod \"openstack-galera-0\" (UID: \"ffe59186-82c9-4825-98af-a345318afc40\") " pod="openstack/openstack-galera-0"
Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.686146 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"31674257-f143-40ab-97b9-dbf3153277c3","Type":"ContainerStarted","Data":"0455cb70a68fa31fb520f1784b3fb65cb703702fa90929d1c8b1ccfdae2a0976"}
Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.687404 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"106954f5-3ea7-4564-8479-407ef02320b7","Type":"ContainerStarted","Data":"20e38f91b95ff4f185e07d12d627c36dd1c6ecc82a40927b2c84c3195312ed0d"}
Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.758786 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ffe59186-82c9-4825-98af-a345318afc40-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ffe59186-82c9-4825-98af-a345318afc40\") " pod="openstack/openstack-galera-0"
Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.758889 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffe59186-82c9-4825-98af-a345318afc40-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"ffe59186-82c9-4825-98af-a345318afc40\") " pod="openstack/openstack-galera-0"
Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.758953 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffe59186-82c9-4825-98af-a345318afc40-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"ffe59186-82c9-4825-98af-a345318afc40\") " pod="openstack/openstack-galera-0"
Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.758979 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ffe59186-82c9-4825-98af-a345318afc40-kolla-config\") pod \"openstack-galera-0\" (UID: \"ffe59186-82c9-4825-98af-a345318afc40\") " pod="openstack/openstack-galera-0"
Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.759056 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ffe59186-82c9-4825-98af-a345318afc40-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ffe59186-82c9-4825-98af-a345318afc40\") " pod="openstack/openstack-galera-0"
Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.759085 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmb2c\" (UniqueName: \"kubernetes.io/projected/ffe59186-82c9-4825-98af-a345318afc40-kube-api-access-kmb2c\") pod \"openstack-galera-0\" (UID: \"ffe59186-82c9-4825-98af-a345318afc40\") " pod="openstack/openstack-galera-0"
Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.759106 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ffe59186-82c9-4825-98af-a345318afc40-config-data-default\") pod \"openstack-galera-0\" (UID: \"ffe59186-82c9-4825-98af-a345318afc40\") " pod="openstack/openstack-galera-0"
Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.759195 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"ffe59186-82c9-4825-98af-a345318afc40\") " pod="openstack/openstack-galera-0"
Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.759580 5039 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"ffe59186-82c9-4825-98af-a345318afc40\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/openstack-galera-0"
Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.760182 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ffe59186-82c9-4825-98af-a345318afc40-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ffe59186-82c9-4825-98af-a345318afc40\") " pod="openstack/openstack-galera-0"
Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.760752 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ffe59186-82c9-4825-98af-a345318afc40-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ffe59186-82c9-4825-98af-a345318afc40\") " pod="openstack/openstack-galera-0"
Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.760951 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ffe59186-82c9-4825-98af-a345318afc40-kolla-config\") pod \"openstack-galera-0\" (UID: \"ffe59186-82c9-4825-98af-a345318afc40\") " pod="openstack/openstack-galera-0"
Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.761281 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ffe59186-82c9-4825-98af-a345318afc40-config-data-default\") pod \"openstack-galera-0\" (UID: \"ffe59186-82c9-4825-98af-a345318afc40\") " pod="openstack/openstack-galera-0"
Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.765811 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffe59186-82c9-4825-98af-a345318afc40-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"ffe59186-82c9-4825-98af-a345318afc40\") " pod="openstack/openstack-galera-0"
Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.769031 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffe59186-82c9-4825-98af-a345318afc40-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"ffe59186-82c9-4825-98af-a345318afc40\") " pod="openstack/openstack-galera-0"
Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.782476 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmb2c\" (UniqueName: \"kubernetes.io/projected/ffe59186-82c9-4825-98af-a345318afc40-kube-api-access-kmb2c\") pod \"openstack-galera-0\" (UID: \"ffe59186-82c9-4825-98af-a345318afc40\") " pod="openstack/openstack-galera-0"
Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.789666 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"ffe59186-82c9-4825-98af-a345318afc40\") " pod="openstack/openstack-galera-0"
Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.875143 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Jan 30 13:22:47 crc kubenswrapper[5039]: I0130 13:22:47.191891 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Jan 30 13:22:47 crc kubenswrapper[5039]: I0130 13:22:47.695115 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ffe59186-82c9-4825-98af-a345318afc40","Type":"ContainerStarted","Data":"fc9e57a17f46c28bd4ab8c2bc3ffa3503691a12bb69fc56089bb8a446d4b34d5"}
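Every stateful pod in this log binds one of the pre-provisioned local-storageNN-crc volumes backed by /mnt/openstack/pvNN on the single CRC node, which is why each mount sequence contains a MountVolume.MountDevice line with a device mount path before the final SetUp. A sketch of what such a local PersistentVolume plausibly looks like follows; the capacity, access mode, and node label value are assumptions, as only the PV name and backing path appear in the log.

package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Sketch of a pre-provisioned local PersistentVolume matching the
// local-storage10-crc / "/mnt/openstack/pv10" pair in the log above.
func localPV() *corev1.PersistentVolume {
	return &corev1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: "local-storage10-crc"},
		Spec: corev1.PersistentVolumeSpec{
			Capacity: corev1.ResourceList{
				corev1.ResourceStorage: resource.MustParse("10Gi"), // assumed size
			},
			AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			PersistentVolumeSource: corev1.PersistentVolumeSource{
				Local: &corev1.LocalVolumeSource{Path: "/mnt/openstack/pv10"},
			},
			// Local PVs require node affinity; "crc" is the node name from the log prefix.
			NodeAffinity: &corev1.VolumeNodeAffinity{
				Required: &corev1.NodeSelector{
					NodeSelectorTerms: []corev1.NodeSelectorTerm{{
						MatchExpressions: []corev1.NodeSelectorRequirement{{
							Key:      "kubernetes.io/hostname",
							Operator: corev1.NodeSelectorOpIn,
							Values:   []string{"crc"},
						}},
					}},
				},
			},
		},
	}
}

func main() { _ = localPV() }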
\"kubernetes.io/secret/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\") " pod="openstack/openstack-cell1-galera-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.179650 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\") " pod="openstack/openstack-cell1-galera-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.179739 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\") " pod="openstack/openstack-cell1-galera-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.179948 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\") " pod="openstack/openstack-cell1-galera-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.180032 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\") " pod="openstack/openstack-cell1-galera-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.180263 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\") " pod="openstack/openstack-cell1-galera-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.281852 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\") " pod="openstack/openstack-cell1-galera-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.281912 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8lh9\" (UniqueName: \"kubernetes.io/projected/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-kube-api-access-n8lh9\") pod \"openstack-cell1-galera-0\" (UID: \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\") " pod="openstack/openstack-cell1-galera-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.281954 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\") " pod="openstack/openstack-cell1-galera-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.281980 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\") " pod="openstack/openstack-cell1-galera-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.282022 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\") " pod="openstack/openstack-cell1-galera-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.282049 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\") " pod="openstack/openstack-cell1-galera-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.282101 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\") " pod="openstack/openstack-cell1-galera-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.282134 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\") " pod="openstack/openstack-cell1-galera-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.282297 5039 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/openstack-cell1-galera-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.282637 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\") " pod="openstack/openstack-cell1-galera-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.283005 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\") " pod="openstack/openstack-cell1-galera-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.283193 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\") " pod="openstack/openstack-cell1-galera-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.284154 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-operator-scripts\") pod 
\"openstack-cell1-galera-0\" (UID: \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\") " pod="openstack/openstack-cell1-galera-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.299154 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\") " pod="openstack/openstack-cell1-galera-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.299228 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\") " pod="openstack/openstack-cell1-galera-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.306216 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\") " pod="openstack/openstack-cell1-galera-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.311049 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8lh9\" (UniqueName: \"kubernetes.io/projected/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-kube-api-access-n8lh9\") pod \"openstack-cell1-galera-0\" (UID: \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\") " pod="openstack/openstack-cell1-galera-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.375511 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.380972 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.385396 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.386236 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-tjcn8" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.392238 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.396649 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.401599 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.484590 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c304bfee-961f-403c-a998-de879eedf9c9-combined-ca-bundle\") pod \"memcached-0\" (UID: \"c304bfee-961f-403c-a998-de879eedf9c9\") " pod="openstack/memcached-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.484743 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c304bfee-961f-403c-a998-de879eedf9c9-config-data\") pod \"memcached-0\" (UID: \"c304bfee-961f-403c-a998-de879eedf9c9\") " pod="openstack/memcached-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.484789 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmt76\" (UniqueName: \"kubernetes.io/projected/c304bfee-961f-403c-a998-de879eedf9c9-kube-api-access-cmt76\") pod \"memcached-0\" (UID: \"c304bfee-961f-403c-a998-de879eedf9c9\") " pod="openstack/memcached-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.484809 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c304bfee-961f-403c-a998-de879eedf9c9-kolla-config\") pod \"memcached-0\" (UID: \"c304bfee-961f-403c-a998-de879eedf9c9\") " pod="openstack/memcached-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.484890 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/c304bfee-961f-403c-a998-de879eedf9c9-memcached-tls-certs\") pod \"memcached-0\" (UID: \"c304bfee-961f-403c-a998-de879eedf9c9\") " pod="openstack/memcached-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.586374 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c304bfee-961f-403c-a998-de879eedf9c9-combined-ca-bundle\") pod \"memcached-0\" (UID: \"c304bfee-961f-403c-a998-de879eedf9c9\") " pod="openstack/memcached-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.586752 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c304bfee-961f-403c-a998-de879eedf9c9-config-data\") pod \"memcached-0\" (UID: \"c304bfee-961f-403c-a998-de879eedf9c9\") " pod="openstack/memcached-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.586773 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmt76\" (UniqueName: \"kubernetes.io/projected/c304bfee-961f-403c-a998-de879eedf9c9-kube-api-access-cmt76\") pod \"memcached-0\" (UID: \"c304bfee-961f-403c-a998-de879eedf9c9\") " pod="openstack/memcached-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.586812 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c304bfee-961f-403c-a998-de879eedf9c9-kolla-config\") pod \"memcached-0\" (UID: \"c304bfee-961f-403c-a998-de879eedf9c9\") " pod="openstack/memcached-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.586835 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/c304bfee-961f-403c-a998-de879eedf9c9-memcached-tls-certs\") pod \"memcached-0\" (UID: \"c304bfee-961f-403c-a998-de879eedf9c9\") " pod="openstack/memcached-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.587903 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c304bfee-961f-403c-a998-de879eedf9c9-config-data\") pod \"memcached-0\" (UID: \"c304bfee-961f-403c-a998-de879eedf9c9\") " pod="openstack/memcached-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.588346 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c304bfee-961f-403c-a998-de879eedf9c9-kolla-config\") pod \"memcached-0\" (UID: \"c304bfee-961f-403c-a998-de879eedf9c9\") " pod="openstack/memcached-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.590963 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/c304bfee-961f-403c-a998-de879eedf9c9-memcached-tls-certs\") pod \"memcached-0\" (UID: \"c304bfee-961f-403c-a998-de879eedf9c9\") " pod="openstack/memcached-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.594789 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c304bfee-961f-403c-a998-de879eedf9c9-combined-ca-bundle\") pod \"memcached-0\" (UID: \"c304bfee-961f-403c-a998-de879eedf9c9\") " pod="openstack/memcached-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.607624 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmt76\" (UniqueName: \"kubernetes.io/projected/c304bfee-961f-403c-a998-de879eedf9c9-kube-api-access-cmt76\") pod \"memcached-0\" (UID: \"c304bfee-961f-403c-a998-de879eedf9c9\") " pod="openstack/memcached-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.698841 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 30 13:22:50 crc kubenswrapper[5039]: I0130 13:22:50.105943 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 13:22:50 crc kubenswrapper[5039]: I0130 13:22:50.106819 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 13:22:50 crc kubenswrapper[5039]: I0130 13:22:50.110513 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-hkghj" Jan 30 13:22:50 crc kubenswrapper[5039]: I0130 13:22:50.117551 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 13:22:50 crc kubenswrapper[5039]: I0130 13:22:50.215422 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpzvc\" (UniqueName: \"kubernetes.io/projected/644a9c77-bad0-41fe-a6ee-8bb5e6580f87-kube-api-access-qpzvc\") pod \"kube-state-metrics-0\" (UID: \"644a9c77-bad0-41fe-a6ee-8bb5e6580f87\") " pod="openstack/kube-state-metrics-0" Jan 30 13:22:50 crc kubenswrapper[5039]: I0130 13:22:50.317317 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpzvc\" (UniqueName: \"kubernetes.io/projected/644a9c77-bad0-41fe-a6ee-8bb5e6580f87-kube-api-access-qpzvc\") pod \"kube-state-metrics-0\" (UID: \"644a9c77-bad0-41fe-a6ee-8bb5e6580f87\") " pod="openstack/kube-state-metrics-0" Jan 30 13:22:50 crc kubenswrapper[5039]: I0130 13:22:50.352691 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpzvc\" (UniqueName: \"kubernetes.io/projected/644a9c77-bad0-41fe-a6ee-8bb5e6580f87-kube-api-access-qpzvc\") pod \"kube-state-metrics-0\" (UID: \"644a9c77-bad0-41fe-a6ee-8bb5e6580f87\") " pod="openstack/kube-state-metrics-0" Jan 30 13:22:50 crc kubenswrapper[5039]: I0130 13:22:50.473849 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.436569 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.438181 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.440438 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.440832 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.440981 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.441309 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.441488 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-zjq6x" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.448056 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.611677 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc1a05aa-7803-43a1-9525-fd135af4323a-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bc1a05aa-7803-43a1-9525-fd135af4323a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.611735 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"bc1a05aa-7803-43a1-9525-fd135af4323a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.611763 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bc1a05aa-7803-43a1-9525-fd135af4323a-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"bc1a05aa-7803-43a1-9525-fd135af4323a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.611806 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb5mr\" (UniqueName: \"kubernetes.io/projected/bc1a05aa-7803-43a1-9525-fd135af4323a-kube-api-access-kb5mr\") pod \"ovsdbserver-nb-0\" (UID: \"bc1a05aa-7803-43a1-9525-fd135af4323a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.611836 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc1a05aa-7803-43a1-9525-fd135af4323a-config\") pod \"ovsdbserver-nb-0\" (UID: \"bc1a05aa-7803-43a1-9525-fd135af4323a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.611860 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bc1a05aa-7803-43a1-9525-fd135af4323a-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"bc1a05aa-7803-43a1-9525-fd135af4323a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.611950 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bc1a05aa-7803-43a1-9525-fd135af4323a-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"bc1a05aa-7803-43a1-9525-fd135af4323a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.612107 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc1a05aa-7803-43a1-9525-fd135af4323a-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bc1a05aa-7803-43a1-9525-fd135af4323a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.714782 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"bc1a05aa-7803-43a1-9525-fd135af4323a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.714838 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bc1a05aa-7803-43a1-9525-fd135af4323a-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"bc1a05aa-7803-43a1-9525-fd135af4323a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.714887 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kb5mr\" (UniqueName: \"kubernetes.io/projected/bc1a05aa-7803-43a1-9525-fd135af4323a-kube-api-access-kb5mr\") pod \"ovsdbserver-nb-0\" (UID: \"bc1a05aa-7803-43a1-9525-fd135af4323a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.714927 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc1a05aa-7803-43a1-9525-fd135af4323a-config\") pod \"ovsdbserver-nb-0\" (UID: \"bc1a05aa-7803-43a1-9525-fd135af4323a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.714953 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bc1a05aa-7803-43a1-9525-fd135af4323a-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"bc1a05aa-7803-43a1-9525-fd135af4323a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.714995 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc1a05aa-7803-43a1-9525-fd135af4323a-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"bc1a05aa-7803-43a1-9525-fd135af4323a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.715068 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc1a05aa-7803-43a1-9525-fd135af4323a-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bc1a05aa-7803-43a1-9525-fd135af4323a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.715123 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc1a05aa-7803-43a1-9525-fd135af4323a-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bc1a05aa-7803-43a1-9525-fd135af4323a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 
13:22:55.715222 5039 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"bc1a05aa-7803-43a1-9525-fd135af4323a\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/ovsdbserver-nb-0" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.716301 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bc1a05aa-7803-43a1-9525-fd135af4323a-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"bc1a05aa-7803-43a1-9525-fd135af4323a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.716810 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bc1a05aa-7803-43a1-9525-fd135af4323a-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"bc1a05aa-7803-43a1-9525-fd135af4323a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.717573 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc1a05aa-7803-43a1-9525-fd135af4323a-config\") pod \"ovsdbserver-nb-0\" (UID: \"bc1a05aa-7803-43a1-9525-fd135af4323a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.723265 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc1a05aa-7803-43a1-9525-fd135af4323a-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bc1a05aa-7803-43a1-9525-fd135af4323a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.723656 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc1a05aa-7803-43a1-9525-fd135af4323a-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"bc1a05aa-7803-43a1-9525-fd135af4323a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.724382 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc1a05aa-7803-43a1-9525-fd135af4323a-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bc1a05aa-7803-43a1-9525-fd135af4323a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.739303 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"bc1a05aa-7803-43a1-9525-fd135af4323a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.741768 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kb5mr\" (UniqueName: \"kubernetes.io/projected/bc1a05aa-7803-43a1-9525-fd135af4323a-kube-api-access-kb5mr\") pod \"ovsdbserver-nb-0\" (UID: \"bc1a05aa-7803-43a1-9525-fd135af4323a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.767771 5039 util.go:30] "No sandbox for pod can be found. 
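
[Annotation] Every entry above carries klog's glog-style header after the journald prefix: a severity letter (I/W/E/F), MMDD date, wall-clock time, the kubelet PID (5039 throughout), and the emitting source file:line, e.g. I0130 13:22:55.715222 5039 operation_generator.go:580]. A minimal parsing sketch in Go, stdlib only; the regexp and field names are ours, not kubelet's:

package main

import (
	"fmt"
	"regexp"
)

// klogHeader matches the glog-style prefix on each entry:
// Lmmdd hh:mm:ss.uuuuuu PID file:line] message
// (unanchored, so the journald "Jan 30 ... kubenswrapper[5039]: " prefix
// can stay in front).
var klogHeader = regexp.MustCompile(
	`([IWEF])(\d{2})(\d{2}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+)\s+([\w./-]+):(\d+)\] (.*)$`)

func main() {
	line := `Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.715222 5039 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume ..."`
	m := klogHeader.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("not a klog line")
		return
	}
	fmt.Printf("severity=%s date=%s-%s time=%s pid=%s source=%s:%s\nmessage=%s\n",
		m[1], m[2], m[3], m[4], m[5], m[6], m[7], m[8])
}

Grouping by the source field (reconciler_common.go, operation_generator.go, kubelet.go, ...) is a quick way to separate the volume-reconciler traffic above from the sync-loop events further down.
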
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.862293 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-sqvrc"] Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.863613 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sqvrc" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.867291 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.867467 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.867653 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-pqc2p" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.869670 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-z6nkm"] Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.871648 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-z6nkm" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.885649 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sqvrc"] Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.897136 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-z6nkm"] Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.022608 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d4aa0600-fb12-4641-96a3-26cb56853bd3-scripts\") pod \"ovn-controller-sqvrc\" (UID: \"d4aa0600-fb12-4641-96a3-26cb56853bd3\") " pod="openstack/ovn-controller-sqvrc" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.022667 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/953eeac5-b943-4036-be33-58eb347c04ef-var-run\") pod \"ovn-controller-ovs-z6nkm\" (UID: \"953eeac5-b943-4036-be33-58eb347c04ef\") " pod="openstack/ovn-controller-ovs-z6nkm" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.022690 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/953eeac5-b943-4036-be33-58eb347c04ef-var-lib\") pod \"ovn-controller-ovs-z6nkm\" (UID: \"953eeac5-b943-4036-be33-58eb347c04ef\") " pod="openstack/ovn-controller-ovs-z6nkm" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.022735 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/953eeac5-b943-4036-be33-58eb347c04ef-etc-ovs\") pod \"ovn-controller-ovs-z6nkm\" (UID: \"953eeac5-b943-4036-be33-58eb347c04ef\") " pod="openstack/ovn-controller-ovs-z6nkm" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.022787 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d4aa0600-fb12-4641-96a3-26cb56853bd3-var-run\") pod \"ovn-controller-sqvrc\" (UID: \"d4aa0600-fb12-4641-96a3-26cb56853bd3\") " pod="openstack/ovn-controller-sqvrc" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.022827 5039 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d4aa0600-fb12-4641-96a3-26cb56853bd3-var-run-ovn\") pod \"ovn-controller-sqvrc\" (UID: \"d4aa0600-fb12-4641-96a3-26cb56853bd3\") " pod="openstack/ovn-controller-sqvrc" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.022844 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/953eeac5-b943-4036-be33-58eb347c04ef-var-log\") pod \"ovn-controller-ovs-z6nkm\" (UID: \"953eeac5-b943-4036-be33-58eb347c04ef\") " pod="openstack/ovn-controller-ovs-z6nkm" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.022898 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rv9n\" (UniqueName: \"kubernetes.io/projected/d4aa0600-fb12-4641-96a3-26cb56853bd3-kube-api-access-9rv9n\") pod \"ovn-controller-sqvrc\" (UID: \"d4aa0600-fb12-4641-96a3-26cb56853bd3\") " pod="openstack/ovn-controller-sqvrc" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.022914 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/953eeac5-b943-4036-be33-58eb347c04ef-scripts\") pod \"ovn-controller-ovs-z6nkm\" (UID: \"953eeac5-b943-4036-be33-58eb347c04ef\") " pod="openstack/ovn-controller-ovs-z6nkm" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.022965 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4aa0600-fb12-4641-96a3-26cb56853bd3-ovn-controller-tls-certs\") pod \"ovn-controller-sqvrc\" (UID: \"d4aa0600-fb12-4641-96a3-26cb56853bd3\") " pod="openstack/ovn-controller-sqvrc" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.023004 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4aa0600-fb12-4641-96a3-26cb56853bd3-combined-ca-bundle\") pod \"ovn-controller-sqvrc\" (UID: \"d4aa0600-fb12-4641-96a3-26cb56853bd3\") " pod="openstack/ovn-controller-sqvrc" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.023072 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d4aa0600-fb12-4641-96a3-26cb56853bd3-var-log-ovn\") pod \"ovn-controller-sqvrc\" (UID: \"d4aa0600-fb12-4641-96a3-26cb56853bd3\") " pod="openstack/ovn-controller-sqvrc" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.023099 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mv74\" (UniqueName: \"kubernetes.io/projected/953eeac5-b943-4036-be33-58eb347c04ef-kube-api-access-7mv74\") pod \"ovn-controller-ovs-z6nkm\" (UID: \"953eeac5-b943-4036-be33-58eb347c04ef\") " pod="openstack/ovn-controller-ovs-z6nkm" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.127123 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d4aa0600-fb12-4641-96a3-26cb56853bd3-scripts\") pod \"ovn-controller-sqvrc\" (UID: \"d4aa0600-fb12-4641-96a3-26cb56853bd3\") " pod="openstack/ovn-controller-sqvrc" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.127190 5039 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/953eeac5-b943-4036-be33-58eb347c04ef-var-run\") pod \"ovn-controller-ovs-z6nkm\" (UID: \"953eeac5-b943-4036-be33-58eb347c04ef\") " pod="openstack/ovn-controller-ovs-z6nkm" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.127212 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/953eeac5-b943-4036-be33-58eb347c04ef-var-lib\") pod \"ovn-controller-ovs-z6nkm\" (UID: \"953eeac5-b943-4036-be33-58eb347c04ef\") " pod="openstack/ovn-controller-ovs-z6nkm" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.127237 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/953eeac5-b943-4036-be33-58eb347c04ef-etc-ovs\") pod \"ovn-controller-ovs-z6nkm\" (UID: \"953eeac5-b943-4036-be33-58eb347c04ef\") " pod="openstack/ovn-controller-ovs-z6nkm" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.127259 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d4aa0600-fb12-4641-96a3-26cb56853bd3-var-run\") pod \"ovn-controller-sqvrc\" (UID: \"d4aa0600-fb12-4641-96a3-26cb56853bd3\") " pod="openstack/ovn-controller-sqvrc" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.127274 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d4aa0600-fb12-4641-96a3-26cb56853bd3-var-run-ovn\") pod \"ovn-controller-sqvrc\" (UID: \"d4aa0600-fb12-4641-96a3-26cb56853bd3\") " pod="openstack/ovn-controller-sqvrc" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.127292 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/953eeac5-b943-4036-be33-58eb347c04ef-var-log\") pod \"ovn-controller-ovs-z6nkm\" (UID: \"953eeac5-b943-4036-be33-58eb347c04ef\") " pod="openstack/ovn-controller-ovs-z6nkm" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.127327 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/953eeac5-b943-4036-be33-58eb347c04ef-scripts\") pod \"ovn-controller-ovs-z6nkm\" (UID: \"953eeac5-b943-4036-be33-58eb347c04ef\") " pod="openstack/ovn-controller-ovs-z6nkm" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.127344 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rv9n\" (UniqueName: \"kubernetes.io/projected/d4aa0600-fb12-4641-96a3-26cb56853bd3-kube-api-access-9rv9n\") pod \"ovn-controller-sqvrc\" (UID: \"d4aa0600-fb12-4641-96a3-26cb56853bd3\") " pod="openstack/ovn-controller-sqvrc" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.127388 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4aa0600-fb12-4641-96a3-26cb56853bd3-ovn-controller-tls-certs\") pod \"ovn-controller-sqvrc\" (UID: \"d4aa0600-fb12-4641-96a3-26cb56853bd3\") " pod="openstack/ovn-controller-sqvrc" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.127415 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4aa0600-fb12-4641-96a3-26cb56853bd3-combined-ca-bundle\") pod \"ovn-controller-sqvrc\" (UID: 
\"d4aa0600-fb12-4641-96a3-26cb56853bd3\") " pod="openstack/ovn-controller-sqvrc" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.127458 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d4aa0600-fb12-4641-96a3-26cb56853bd3-var-log-ovn\") pod \"ovn-controller-sqvrc\" (UID: \"d4aa0600-fb12-4641-96a3-26cb56853bd3\") " pod="openstack/ovn-controller-sqvrc" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.127485 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mv74\" (UniqueName: \"kubernetes.io/projected/953eeac5-b943-4036-be33-58eb347c04ef-kube-api-access-7mv74\") pod \"ovn-controller-ovs-z6nkm\" (UID: \"953eeac5-b943-4036-be33-58eb347c04ef\") " pod="openstack/ovn-controller-ovs-z6nkm" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.127735 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/953eeac5-b943-4036-be33-58eb347c04ef-var-run\") pod \"ovn-controller-ovs-z6nkm\" (UID: \"953eeac5-b943-4036-be33-58eb347c04ef\") " pod="openstack/ovn-controller-ovs-z6nkm" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.128055 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/953eeac5-b943-4036-be33-58eb347c04ef-var-log\") pod \"ovn-controller-ovs-z6nkm\" (UID: \"953eeac5-b943-4036-be33-58eb347c04ef\") " pod="openstack/ovn-controller-ovs-z6nkm" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.128168 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/953eeac5-b943-4036-be33-58eb347c04ef-var-lib\") pod \"ovn-controller-ovs-z6nkm\" (UID: \"953eeac5-b943-4036-be33-58eb347c04ef\") " pod="openstack/ovn-controller-ovs-z6nkm" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.128172 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d4aa0600-fb12-4641-96a3-26cb56853bd3-var-run\") pod \"ovn-controller-sqvrc\" (UID: \"d4aa0600-fb12-4641-96a3-26cb56853bd3\") " pod="openstack/ovn-controller-sqvrc" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.128321 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/953eeac5-b943-4036-be33-58eb347c04ef-etc-ovs\") pod \"ovn-controller-ovs-z6nkm\" (UID: \"953eeac5-b943-4036-be33-58eb347c04ef\") " pod="openstack/ovn-controller-ovs-z6nkm" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.128370 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d4aa0600-fb12-4641-96a3-26cb56853bd3-var-log-ovn\") pod \"ovn-controller-sqvrc\" (UID: \"d4aa0600-fb12-4641-96a3-26cb56853bd3\") " pod="openstack/ovn-controller-sqvrc" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.128370 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d4aa0600-fb12-4641-96a3-26cb56853bd3-var-run-ovn\") pod \"ovn-controller-sqvrc\" (UID: \"d4aa0600-fb12-4641-96a3-26cb56853bd3\") " pod="openstack/ovn-controller-sqvrc" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.131929 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d4aa0600-fb12-4641-96a3-26cb56853bd3-combined-ca-bundle\") pod \"ovn-controller-sqvrc\" (UID: \"d4aa0600-fb12-4641-96a3-26cb56853bd3\") " pod="openstack/ovn-controller-sqvrc" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.136658 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.138410 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.140558 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/953eeac5-b943-4036-be33-58eb347c04ef-scripts\") pod \"ovn-controller-ovs-z6nkm\" (UID: \"953eeac5-b943-4036-be33-58eb347c04ef\") " pod="openstack/ovn-controller-ovs-z6nkm" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.140736 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d4aa0600-fb12-4641-96a3-26cb56853bd3-scripts\") pod \"ovn-controller-sqvrc\" (UID: \"d4aa0600-fb12-4641-96a3-26cb56853bd3\") " pod="openstack/ovn-controller-sqvrc" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.150851 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rv9n\" (UniqueName: \"kubernetes.io/projected/d4aa0600-fb12-4641-96a3-26cb56853bd3-kube-api-access-9rv9n\") pod \"ovn-controller-sqvrc\" (UID: \"d4aa0600-fb12-4641-96a3-26cb56853bd3\") " pod="openstack/ovn-controller-sqvrc" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.151337 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4aa0600-fb12-4641-96a3-26cb56853bd3-ovn-controller-tls-certs\") pod \"ovn-controller-sqvrc\" (UID: \"d4aa0600-fb12-4641-96a3-26cb56853bd3\") " pod="openstack/ovn-controller-sqvrc" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.152162 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mv74\" (UniqueName: \"kubernetes.io/projected/953eeac5-b943-4036-be33-58eb347c04ef-kube-api-access-7mv74\") pod \"ovn-controller-ovs-z6nkm\" (UID: \"953eeac5-b943-4036-be33-58eb347c04ef\") " pod="openstack/ovn-controller-ovs-z6nkm" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.196117 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-pqc2p" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.203044 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-z6nkm" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.204660 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sqvrc" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.767624 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.768845 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.773497 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.773547 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-6jml2" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.773847 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.773977 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.783861 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.941466 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.941557 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.941587 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.941664 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.941820 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6g78\" (UniqueName: \"kubernetes.io/projected/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-kube-api-access-v6g78\") pod \"ovsdbserver-sb-0\" (UID: \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.941933 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-config\") pod \"ovsdbserver-sb-0\" (UID: \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.941994 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-scripts\") pod \"ovsdbserver-sb-0\" (UID: 
\"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.942040 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 13:22:57 crc kubenswrapper[5039]: I0130 13:22:57.043466 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 13:22:57 crc kubenswrapper[5039]: I0130 13:22:57.043587 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 13:22:57 crc kubenswrapper[5039]: I0130 13:22:57.043663 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 13:22:57 crc kubenswrapper[5039]: I0130 13:22:57.044372 5039 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/ovsdbserver-sb-0" Jan 30 13:22:57 crc kubenswrapper[5039]: I0130 13:22:57.044875 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6g78\" (UniqueName: \"kubernetes.io/projected/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-kube-api-access-v6g78\") pod \"ovsdbserver-sb-0\" (UID: \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 13:22:57 crc kubenswrapper[5039]: I0130 13:22:57.045021 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-config\") pod \"ovsdbserver-sb-0\" (UID: \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 13:22:57 crc kubenswrapper[5039]: I0130 13:22:57.045045 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 13:22:57 crc kubenswrapper[5039]: I0130 13:22:57.045083 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 13:22:57 crc kubenswrapper[5039]: I0130 13:22:57.045131 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 13:22:57 crc kubenswrapper[5039]: I0130 13:22:57.045873 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-config\") pod \"ovsdbserver-sb-0\" (UID: \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 13:22:57 crc kubenswrapper[5039]: I0130 13:22:57.046621 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 13:22:57 crc kubenswrapper[5039]: I0130 13:22:57.046732 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 13:22:57 crc kubenswrapper[5039]: I0130 13:22:57.048632 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 13:22:57 crc kubenswrapper[5039]: I0130 13:22:57.057908 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 13:22:57 crc kubenswrapper[5039]: I0130 13:22:57.059294 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 13:22:57 crc kubenswrapper[5039]: I0130 13:22:57.061681 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6g78\" (UniqueName: \"kubernetes.io/projected/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-kube-api-access-v6g78\") pod \"ovsdbserver-sb-0\" (UID: \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 13:22:57 crc kubenswrapper[5039]: I0130 13:22:57.070992 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 13:22:57 crc kubenswrapper[5039]: I0130 13:22:57.096673 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 30 13:23:01 crc kubenswrapper[5039]: E0130 13:23:01.212471 5039 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 30 13:23:01 crc kubenswrapper[5039]: E0130 13:23:01.213292 5039 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7kr4r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-jtkm9_openstack(e84731f4-eb22-429a-9712-7d5f9504ae03): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 13:23:01 crc kubenswrapper[5039]: E0130 13:23:01.215425 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-jtkm9" podUID="e84731f4-eb22-429a-9712-7d5f9504ae03" Jan 30 13:23:01 crc kubenswrapper[5039]: E0130 13:23:01.228592 5039 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 30 13:23:01 crc kubenswrapper[5039]: E0130 13:23:01.228728 5039 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d 
--hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6lsf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-9w7m2_openstack(6eec043b-32d8-4528-9369-405ae0b99e7e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 13:23:01 crc kubenswrapper[5039]: E0130 13:23:01.229982 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-9w7m2" podUID="6eec043b-32d8-4528-9369-405ae0b99e7e" Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.191285 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-9w7m2" Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.192213 5039 util.go:48] "No ready sandbox for pod can be found. 
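
[Annotation] Each failed dnsmasq init container above produces the same three-entry pattern: log.go records the CRI PullImage failure (a gRPC Canceled while copying the image config), kuberuntime_manager.go dumps the entire init-container spec as an "Unhandled Error", and pod_workers.go skips the sync with ErrImagePull. A sketch pulling (image, error) pairs out of the first of those; the regexp is fitted to the message shape above:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches the CRI pull-failure entries, e.g.
//   "PullImage from image service failed" err="rpc error: ..." image="quay.io/...".
var pullFail = regexp.MustCompile(
	`"PullImage from image service failed" err="([^"]+)" image="([^"]+)"`)

func main() {
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024)
	for sc.Scan() {
		if m := pullFail.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[2]]++
			fmt.Printf("pull failed: image=%s err=%q\n", m[2], m[1])
		}
	}
	for img, n := range counts {
		fmt.Printf("%d failure(s) for %s\n", n, img)
	}
}

Over this excerpt it reports two Canceled pulls of the same openstack-neutron-server:current-podified image, one per dnsmasq pod.
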
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-jtkm9" Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.345385 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kr4r\" (UniqueName: \"kubernetes.io/projected/e84731f4-eb22-429a-9712-7d5f9504ae03-kube-api-access-7kr4r\") pod \"e84731f4-eb22-429a-9712-7d5f9504ae03\" (UID: \"e84731f4-eb22-429a-9712-7d5f9504ae03\") " Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.345860 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e84731f4-eb22-429a-9712-7d5f9504ae03-config\") pod \"e84731f4-eb22-429a-9712-7d5f9504ae03\" (UID: \"e84731f4-eb22-429a-9712-7d5f9504ae03\") " Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.346156 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6eec043b-32d8-4528-9369-405ae0b99e7e-dns-svc\") pod \"6eec043b-32d8-4528-9369-405ae0b99e7e\" (UID: \"6eec043b-32d8-4528-9369-405ae0b99e7e\") " Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.346368 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lsf2\" (UniqueName: \"kubernetes.io/projected/6eec043b-32d8-4528-9369-405ae0b99e7e-kube-api-access-6lsf2\") pod \"6eec043b-32d8-4528-9369-405ae0b99e7e\" (UID: \"6eec043b-32d8-4528-9369-405ae0b99e7e\") " Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.346571 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6eec043b-32d8-4528-9369-405ae0b99e7e-config\") pod \"6eec043b-32d8-4528-9369-405ae0b99e7e\" (UID: \"6eec043b-32d8-4528-9369-405ae0b99e7e\") " Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.346659 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6eec043b-32d8-4528-9369-405ae0b99e7e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6eec043b-32d8-4528-9369-405ae0b99e7e" (UID: "6eec043b-32d8-4528-9369-405ae0b99e7e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.346870 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e84731f4-eb22-429a-9712-7d5f9504ae03-config" (OuterVolumeSpecName: "config") pod "e84731f4-eb22-429a-9712-7d5f9504ae03" (UID: "e84731f4-eb22-429a-9712-7d5f9504ae03"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.347228 5039 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6eec043b-32d8-4528-9369-405ae0b99e7e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.347334 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e84731f4-eb22-429a-9712-7d5f9504ae03-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.347444 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6eec043b-32d8-4528-9369-405ae0b99e7e-config" (OuterVolumeSpecName: "config") pod "6eec043b-32d8-4528-9369-405ae0b99e7e" (UID: "6eec043b-32d8-4528-9369-405ae0b99e7e"). InnerVolumeSpecName "config". 
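
[Annotation] Pod deletion above runs the mount pipeline in reverse: operationExecutor.UnmountVolume started, then UnmountVolume.TearDown succeeded (reporting the Outer/InnerVolumeSpecName pair and plugin), then a "Volume detached" confirmation per volume, and, a moment later below, kubelet_volumes cleans up the orphaned /var/lib/kubelet/pods/<uid>/volumes directory. A sketch counting detach confirmations per pod UID, recovered from the UniqueName's UID prefix; the regexp is fitted to the escaped quoting above:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches e.g.:
//   "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/<uid>-config\") ..."
var detached = regexp.MustCompile(
	`"Volume detached for volume \\"[^\\"]+\\" \(UniqueName: \\"kubernetes\.io/[^/]+/([0-9a-f-]{36})-`)

func main() {
	perPod := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024)
	for sc.Scan() {
		if m := detached.FindStringSubmatch(sc.Text()); m != nil {
			perPod[m[1]]++
		}
	}
	for uid, n := range perPod {
		fmt.Printf("pod %s: %d volume(s) detached\n", uid, n)
	}
}

Here that yields three detaches for 6eec043b-... and two for e84731f4-..., matching the two dnsmasq pods whose volume dirs are cleaned up below.
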
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.355026 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e84731f4-eb22-429a-9712-7d5f9504ae03-kube-api-access-7kr4r" (OuterVolumeSpecName: "kube-api-access-7kr4r") pod "e84731f4-eb22-429a-9712-7d5f9504ae03" (UID: "e84731f4-eb22-429a-9712-7d5f9504ae03"). InnerVolumeSpecName "kube-api-access-7kr4r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.355135 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6eec043b-32d8-4528-9369-405ae0b99e7e-kube-api-access-6lsf2" (OuterVolumeSpecName: "kube-api-access-6lsf2") pod "6eec043b-32d8-4528-9369-405ae0b99e7e" (UID: "6eec043b-32d8-4528-9369-405ae0b99e7e"). InnerVolumeSpecName "kube-api-access-6lsf2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.449654 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kr4r\" (UniqueName: \"kubernetes.io/projected/e84731f4-eb22-429a-9712-7d5f9504ae03-kube-api-access-7kr4r\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.449718 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lsf2\" (UniqueName: \"kubernetes.io/projected/6eec043b-32d8-4528-9369-405ae0b99e7e-kube-api-access-6lsf2\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.449738 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6eec043b-32d8-4528-9369-405ae0b99e7e-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.552920 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.563164 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.731817 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 13:23:03 crc kubenswrapper[5039]: W0130 13:23:03.736918 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4aa0600_fb12_4641_96a3_26cb56853bd3.slice/crio-c5c76b6a49f6c1df9cb002ed1e8b5632bf219b55a02f8d8bad87e1f74f732d0b WatchSource:0}: Error finding container c5c76b6a49f6c1df9cb002ed1e8b5632bf219b55a02f8d8bad87e1f74f732d0b: Status 404 returned error can't find the container with id c5c76b6a49f6c1df9cb002ed1e8b5632bf219b55a02f8d8bad87e1f74f732d0b Jan 30 13:23:03 crc kubenswrapper[5039]: W0130 13:23:03.738448 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c2f32a2_792f_4f23_b2a5_fd50a1e1373a.slice/crio-ba6c4308185078975ea11bdd500cf4b3463640f96cac0f842af726c87eb42110 WatchSource:0}: Error finding container ba6c4308185078975ea11bdd500cf4b3463640f96cac0f842af726c87eb42110: Status 404 returned error can't find the container with id ba6c4308185078975ea11bdd500cf4b3463640f96cac0f842af726c87eb42110 Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.740142 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sqvrc"] Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 
13:23:03.825003 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-jtkm9" event={"ID":"e84731f4-eb22-429a-9712-7d5f9504ae03","Type":"ContainerDied","Data":"d7067efeea966393ec1314af34e694b1769c50addfd2df6d0712711463413ceb"} Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.825120 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-jtkm9" Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.835419 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sqvrc" event={"ID":"d4aa0600-fb12-4641-96a3-26cb56853bd3","Type":"ContainerStarted","Data":"c5c76b6a49f6c1df9cb002ed1e8b5632bf219b55a02f8d8bad87e1f74f732d0b"} Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.839298 5039 generic.go:334] "Generic (PLEG): container finished" podID="f5cc8ebd-9337-4caa-89f3-546dd8bc31de" containerID="fac40e0761cdfed69f49abb9781a2dd41c188268e532ae6a5d299055f044b0c8" exitCode=0 Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.839365 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-mw7gw" event={"ID":"f5cc8ebd-9337-4caa-89f3-546dd8bc31de","Type":"ContainerDied","Data":"fac40e0761cdfed69f49abb9781a2dd41c188268e532ae6a5d299055f044b0c8"} Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.840798 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.842727 5039 generic.go:334] "Generic (PLEG): container finished" podID="a7a82611-9333-424b-9772-93de691cc191" containerID="ae88ba32f0bc52542ef4ea2688355aa20aaa31e39b88e23dcb00363419e1a621" exitCode=0 Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.842816 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-rg6mc" event={"ID":"a7a82611-9333-424b-9772-93de691cc191","Type":"ContainerDied","Data":"ae88ba32f0bc52542ef4ea2688355aa20aaa31e39b88e23dcb00363419e1a621"} Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.845779 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a","Type":"ContainerStarted","Data":"ba6c4308185078975ea11bdd500cf4b3463640f96cac0f842af726c87eb42110"} Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.847258 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"644a9c77-bad0-41fe-a6ee-8bb5e6580f87","Type":"ContainerStarted","Data":"b53ad32cffda3e64e7114afbc8bd65ade81ee83922eb3d85365175d255be376d"} Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.849616 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"c304bfee-961f-403c-a998-de879eedf9c9","Type":"ContainerStarted","Data":"cfd62b194c55a1c0929aedfd3e56c356bb03ea700fba1fdfbe1bc6d8d0871746"} Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.855393 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-9w7m2" event={"ID":"6eec043b-32d8-4528-9369-405ae0b99e7e","Type":"ContainerDied","Data":"bdc4d9f675659d6e5ba5a7b6ba6f8b09eff70555eae67e0199cf9dc6a998520a"} Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.856383 5039 util.go:48] "No ready sandbox for pod can be found. 
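
[Annotation] The "SyncLoop (PLEG)" entries above are the pod lifecycle event generator relaying container state changes back into the sync loop: ContainerStarted/ContainerDied with the pod UID and the container (or sandbox) ID in Data, while generic.go adds the exit code for finished containers. A sketch tallying those events per pod; the event payload is matched textually rather than parsed as JSON:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Loosely matches:
//   pod="ns/name" event={"ID":"...","Type":"ContainerStarted","Data":"..."}
var plegEvent = regexp.MustCompile(
	`"SyncLoop \(PLEG\): event for pod" pod="([^"]+)" event=\{"ID":"[^"]+","Type":"([^"]+)","Data":"([^"]+)"\}`)

func main() {
	tally := map[string]map[string]int{} // pod -> event type -> count
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024)
	for sc.Scan() {
		if m := plegEvent.FindStringSubmatch(sc.Text()); m != nil {
			pod, typ := m[1], m[2]
			if tally[pod] == nil {
				tally[pod] = map[string]int{}
			}
			tally[pod][typ]++
		}
	}
	for pod, types := range tally {
		fmt.Println(pod, types)
	}
}

Over this excerpt it would show ContainerDied for the two superseded dnsmasq pods and ContainerStarted (sandbox) events for the OVN, galera, memcached and rabbitmq pods.
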
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-9w7m2" Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.869501 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ffe59186-82c9-4825-98af-a345318afc40","Type":"ContainerStarted","Data":"8ef3687b147f30c71389ac61b162a10e83fe0f87d670cd01053d0b6370d904ef"} Jan 30 13:23:03 crc kubenswrapper[5039]: W0130 13:23:03.900662 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda4f02ddf_62c8_49b8_8e86_d6b87c61172b.slice/crio-fc7f5a8ae1e785456d0c0b6001e689d47f38500483f75060d38ae3fd5f0d8225 WatchSource:0}: Error finding container fc7f5a8ae1e785456d0c0b6001e689d47f38500483f75060d38ae3fd5f0d8225: Status 404 returned error can't find the container with id fc7f5a8ae1e785456d0c0b6001e689d47f38500483f75060d38ae3fd5f0d8225 Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.904622 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-jtkm9"] Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.916058 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-jtkm9"] Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.985894 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-9w7m2"] Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.997130 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-9w7m2"] Jan 30 13:23:04 crc kubenswrapper[5039]: I0130 13:23:04.002267 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-z6nkm"] Jan 30 13:23:04 crc kubenswrapper[5039]: W0130 13:23:04.099121 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod953eeac5_b943_4036_be33_58eb347c04ef.slice/crio-ed046467dbbc31222f552da2ca60c59d229048d7b72c5559ee956b018c375fa0 WatchSource:0}: Error finding container ed046467dbbc31222f552da2ca60c59d229048d7b72c5559ee956b018c375fa0: Status 404 returned error can't find the container with id ed046467dbbc31222f552da2ca60c59d229048d7b72c5559ee956b018c375fa0 Jan 30 13:23:04 crc kubenswrapper[5039]: I0130 13:23:04.111130 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6eec043b-32d8-4528-9369-405ae0b99e7e" path="/var/lib/kubelet/pods/6eec043b-32d8-4528-9369-405ae0b99e7e/volumes" Jan 30 13:23:04 crc kubenswrapper[5039]: I0130 13:23:04.111483 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e84731f4-eb22-429a-9712-7d5f9504ae03" path="/var/lib/kubelet/pods/e84731f4-eb22-429a-9712-7d5f9504ae03/volumes" Jan 30 13:23:04 crc kubenswrapper[5039]: I0130 13:23:04.825288 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 13:23:04 crc kubenswrapper[5039]: I0130 13:23:04.878824 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-rg6mc" event={"ID":"a7a82611-9333-424b-9772-93de691cc191","Type":"ContainerStarted","Data":"86d7c840690142e77a29acc0f99af63d45a42e6eac6384baf5249f9b9bcda1f6"} Jan 30 13:23:04 crc kubenswrapper[5039]: I0130 13:23:04.878930 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-rg6mc" Jan 30 13:23:04 crc kubenswrapper[5039]: I0130 13:23:04.891484 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
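
[Annotation] The W-level manager.go:1169 entries above appear to be the usual benign race between cAdvisor's cgroup watch and CRI-O registering the new container: the crio-<id> scope shows up first, the lookup returns 404, and the same container IDs surface in ContainerStarted events moments later. The cgroup path encodes both the pod UID (dashes flattened to underscores by systemd) and the container ID; a recovery sketch fitted to the besteffort paths shown above:

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// Fits paths like:
// /kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4aa0600_fb12_....slice/crio-c5c76b...
var cgroupPath = regexp.MustCompile(`-pod([0-9a-f_]+)\.slice/crio-([0-9a-f]+)`)

func main() {
	p := "/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4aa0600_fb12_4641_96a3_26cb56853bd3.slice/crio-c5c76b6a49f6c1df9cb002ed1e8b5632bf219b55a02f8d8bad87e1f74f732d0b"
	m := cgroupPath.FindStringSubmatch(p)
	if m == nil {
		fmt.Println("no match")
		return
	}
	podUID := strings.ReplaceAll(m[1], "_", "-") // undo systemd's '-' -> '_' flattening
	fmt.Println("pod UID:", podUID)              // d4aa0600-fb12-4641-96a3-26cb56853bd3
	fmt.Println("container:", m[2][:12])         // c5c76b6a49f6
}

The recovered UID here is ovn-controller-sqvrc's, whose ContainerStarted event with the same c5c76b6a... ID follows just below.
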
pod="openstack/openstack-cell1-galera-0" event={"ID":"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a","Type":"ContainerStarted","Data":"099271e408d36405bffd409c77b39945cf16bd33eb771b32e6c679068653606c"} Jan 30 13:23:04 crc kubenswrapper[5039]: I0130 13:23:04.894941 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-z6nkm" event={"ID":"953eeac5-b943-4036-be33-58eb347c04ef","Type":"ContainerStarted","Data":"ed046467dbbc31222f552da2ca60c59d229048d7b72c5559ee956b018c375fa0"} Jan 30 13:23:04 crc kubenswrapper[5039]: I0130 13:23:04.900538 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a4f02ddf-62c8-49b8-8e86-d6b87c61172b","Type":"ContainerStarted","Data":"fc7f5a8ae1e785456d0c0b6001e689d47f38500483f75060d38ae3fd5f0d8225"} Jan 30 13:23:04 crc kubenswrapper[5039]: I0130 13:23:04.901953 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"31674257-f143-40ab-97b9-dbf3153277c3","Type":"ContainerStarted","Data":"06f152352a68b2f2dd66ebb738ddc6ff20d454b66024c4bcad8df7bb81ecc8e6"} Jan 30 13:23:04 crc kubenswrapper[5039]: I0130 13:23:04.902007 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-rg6mc" podStartSLOduration=3.334495189 podStartE2EDuration="21.901991598s" podCreationTimestamp="2026-01-30 13:22:43 +0000 UTC" firstStartedPulling="2026-01-30 13:22:44.54927342 +0000 UTC m=+1129.209954647" lastFinishedPulling="2026-01-30 13:23:03.116769829 +0000 UTC m=+1147.777451056" observedRunningTime="2026-01-30 13:23:04.899867302 +0000 UTC m=+1149.560548539" watchObservedRunningTime="2026-01-30 13:23:04.901991598 +0000 UTC m=+1149.562672825" Jan 30 13:23:04 crc kubenswrapper[5039]: I0130 13:23:04.903900 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"106954f5-3ea7-4564-8479-407ef02320b7","Type":"ContainerStarted","Data":"d30261a228b7365f47808b71367e6d8ea8e412a39a4b2b4142bda6fbef770058"} Jan 30 13:23:04 crc kubenswrapper[5039]: I0130 13:23:04.911464 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-mw7gw" event={"ID":"f5cc8ebd-9337-4caa-89f3-546dd8bc31de","Type":"ContainerStarted","Data":"71326cbac30cd2aa62cfa69940baa05ff75d674772abf6272dee3ddb55613c9b"} Jan 30 13:23:04 crc kubenswrapper[5039]: I0130 13:23:04.912163 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-mw7gw" Jan 30 13:23:04 crc kubenswrapper[5039]: I0130 13:23:04.938049 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-mw7gw" podStartSLOduration=2.79158952 podStartE2EDuration="20.938028047s" podCreationTimestamp="2026-01-30 13:22:44 +0000 UTC" firstStartedPulling="2026-01-30 13:22:44.964710774 +0000 UTC m=+1129.625392011" lastFinishedPulling="2026-01-30 13:23:03.111149311 +0000 UTC m=+1147.771830538" observedRunningTime="2026-01-30 13:23:04.935128721 +0000 UTC m=+1149.595809948" watchObservedRunningTime="2026-01-30 13:23:04.938028047 +0000 UTC m=+1149.598709294" Jan 30 13:23:05 crc kubenswrapper[5039]: I0130 13:23:05.921618 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"bc1a05aa-7803-43a1-9525-fd135af4323a","Type":"ContainerStarted","Data":"414bac68c45351f838e0a511be6c7599d1e6e148cb6534c66df26f8dabdc82e1"} Jan 30 13:23:06 crc kubenswrapper[5039]: I0130 13:23:06.605694 5039 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-t7hh5"] Jan 30 13:23:06 crc kubenswrapper[5039]: I0130 13:23:06.607097 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-t7hh5" Jan 30 13:23:06 crc kubenswrapper[5039]: I0130 13:23:06.609713 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 30 13:23:06 crc kubenswrapper[5039]: I0130 13:23:06.631774 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-t7hh5"] Jan 30 13:23:06 crc kubenswrapper[5039]: I0130 13:23:06.731374 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f66d95ec-ff37-4cc2-a076-e53cc7713582-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-t7hh5\" (UID: \"f66d95ec-ff37-4cc2-a076-e53cc7713582\") " pod="openstack/ovn-controller-metrics-t7hh5" Jan 30 13:23:06 crc kubenswrapper[5039]: I0130 13:23:06.731648 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f66d95ec-ff37-4cc2-a076-e53cc7713582-ovn-rundir\") pod \"ovn-controller-metrics-t7hh5\" (UID: \"f66d95ec-ff37-4cc2-a076-e53cc7713582\") " pod="openstack/ovn-controller-metrics-t7hh5" Jan 30 13:23:06 crc kubenswrapper[5039]: I0130 13:23:06.731838 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f66d95ec-ff37-4cc2-a076-e53cc7713582-combined-ca-bundle\") pod \"ovn-controller-metrics-t7hh5\" (UID: \"f66d95ec-ff37-4cc2-a076-e53cc7713582\") " pod="openstack/ovn-controller-metrics-t7hh5" Jan 30 13:23:06 crc kubenswrapper[5039]: I0130 13:23:06.731870 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f66d95ec-ff37-4cc2-a076-e53cc7713582-config\") pod \"ovn-controller-metrics-t7hh5\" (UID: \"f66d95ec-ff37-4cc2-a076-e53cc7713582\") " pod="openstack/ovn-controller-metrics-t7hh5" Jan 30 13:23:06 crc kubenswrapper[5039]: I0130 13:23:06.731888 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f66d95ec-ff37-4cc2-a076-e53cc7713582-ovs-rundir\") pod \"ovn-controller-metrics-t7hh5\" (UID: \"f66d95ec-ff37-4cc2-a076-e53cc7713582\") " pod="openstack/ovn-controller-metrics-t7hh5" Jan 30 13:23:06 crc kubenswrapper[5039]: I0130 13:23:06.731926 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cj2b\" (UniqueName: \"kubernetes.io/projected/f66d95ec-ff37-4cc2-a076-e53cc7713582-kube-api-access-5cj2b\") pod \"ovn-controller-metrics-t7hh5\" (UID: \"f66d95ec-ff37-4cc2-a076-e53cc7713582\") " pod="openstack/ovn-controller-metrics-t7hh5" Jan 30 13:23:06 crc kubenswrapper[5039]: I0130 13:23:06.761042 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-rg6mc"] Jan 30 13:23:06 crc kubenswrapper[5039]: I0130 13:23:06.838363 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f66d95ec-ff37-4cc2-a076-e53cc7713582-combined-ca-bundle\") pod \"ovn-controller-metrics-t7hh5\" (UID: 
\"f66d95ec-ff37-4cc2-a076-e53cc7713582\") " pod="openstack/ovn-controller-metrics-t7hh5" Jan 30 13:23:06 crc kubenswrapper[5039]: I0130 13:23:06.838444 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f66d95ec-ff37-4cc2-a076-e53cc7713582-config\") pod \"ovn-controller-metrics-t7hh5\" (UID: \"f66d95ec-ff37-4cc2-a076-e53cc7713582\") " pod="openstack/ovn-controller-metrics-t7hh5" Jan 30 13:23:06 crc kubenswrapper[5039]: I0130 13:23:06.838475 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f66d95ec-ff37-4cc2-a076-e53cc7713582-ovs-rundir\") pod \"ovn-controller-metrics-t7hh5\" (UID: \"f66d95ec-ff37-4cc2-a076-e53cc7713582\") " pod="openstack/ovn-controller-metrics-t7hh5" Jan 30 13:23:06 crc kubenswrapper[5039]: I0130 13:23:06.838549 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cj2b\" (UniqueName: \"kubernetes.io/projected/f66d95ec-ff37-4cc2-a076-e53cc7713582-kube-api-access-5cj2b\") pod \"ovn-controller-metrics-t7hh5\" (UID: \"f66d95ec-ff37-4cc2-a076-e53cc7713582\") " pod="openstack/ovn-controller-metrics-t7hh5" Jan 30 13:23:06 crc kubenswrapper[5039]: I0130 13:23:06.838622 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f66d95ec-ff37-4cc2-a076-e53cc7713582-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-t7hh5\" (UID: \"f66d95ec-ff37-4cc2-a076-e53cc7713582\") " pod="openstack/ovn-controller-metrics-t7hh5" Jan 30 13:23:06 crc kubenswrapper[5039]: I0130 13:23:06.838681 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f66d95ec-ff37-4cc2-a076-e53cc7713582-ovn-rundir\") pod \"ovn-controller-metrics-t7hh5\" (UID: \"f66d95ec-ff37-4cc2-a076-e53cc7713582\") " pod="openstack/ovn-controller-metrics-t7hh5" Jan 30 13:23:06 crc kubenswrapper[5039]: I0130 13:23:06.839096 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f66d95ec-ff37-4cc2-a076-e53cc7713582-ovn-rundir\") pod \"ovn-controller-metrics-t7hh5\" (UID: \"f66d95ec-ff37-4cc2-a076-e53cc7713582\") " pod="openstack/ovn-controller-metrics-t7hh5" Jan 30 13:23:06 crc kubenswrapper[5039]: I0130 13:23:06.839631 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f66d95ec-ff37-4cc2-a076-e53cc7713582-ovs-rundir\") pod \"ovn-controller-metrics-t7hh5\" (UID: \"f66d95ec-ff37-4cc2-a076-e53cc7713582\") " pod="openstack/ovn-controller-metrics-t7hh5" Jan 30 13:23:06 crc kubenswrapper[5039]: I0130 13:23:06.840086 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f66d95ec-ff37-4cc2-a076-e53cc7713582-config\") pod \"ovn-controller-metrics-t7hh5\" (UID: \"f66d95ec-ff37-4cc2-a076-e53cc7713582\") " pod="openstack/ovn-controller-metrics-t7hh5" Jan 30 13:23:06 crc kubenswrapper[5039]: I0130 13:23:06.842079 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-nglkl"] Jan 30 13:23:06 crc kubenswrapper[5039]: I0130 13:23:06.845876 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f66d95ec-ff37-4cc2-a076-e53cc7713582-combined-ca-bundle\") pod \"ovn-controller-metrics-t7hh5\" (UID: \"f66d95ec-ff37-4cc2-a076-e53cc7713582\") " pod="openstack/ovn-controller-metrics-t7hh5" Jan 30 13:23:06 crc kubenswrapper[5039]: I0130 13:23:06.855468 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f66d95ec-ff37-4cc2-a076-e53cc7713582-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-t7hh5\" (UID: \"f66d95ec-ff37-4cc2-a076-e53cc7713582\") " pod="openstack/ovn-controller-metrics-t7hh5" Jan 30 13:23:06 crc kubenswrapper[5039]: I0130 13:23:06.858136 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-nglkl" Jan 30 13:23:06 crc kubenswrapper[5039]: I0130 13:23:06.864945 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 30 13:23:06 crc kubenswrapper[5039]: I0130 13:23:06.865788 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-nglkl"] Jan 30 13:23:06 crc kubenswrapper[5039]: I0130 13:23:06.885124 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cj2b\" (UniqueName: \"kubernetes.io/projected/f66d95ec-ff37-4cc2-a076-e53cc7713582-kube-api-access-5cj2b\") pod \"ovn-controller-metrics-t7hh5\" (UID: \"f66d95ec-ff37-4cc2-a076-e53cc7713582\") " pod="openstack/ovn-controller-metrics-t7hh5" Jan 30 13:23:06 crc kubenswrapper[5039]: I0130 13:23:06.935230 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-rg6mc" podUID="a7a82611-9333-424b-9772-93de691cc191" containerName="dnsmasq-dns" containerID="cri-o://86d7c840690142e77a29acc0f99af63d45a42e6eac6384baf5249f9b9bcda1f6" gracePeriod=10 Jan 30 13:23:06 crc kubenswrapper[5039]: I0130 13:23:06.941260 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tps8\" (UniqueName: \"kubernetes.io/projected/a83141ea-dc8c-4ebc-bd18-0e30557f7b1b-kube-api-access-7tps8\") pod \"dnsmasq-dns-5bf47b49b7-nglkl\" (UID: \"a83141ea-dc8c-4ebc-bd18-0e30557f7b1b\") " pod="openstack/dnsmasq-dns-5bf47b49b7-nglkl" Jan 30 13:23:06 crc kubenswrapper[5039]: I0130 13:23:06.941383 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a83141ea-dc8c-4ebc-bd18-0e30557f7b1b-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-nglkl\" (UID: \"a83141ea-dc8c-4ebc-bd18-0e30557f7b1b\") " pod="openstack/dnsmasq-dns-5bf47b49b7-nglkl" Jan 30 13:23:06 crc kubenswrapper[5039]: I0130 13:23:06.941436 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a83141ea-dc8c-4ebc-bd18-0e30557f7b1b-config\") pod \"dnsmasq-dns-5bf47b49b7-nglkl\" (UID: \"a83141ea-dc8c-4ebc-bd18-0e30557f7b1b\") " pod="openstack/dnsmasq-dns-5bf47b49b7-nglkl" Jan 30 13:23:06 crc kubenswrapper[5039]: I0130 13:23:06.941495 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a83141ea-dc8c-4ebc-bd18-0e30557f7b1b-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-nglkl\" (UID: \"a83141ea-dc8c-4ebc-bd18-0e30557f7b1b\") " pod="openstack/dnsmasq-dns-5bf47b49b7-nglkl" Jan 30 13:23:06 crc kubenswrapper[5039]: I0130 13:23:06.945372 
5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-t7hh5" Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.008965 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-mw7gw"] Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.010138 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-mw7gw" podUID="f5cc8ebd-9337-4caa-89f3-546dd8bc31de" containerName="dnsmasq-dns" containerID="cri-o://71326cbac30cd2aa62cfa69940baa05ff75d674772abf6272dee3ddb55613c9b" gracePeriod=10 Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.031678 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-7m45s"] Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.040243 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-7m45s" Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.042089 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-7m45s"] Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.042632 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.043003 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a83141ea-dc8c-4ebc-bd18-0e30557f7b1b-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-nglkl\" (UID: \"a83141ea-dc8c-4ebc-bd18-0e30557f7b1b\") " pod="openstack/dnsmasq-dns-5bf47b49b7-nglkl" Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.043080 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a83141ea-dc8c-4ebc-bd18-0e30557f7b1b-config\") pod \"dnsmasq-dns-5bf47b49b7-nglkl\" (UID: \"a83141ea-dc8c-4ebc-bd18-0e30557f7b1b\") " pod="openstack/dnsmasq-dns-5bf47b49b7-nglkl" Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.043131 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a83141ea-dc8c-4ebc-bd18-0e30557f7b1b-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-nglkl\" (UID: \"a83141ea-dc8c-4ebc-bd18-0e30557f7b1b\") " pod="openstack/dnsmasq-dns-5bf47b49b7-nglkl" Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.043182 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tps8\" (UniqueName: \"kubernetes.io/projected/a83141ea-dc8c-4ebc-bd18-0e30557f7b1b-kube-api-access-7tps8\") pod \"dnsmasq-dns-5bf47b49b7-nglkl\" (UID: \"a83141ea-dc8c-4ebc-bd18-0e30557f7b1b\") " pod="openstack/dnsmasq-dns-5bf47b49b7-nglkl" Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.045203 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a83141ea-dc8c-4ebc-bd18-0e30557f7b1b-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-nglkl\" (UID: \"a83141ea-dc8c-4ebc-bd18-0e30557f7b1b\") " pod="openstack/dnsmasq-dns-5bf47b49b7-nglkl" Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.045852 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a83141ea-dc8c-4ebc-bd18-0e30557f7b1b-config\") pod \"dnsmasq-dns-5bf47b49b7-nglkl\" (UID: \"a83141ea-dc8c-4ebc-bd18-0e30557f7b1b\") " 
pod="openstack/dnsmasq-dns-5bf47b49b7-nglkl" Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.046598 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a83141ea-dc8c-4ebc-bd18-0e30557f7b1b-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-nglkl\" (UID: \"a83141ea-dc8c-4ebc-bd18-0e30557f7b1b\") " pod="openstack/dnsmasq-dns-5bf47b49b7-nglkl" Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.062417 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tps8\" (UniqueName: \"kubernetes.io/projected/a83141ea-dc8c-4ebc-bd18-0e30557f7b1b-kube-api-access-7tps8\") pod \"dnsmasq-dns-5bf47b49b7-nglkl\" (UID: \"a83141ea-dc8c-4ebc-bd18-0e30557f7b1b\") " pod="openstack/dnsmasq-dns-5bf47b49b7-nglkl" Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.144811 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvtwb\" (UniqueName: \"kubernetes.io/projected/e976e524-ebac-499e-abdb-2a35d1cd1c86-kube-api-access-xvtwb\") pod \"dnsmasq-dns-8554648995-7m45s\" (UID: \"e976e524-ebac-499e-abdb-2a35d1cd1c86\") " pod="openstack/dnsmasq-dns-8554648995-7m45s" Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.144859 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e976e524-ebac-499e-abdb-2a35d1cd1c86-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-7m45s\" (UID: \"e976e524-ebac-499e-abdb-2a35d1cd1c86\") " pod="openstack/dnsmasq-dns-8554648995-7m45s" Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.144913 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e976e524-ebac-499e-abdb-2a35d1cd1c86-dns-svc\") pod \"dnsmasq-dns-8554648995-7m45s\" (UID: \"e976e524-ebac-499e-abdb-2a35d1cd1c86\") " pod="openstack/dnsmasq-dns-8554648995-7m45s" Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.144993 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e976e524-ebac-499e-abdb-2a35d1cd1c86-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-7m45s\" (UID: \"e976e524-ebac-499e-abdb-2a35d1cd1c86\") " pod="openstack/dnsmasq-dns-8554648995-7m45s" Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.145071 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e976e524-ebac-499e-abdb-2a35d1cd1c86-config\") pod \"dnsmasq-dns-8554648995-7m45s\" (UID: \"e976e524-ebac-499e-abdb-2a35d1cd1c86\") " pod="openstack/dnsmasq-dns-8554648995-7m45s" Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.246825 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvtwb\" (UniqueName: \"kubernetes.io/projected/e976e524-ebac-499e-abdb-2a35d1cd1c86-kube-api-access-xvtwb\") pod \"dnsmasq-dns-8554648995-7m45s\" (UID: \"e976e524-ebac-499e-abdb-2a35d1cd1c86\") " pod="openstack/dnsmasq-dns-8554648995-7m45s" Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.246866 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e976e524-ebac-499e-abdb-2a35d1cd1c86-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-7m45s\" (UID: 
\"e976e524-ebac-499e-abdb-2a35d1cd1c86\") " pod="openstack/dnsmasq-dns-8554648995-7m45s" Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.246911 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e976e524-ebac-499e-abdb-2a35d1cd1c86-dns-svc\") pod \"dnsmasq-dns-8554648995-7m45s\" (UID: \"e976e524-ebac-499e-abdb-2a35d1cd1c86\") " pod="openstack/dnsmasq-dns-8554648995-7m45s" Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.246941 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e976e524-ebac-499e-abdb-2a35d1cd1c86-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-7m45s\" (UID: \"e976e524-ebac-499e-abdb-2a35d1cd1c86\") " pod="openstack/dnsmasq-dns-8554648995-7m45s" Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.246985 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e976e524-ebac-499e-abdb-2a35d1cd1c86-config\") pod \"dnsmasq-dns-8554648995-7m45s\" (UID: \"e976e524-ebac-499e-abdb-2a35d1cd1c86\") " pod="openstack/dnsmasq-dns-8554648995-7m45s" Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.247913 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e976e524-ebac-499e-abdb-2a35d1cd1c86-config\") pod \"dnsmasq-dns-8554648995-7m45s\" (UID: \"e976e524-ebac-499e-abdb-2a35d1cd1c86\") " pod="openstack/dnsmasq-dns-8554648995-7m45s" Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.247955 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e976e524-ebac-499e-abdb-2a35d1cd1c86-dns-svc\") pod \"dnsmasq-dns-8554648995-7m45s\" (UID: \"e976e524-ebac-499e-abdb-2a35d1cd1c86\") " pod="openstack/dnsmasq-dns-8554648995-7m45s" Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.248027 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e976e524-ebac-499e-abdb-2a35d1cd1c86-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-7m45s\" (UID: \"e976e524-ebac-499e-abdb-2a35d1cd1c86\") " pod="openstack/dnsmasq-dns-8554648995-7m45s" Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.249079 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e976e524-ebac-499e-abdb-2a35d1cd1c86-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-7m45s\" (UID: \"e976e524-ebac-499e-abdb-2a35d1cd1c86\") " pod="openstack/dnsmasq-dns-8554648995-7m45s" Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.261778 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-nglkl" Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.263040 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvtwb\" (UniqueName: \"kubernetes.io/projected/e976e524-ebac-499e-abdb-2a35d1cd1c86-kube-api-access-xvtwb\") pod \"dnsmasq-dns-8554648995-7m45s\" (UID: \"e976e524-ebac-499e-abdb-2a35d1cd1c86\") " pod="openstack/dnsmasq-dns-8554648995-7m45s" Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.503965 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-7m45s" Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.946491 5039 generic.go:334] "Generic (PLEG): container finished" podID="ffe59186-82c9-4825-98af-a345318afc40" containerID="8ef3687b147f30c71389ac61b162a10e83fe0f87d670cd01053d0b6370d904ef" exitCode=0 Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.946582 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ffe59186-82c9-4825-98af-a345318afc40","Type":"ContainerDied","Data":"8ef3687b147f30c71389ac61b162a10e83fe0f87d670cd01053d0b6370d904ef"} Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.951129 5039 generic.go:334] "Generic (PLEG): container finished" podID="a7a82611-9333-424b-9772-93de691cc191" containerID="86d7c840690142e77a29acc0f99af63d45a42e6eac6384baf5249f9b9bcda1f6" exitCode=0 Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.951231 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-rg6mc" event={"ID":"a7a82611-9333-424b-9772-93de691cc191","Type":"ContainerDied","Data":"86d7c840690142e77a29acc0f99af63d45a42e6eac6384baf5249f9b9bcda1f6"} Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.953690 5039 generic.go:334] "Generic (PLEG): container finished" podID="9c2f32a2-792f-4f23-b2a5-fd50a1e1373a" containerID="099271e408d36405bffd409c77b39945cf16bd33eb771b32e6c679068653606c" exitCode=0 Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.953834 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a","Type":"ContainerDied","Data":"099271e408d36405bffd409c77b39945cf16bd33eb771b32e6c679068653606c"} Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.957028 5039 generic.go:334] "Generic (PLEG): container finished" podID="f5cc8ebd-9337-4caa-89f3-546dd8bc31de" containerID="71326cbac30cd2aa62cfa69940baa05ff75d674772abf6272dee3ddb55613c9b" exitCode=0 Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.957053 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-mw7gw" event={"ID":"f5cc8ebd-9337-4caa-89f3-546dd8bc31de","Type":"ContainerDied","Data":"71326cbac30cd2aa62cfa69940baa05ff75d674772abf6272dee3ddb55613c9b"} Jan 30 13:23:08 crc kubenswrapper[5039]: I0130 13:23:08.673130 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-mw7gw" Jan 30 13:23:08 crc kubenswrapper[5039]: I0130 13:23:08.683412 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-rg6mc" Jan 30 13:23:08 crc kubenswrapper[5039]: I0130 13:23:08.873903 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvpl9\" (UniqueName: \"kubernetes.io/projected/a7a82611-9333-424b-9772-93de691cc191-kube-api-access-zvpl9\") pod \"a7a82611-9333-424b-9772-93de691cc191\" (UID: \"a7a82611-9333-424b-9772-93de691cc191\") " Jan 30 13:23:08 crc kubenswrapper[5039]: I0130 13:23:08.873976 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5cc8ebd-9337-4caa-89f3-546dd8bc31de-dns-svc\") pod \"f5cc8ebd-9337-4caa-89f3-546dd8bc31de\" (UID: \"f5cc8ebd-9337-4caa-89f3-546dd8bc31de\") " Jan 30 13:23:08 crc kubenswrapper[5039]: I0130 13:23:08.874060 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5cc8ebd-9337-4caa-89f3-546dd8bc31de-config\") pod \"f5cc8ebd-9337-4caa-89f3-546dd8bc31de\" (UID: \"f5cc8ebd-9337-4caa-89f3-546dd8bc31de\") " Jan 30 13:23:08 crc kubenswrapper[5039]: I0130 13:23:08.874164 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7a82611-9333-424b-9772-93de691cc191-dns-svc\") pod \"a7a82611-9333-424b-9772-93de691cc191\" (UID: \"a7a82611-9333-424b-9772-93de691cc191\") " Jan 30 13:23:08 crc kubenswrapper[5039]: I0130 13:23:08.874197 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqprj\" (UniqueName: \"kubernetes.io/projected/f5cc8ebd-9337-4caa-89f3-546dd8bc31de-kube-api-access-hqprj\") pod \"f5cc8ebd-9337-4caa-89f3-546dd8bc31de\" (UID: \"f5cc8ebd-9337-4caa-89f3-546dd8bc31de\") " Jan 30 13:23:08 crc kubenswrapper[5039]: I0130 13:23:08.874268 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7a82611-9333-424b-9772-93de691cc191-config\") pod \"a7a82611-9333-424b-9772-93de691cc191\" (UID: \"a7a82611-9333-424b-9772-93de691cc191\") " Jan 30 13:23:08 crc kubenswrapper[5039]: I0130 13:23:08.889818 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a82611-9333-424b-9772-93de691cc191-kube-api-access-zvpl9" (OuterVolumeSpecName: "kube-api-access-zvpl9") pod "a7a82611-9333-424b-9772-93de691cc191" (UID: "a7a82611-9333-424b-9772-93de691cc191"). InnerVolumeSpecName "kube-api-access-zvpl9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:23:08 crc kubenswrapper[5039]: I0130 13:23:08.897179 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5cc8ebd-9337-4caa-89f3-546dd8bc31de-kube-api-access-hqprj" (OuterVolumeSpecName: "kube-api-access-hqprj") pod "f5cc8ebd-9337-4caa-89f3-546dd8bc31de" (UID: "f5cc8ebd-9337-4caa-89f3-546dd8bc31de"). InnerVolumeSpecName "kube-api-access-hqprj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:23:08 crc kubenswrapper[5039]: I0130 13:23:08.921445 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7a82611-9333-424b-9772-93de691cc191-config" (OuterVolumeSpecName: "config") pod "a7a82611-9333-424b-9772-93de691cc191" (UID: "a7a82611-9333-424b-9772-93de691cc191"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:23:08 crc kubenswrapper[5039]: I0130 13:23:08.930870 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5cc8ebd-9337-4caa-89f3-546dd8bc31de-config" (OuterVolumeSpecName: "config") pod "f5cc8ebd-9337-4caa-89f3-546dd8bc31de" (UID: "f5cc8ebd-9337-4caa-89f3-546dd8bc31de"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:23:08 crc kubenswrapper[5039]: I0130 13:23:08.942888 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7a82611-9333-424b-9772-93de691cc191-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a7a82611-9333-424b-9772-93de691cc191" (UID: "a7a82611-9333-424b-9772-93de691cc191"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:23:08 crc kubenswrapper[5039]: I0130 13:23:08.956302 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5cc8ebd-9337-4caa-89f3-546dd8bc31de-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f5cc8ebd-9337-4caa-89f3-546dd8bc31de" (UID: "f5cc8ebd-9337-4caa-89f3-546dd8bc31de"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:23:08 crc kubenswrapper[5039]: I0130 13:23:08.966363 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-mw7gw" event={"ID":"f5cc8ebd-9337-4caa-89f3-546dd8bc31de","Type":"ContainerDied","Data":"cbee84a8a8c31e3f1c7c486a0883633fe00d06e8b7c84d404fcfa13ba6ce91b2"} Jan 30 13:23:08 crc kubenswrapper[5039]: I0130 13:23:08.966391 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-mw7gw" Jan 30 13:23:08 crc kubenswrapper[5039]: I0130 13:23:08.966410 5039 scope.go:117] "RemoveContainer" containerID="71326cbac30cd2aa62cfa69940baa05ff75d674772abf6272dee3ddb55613c9b" Jan 30 13:23:08 crc kubenswrapper[5039]: I0130 13:23:08.970595 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-rg6mc" event={"ID":"a7a82611-9333-424b-9772-93de691cc191","Type":"ContainerDied","Data":"ec28fc053759e3435832b6d3a98324fe0a14f3b97ec66e5e78b475bb42e38962"} Jan 30 13:23:08 crc kubenswrapper[5039]: I0130 13:23:08.970675 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-rg6mc" Jan 30 13:23:08 crc kubenswrapper[5039]: I0130 13:23:08.976090 5039 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7a82611-9333-424b-9772-93de691cc191-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:08 crc kubenswrapper[5039]: I0130 13:23:08.976198 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqprj\" (UniqueName: \"kubernetes.io/projected/f5cc8ebd-9337-4caa-89f3-546dd8bc31de-kube-api-access-hqprj\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:08 crc kubenswrapper[5039]: I0130 13:23:08.976539 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7a82611-9333-424b-9772-93de691cc191-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:08 crc kubenswrapper[5039]: I0130 13:23:08.976556 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvpl9\" (UniqueName: \"kubernetes.io/projected/a7a82611-9333-424b-9772-93de691cc191-kube-api-access-zvpl9\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:08 crc kubenswrapper[5039]: I0130 13:23:08.976569 5039 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5cc8ebd-9337-4caa-89f3-546dd8bc31de-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:08 crc kubenswrapper[5039]: I0130 13:23:08.976580 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5cc8ebd-9337-4caa-89f3-546dd8bc31de-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:09 crc kubenswrapper[5039]: I0130 13:23:09.015212 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-mw7gw"] Jan 30 13:23:09 crc kubenswrapper[5039]: I0130 13:23:09.033264 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-mw7gw"] Jan 30 13:23:09 crc kubenswrapper[5039]: I0130 13:23:09.041530 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-rg6mc"] Jan 30 13:23:09 crc kubenswrapper[5039]: I0130 13:23:09.056730 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-rg6mc"] Jan 30 13:23:09 crc kubenswrapper[5039]: I0130 13:23:09.196479 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-t7hh5"] Jan 30 13:23:09 crc kubenswrapper[5039]: I0130 13:23:09.278456 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-nglkl"] Jan 30 13:23:09 crc kubenswrapper[5039]: I0130 13:23:09.406350 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-7m45s"] Jan 30 13:23:09 crc kubenswrapper[5039]: W0130 13:23:09.456228 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda83141ea_dc8c_4ebc_bd18_0e30557f7b1b.slice/crio-6a07ba13d287872f4f4f2ed6e8babe101a4eea91a2c321466f75ea0dc8e28efa WatchSource:0}: Error finding container 6a07ba13d287872f4f4f2ed6e8babe101a4eea91a2c321466f75ea0dc8e28efa: Status 404 returned error can't find the container with id 6a07ba13d287872f4f4f2ed6e8babe101a4eea91a2c321466f75ea0dc8e28efa Jan 30 13:23:09 crc kubenswrapper[5039]: W0130 13:23:09.461966 5039 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf66d95ec_ff37_4cc2_a076_e53cc7713582.slice/crio-009b1ddfbb9556f3ab302c967ebd3c3cbaa1879091df6e6c24612e5e9b2895ac WatchSource:0}: Error finding container 009b1ddfbb9556f3ab302c967ebd3c3cbaa1879091df6e6c24612e5e9b2895ac: Status 404 returned error can't find the container with id 009b1ddfbb9556f3ab302c967ebd3c3cbaa1879091df6e6c24612e5e9b2895ac Jan 30 13:23:09 crc kubenswrapper[5039]: W0130 13:23:09.466978 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode976e524_ebac_499e_abdb_2a35d1cd1c86.slice/crio-b6d364bca7efe950f8d13202b949a9d6f1a76008118d580c314b7ed6ba999ae1 WatchSource:0}: Error finding container b6d364bca7efe950f8d13202b949a9d6f1a76008118d580c314b7ed6ba999ae1: Status 404 returned error can't find the container with id b6d364bca7efe950f8d13202b949a9d6f1a76008118d580c314b7ed6ba999ae1 Jan 30 13:23:09 crc kubenswrapper[5039]: I0130 13:23:09.478080 5039 scope.go:117] "RemoveContainer" containerID="fac40e0761cdfed69f49abb9781a2dd41c188268e532ae6a5d299055f044b0c8" Jan 30 13:23:09 crc kubenswrapper[5039]: I0130 13:23:09.562156 5039 scope.go:117] "RemoveContainer" containerID="86d7c840690142e77a29acc0f99af63d45a42e6eac6384baf5249f9b9bcda1f6" Jan 30 13:23:09 crc kubenswrapper[5039]: I0130 13:23:09.662562 5039 scope.go:117] "RemoveContainer" containerID="ae88ba32f0bc52542ef4ea2688355aa20aaa31e39b88e23dcb00363419e1a621" Jan 30 13:23:09 crc kubenswrapper[5039]: I0130 13:23:09.982700 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"c304bfee-961f-403c-a998-de879eedf9c9","Type":"ContainerStarted","Data":"ac7be433e1fc4581e7c85dceffa68e2d11ac386c99f3b775ad7b9bfea986c120"} Jan 30 13:23:09 crc kubenswrapper[5039]: I0130 13:23:09.983101 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 30 13:23:09 crc kubenswrapper[5039]: I0130 13:23:09.996070 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-7m45s" event={"ID":"e976e524-ebac-499e-abdb-2a35d1cd1c86","Type":"ContainerStarted","Data":"8d8841bce6ab8389a2fa557ef707e36bc0e71aa78544b18b6eafa65da2e4bd05"} Jan 30 13:23:09 crc kubenswrapper[5039]: I0130 13:23:09.996112 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-7m45s" event={"ID":"e976e524-ebac-499e-abdb-2a35d1cd1c86","Type":"ContainerStarted","Data":"b6d364bca7efe950f8d13202b949a9d6f1a76008118d580c314b7ed6ba999ae1"} Jan 30 13:23:10 crc kubenswrapper[5039]: I0130 13:23:10.006691 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"bc1a05aa-7803-43a1-9525-fd135af4323a","Type":"ContainerStarted","Data":"b98aab825421aef11d5e89ff275916e782fc1065fcfef1cf798164f33a0d8aeb"} Jan 30 13:23:10 crc kubenswrapper[5039]: I0130 13:23:10.008170 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=17.110983804 podStartE2EDuration="22.008148474s" podCreationTimestamp="2026-01-30 13:22:48 +0000 UTC" firstStartedPulling="2026-01-30 13:23:03.596161088 +0000 UTC m=+1148.256842315" lastFinishedPulling="2026-01-30 13:23:08.493325758 +0000 UTC m=+1153.154006985" observedRunningTime="2026-01-30 13:23:10.0026807 +0000 UTC m=+1154.663361927" watchObservedRunningTime="2026-01-30 13:23:10.008148474 +0000 UTC m=+1154.668829701" Jan 30 13:23:10 crc kubenswrapper[5039]: I0130 13:23:10.009720 
5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-nglkl" event={"ID":"a83141ea-dc8c-4ebc-bd18-0e30557f7b1b","Type":"ContainerStarted","Data":"6a07ba13d287872f4f4f2ed6e8babe101a4eea91a2c321466f75ea0dc8e28efa"} Jan 30 13:23:10 crc kubenswrapper[5039]: I0130 13:23:10.017204 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a4f02ddf-62c8-49b8-8e86-d6b87c61172b","Type":"ContainerStarted","Data":"4a75aaf8ae30feba231405992fcbc38c506ed8999f2c135d64d71b1e43a1b981"} Jan 30 13:23:10 crc kubenswrapper[5039]: I0130 13:23:10.030467 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 30 13:23:10 crc kubenswrapper[5039]: I0130 13:23:10.042058 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-t7hh5" event={"ID":"f66d95ec-ff37-4cc2-a076-e53cc7713582","Type":"ContainerStarted","Data":"009b1ddfbb9556f3ab302c967ebd3c3cbaa1879091df6e6c24612e5e9b2895ac"} Jan 30 13:23:10 crc kubenswrapper[5039]: I0130 13:23:10.044750 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ffe59186-82c9-4825-98af-a345318afc40","Type":"ContainerStarted","Data":"318ec0d48627de3296e163bd9e901ae032d9e692981c9e7373ce827d836b847f"} Jan 30 13:23:10 crc kubenswrapper[5039]: I0130 13:23:10.046771 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=14.050313752 podStartE2EDuration="20.046758491s" podCreationTimestamp="2026-01-30 13:22:50 +0000 UTC" firstStartedPulling="2026-01-30 13:23:03.597875593 +0000 UTC m=+1148.258556820" lastFinishedPulling="2026-01-30 13:23:09.594320332 +0000 UTC m=+1154.255001559" observedRunningTime="2026-01-30 13:23:10.04669828 +0000 UTC m=+1154.707379517" watchObservedRunningTime="2026-01-30 13:23:10.046758491 +0000 UTC m=+1154.707439718" Jan 30 13:23:10 crc kubenswrapper[5039]: I0130 13:23:10.052415 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a","Type":"ContainerStarted","Data":"d3e1de70ee6fccf94c178c436b16b841fb062895d65d5c25af3308a7fa503673"} Jan 30 13:23:10 crc kubenswrapper[5039]: I0130 13:23:10.060146 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-z6nkm" event={"ID":"953eeac5-b943-4036-be33-58eb347c04ef","Type":"ContainerStarted","Data":"771350ed2b93233e58a57b899ffff051dff84408406a23a7a766011a406b0955"} Jan 30 13:23:10 crc kubenswrapper[5039]: I0130 13:23:10.085387 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=9.183868012 podStartE2EDuration="25.085363519s" podCreationTimestamp="2026-01-30 13:22:45 +0000 UTC" firstStartedPulling="2026-01-30 13:22:47.209585482 +0000 UTC m=+1131.870266709" lastFinishedPulling="2026-01-30 13:23:03.111080969 +0000 UTC m=+1147.771762216" observedRunningTime="2026-01-30 13:23:10.075950881 +0000 UTC m=+1154.736632118" watchObservedRunningTime="2026-01-30 13:23:10.085363519 +0000 UTC m=+1154.746044746" Jan 30 13:23:10 crc kubenswrapper[5039]: I0130 13:23:10.109258 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7a82611-9333-424b-9772-93de691cc191" path="/var/lib/kubelet/pods/a7a82611-9333-424b-9772-93de691cc191/volumes" Jan 30 13:23:10 crc kubenswrapper[5039]: I0130 13:23:10.109901 5039 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="f5cc8ebd-9337-4caa-89f3-546dd8bc31de" path="/var/lib/kubelet/pods/f5cc8ebd-9337-4caa-89f3-546dd8bc31de/volumes" Jan 30 13:23:10 crc kubenswrapper[5039]: I0130 13:23:10.124372 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=23.124345185 podStartE2EDuration="23.124345185s" podCreationTimestamp="2026-01-30 13:22:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:23:10.11465282 +0000 UTC m=+1154.775334067" watchObservedRunningTime="2026-01-30 13:23:10.124345185 +0000 UTC m=+1154.785026422" Jan 30 13:23:11 crc kubenswrapper[5039]: I0130 13:23:11.076767 5039 generic.go:334] "Generic (PLEG): container finished" podID="e976e524-ebac-499e-abdb-2a35d1cd1c86" containerID="8d8841bce6ab8389a2fa557ef707e36bc0e71aa78544b18b6eafa65da2e4bd05" exitCode=0 Jan 30 13:23:11 crc kubenswrapper[5039]: I0130 13:23:11.077118 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-7m45s" event={"ID":"e976e524-ebac-499e-abdb-2a35d1cd1c86","Type":"ContainerDied","Data":"8d8841bce6ab8389a2fa557ef707e36bc0e71aa78544b18b6eafa65da2e4bd05"} Jan 30 13:23:11 crc kubenswrapper[5039]: I0130 13:23:11.081110 5039 generic.go:334] "Generic (PLEG): container finished" podID="a83141ea-dc8c-4ebc-bd18-0e30557f7b1b" containerID="947ebc6f343eb234cd99ef7347fc63e22d66798c7153c8fcf12c703e1ae5fba7" exitCode=0 Jan 30 13:23:11 crc kubenswrapper[5039]: I0130 13:23:11.082496 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-nglkl" event={"ID":"a83141ea-dc8c-4ebc-bd18-0e30557f7b1b","Type":"ContainerDied","Data":"947ebc6f343eb234cd99ef7347fc63e22d66798c7153c8fcf12c703e1ae5fba7"} Jan 30 13:23:11 crc kubenswrapper[5039]: I0130 13:23:11.088504 5039 generic.go:334] "Generic (PLEG): container finished" podID="953eeac5-b943-4036-be33-58eb347c04ef" containerID="771350ed2b93233e58a57b899ffff051dff84408406a23a7a766011a406b0955" exitCode=0 Jan 30 13:23:11 crc kubenswrapper[5039]: I0130 13:23:11.088667 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-z6nkm" event={"ID":"953eeac5-b943-4036-be33-58eb347c04ef","Type":"ContainerDied","Data":"771350ed2b93233e58a57b899ffff051dff84408406a23a7a766011a406b0955"} Jan 30 13:23:11 crc kubenswrapper[5039]: I0130 13:23:11.092080 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"644a9c77-bad0-41fe-a6ee-8bb5e6580f87","Type":"ContainerStarted","Data":"4d5c9eabd2a148f8cde28a63e272a15c413b9cfe385803d5c9c8871fe5f41730"} Jan 30 13:23:11 crc kubenswrapper[5039]: I0130 13:23:11.095288 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sqvrc" event={"ID":"d4aa0600-fb12-4641-96a3-26cb56853bd3","Type":"ContainerStarted","Data":"75b2b074c5e43fbf32830c5d4cc675c1c399f9e561bf52836c26d438f8856dc1"} Jan 30 13:23:11 crc kubenswrapper[5039]: I0130 13:23:11.160001 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-sqvrc" podStartSLOduration=10.992627609 podStartE2EDuration="16.159977547s" podCreationTimestamp="2026-01-30 13:22:55 +0000 UTC" firstStartedPulling="2026-01-30 13:23:03.73974578 +0000 UTC m=+1148.400427007" lastFinishedPulling="2026-01-30 13:23:08.907095718 +0000 UTC m=+1153.567776945" observedRunningTime="2026-01-30 13:23:11.158048806 +0000 UTC 
m=+1155.818730063" watchObservedRunningTime="2026-01-30 13:23:11.159977547 +0000 UTC m=+1155.820658784" Jan 30 13:23:11 crc kubenswrapper[5039]: I0130 13:23:11.205077 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-sqvrc" Jan 30 13:23:16 crc kubenswrapper[5039]: I0130 13:23:16.875555 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 30 13:23:16 crc kubenswrapper[5039]: I0130 13:23:16.877153 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 30 13:23:18 crc kubenswrapper[5039]: I0130 13:23:18.401888 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 30 13:23:18 crc kubenswrapper[5039]: I0130 13:23:18.402618 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 30 13:23:18 crc kubenswrapper[5039]: I0130 13:23:18.530834 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 30 13:23:18 crc kubenswrapper[5039]: I0130 13:23:18.700090 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 30 13:23:19 crc kubenswrapper[5039]: I0130 13:23:19.164067 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"bc1a05aa-7803-43a1-9525-fd135af4323a","Type":"ContainerStarted","Data":"4e3e47142906bded5aa0ccf1b7bb8bdc30cca633a277d81355ccb82c40518808"} Jan 30 13:23:19 crc kubenswrapper[5039]: I0130 13:23:19.167176 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-t7hh5" event={"ID":"f66d95ec-ff37-4cc2-a076-e53cc7713582","Type":"ContainerStarted","Data":"c834681d05c14e7ff690cbb1acfa640e617aaf24a5dbda9da270fdba7ac94fdb"} Jan 30 13:23:19 crc kubenswrapper[5039]: I0130 13:23:19.170679 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-nglkl" event={"ID":"a83141ea-dc8c-4ebc-bd18-0e30557f7b1b","Type":"ContainerStarted","Data":"6123e176126d77aa095e00295b93176ed05274f07a9a92b8840464b892cf910b"} Jan 30 13:23:19 crc kubenswrapper[5039]: I0130 13:23:19.172138 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bf47b49b7-nglkl" Jan 30 13:23:19 crc kubenswrapper[5039]: I0130 13:23:19.176090 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-z6nkm" event={"ID":"953eeac5-b943-4036-be33-58eb347c04ef","Type":"ContainerStarted","Data":"664d5ee50096a705bfe00ba284ecf23de58063a3e74a3c5f1b12d176c74177c9"} Jan 30 13:23:19 crc kubenswrapper[5039]: I0130 13:23:19.176171 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-z6nkm" event={"ID":"953eeac5-b943-4036-be33-58eb347c04ef","Type":"ContainerStarted","Data":"1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8"} Jan 30 13:23:19 crc kubenswrapper[5039]: I0130 13:23:19.176339 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-z6nkm" Jan 30 13:23:19 crc kubenswrapper[5039]: I0130 13:23:19.178459 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a4f02ddf-62c8-49b8-8e86-d6b87c61172b","Type":"ContainerStarted","Data":"cdcdb331d3c60bbb406b32aef476ab5726a7b53b8ae0c9a927450b27c6dd5c71"} Jan 30 13:23:19 crc 
kubenswrapper[5039]: I0130 13:23:19.180020 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-7m45s" event={"ID":"e976e524-ebac-499e-abdb-2a35d1cd1c86","Type":"ContainerStarted","Data":"05cb537b8de9e9b4ce1d650f75dc2488156515798186af357cf0a32b2ad2804b"} Jan 30 13:23:19 crc kubenswrapper[5039]: I0130 13:23:19.192966 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=11.865471691 podStartE2EDuration="25.192943257s" podCreationTimestamp="2026-01-30 13:22:54 +0000 UTC" firstStartedPulling="2026-01-30 13:23:05.06073143 +0000 UTC m=+1149.721412657" lastFinishedPulling="2026-01-30 13:23:18.388202996 +0000 UTC m=+1163.048884223" observedRunningTime="2026-01-30 13:23:19.182517072 +0000 UTC m=+1163.843198319" watchObservedRunningTime="2026-01-30 13:23:19.192943257 +0000 UTC m=+1163.853624514" Jan 30 13:23:19 crc kubenswrapper[5039]: I0130 13:23:19.212280 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=9.699312678 podStartE2EDuration="24.212259225s" podCreationTimestamp="2026-01-30 13:22:55 +0000 UTC" firstStartedPulling="2026-01-30 13:23:03.905036674 +0000 UTC m=+1148.565717911" lastFinishedPulling="2026-01-30 13:23:18.417983231 +0000 UTC m=+1163.078664458" observedRunningTime="2026-01-30 13:23:19.203166766 +0000 UTC m=+1163.863848043" watchObservedRunningTime="2026-01-30 13:23:19.212259225 +0000 UTC m=+1163.872940462" Jan 30 13:23:19 crc kubenswrapper[5039]: I0130 13:23:19.228638 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-t7hh5" podStartSLOduration=4.292804433 podStartE2EDuration="13.228617866s" podCreationTimestamp="2026-01-30 13:23:06 +0000 UTC" firstStartedPulling="2026-01-30 13:23:09.463472525 +0000 UTC m=+1154.124153752" lastFinishedPulling="2026-01-30 13:23:18.399285958 +0000 UTC m=+1163.059967185" observedRunningTime="2026-01-30 13:23:19.226718656 +0000 UTC m=+1163.887399903" watchObservedRunningTime="2026-01-30 13:23:19.228617866 +0000 UTC m=+1163.889299103" Jan 30 13:23:19 crc kubenswrapper[5039]: I0130 13:23:19.275797 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bf47b49b7-nglkl" podStartSLOduration=13.275778529 podStartE2EDuration="13.275778529s" podCreationTimestamp="2026-01-30 13:23:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:23:19.262708845 +0000 UTC m=+1163.923390102" watchObservedRunningTime="2026-01-30 13:23:19.275778529 +0000 UTC m=+1163.936459776" Jan 30 13:23:19 crc kubenswrapper[5039]: I0130 13:23:19.311557 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-z6nkm" podStartSLOduration=19.757941992 podStartE2EDuration="24.311539301s" podCreationTimestamp="2026-01-30 13:22:55 +0000 UTC" firstStartedPulling="2026-01-30 13:23:04.101609253 +0000 UTC m=+1148.762290480" lastFinishedPulling="2026-01-30 13:23:08.655206552 +0000 UTC m=+1153.315887789" observedRunningTime="2026-01-30 13:23:19.300339486 +0000 UTC m=+1163.961020723" watchObservedRunningTime="2026-01-30 13:23:19.311539301 +0000 UTC m=+1163.972220528" Jan 30 13:23:19 crc kubenswrapper[5039]: I0130 13:23:19.324314 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 30 13:23:19 crc 
kubenswrapper[5039]: I0130 13:23:19.329167 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-7m45s" podStartSLOduration=12.329156015 podStartE2EDuration="12.329156015s" podCreationTimestamp="2026-01-30 13:23:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:23:19.326378752 +0000 UTC m=+1163.987059979" watchObservedRunningTime="2026-01-30 13:23:19.329156015 +0000 UTC m=+1163.989837242" Jan 30 13:23:19 crc kubenswrapper[5039]: I0130 13:23:19.768498 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 30 13:23:19 crc kubenswrapper[5039]: I0130 13:23:19.826488 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 30 13:23:20 crc kubenswrapper[5039]: I0130 13:23:20.193144 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-7m45s" Jan 30 13:23:20 crc kubenswrapper[5039]: I0130 13:23:20.193193 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-z6nkm" Jan 30 13:23:20 crc kubenswrapper[5039]: I0130 13:23:20.193209 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 30 13:23:20 crc kubenswrapper[5039]: I0130 13:23:20.239421 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 30 13:23:20 crc kubenswrapper[5039]: I0130 13:23:20.462424 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-nglkl"] Jan 30 13:23:20 crc kubenswrapper[5039]: I0130 13:23:20.502593 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-lcwd2"] Jan 30 13:23:20 crc kubenswrapper[5039]: E0130 13:23:20.502951 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5cc8ebd-9337-4caa-89f3-546dd8bc31de" containerName="init" Jan 30 13:23:20 crc kubenswrapper[5039]: I0130 13:23:20.502974 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5cc8ebd-9337-4caa-89f3-546dd8bc31de" containerName="init" Jan 30 13:23:20 crc kubenswrapper[5039]: E0130 13:23:20.503046 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7a82611-9333-424b-9772-93de691cc191" containerName="dnsmasq-dns" Jan 30 13:23:20 crc kubenswrapper[5039]: I0130 13:23:20.503057 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7a82611-9333-424b-9772-93de691cc191" containerName="dnsmasq-dns" Jan 30 13:23:20 crc kubenswrapper[5039]: E0130 13:23:20.503086 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7a82611-9333-424b-9772-93de691cc191" containerName="init" Jan 30 13:23:20 crc kubenswrapper[5039]: I0130 13:23:20.503092 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7a82611-9333-424b-9772-93de691cc191" containerName="init" Jan 30 13:23:20 crc kubenswrapper[5039]: E0130 13:23:20.503105 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5cc8ebd-9337-4caa-89f3-546dd8bc31de" containerName="dnsmasq-dns" Jan 30 13:23:20 crc kubenswrapper[5039]: I0130 13:23:20.503113 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5cc8ebd-9337-4caa-89f3-546dd8bc31de" containerName="dnsmasq-dns" Jan 30 13:23:20 crc kubenswrapper[5039]: I0130 13:23:20.503282 5039 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="f5cc8ebd-9337-4caa-89f3-546dd8bc31de" containerName="dnsmasq-dns" Jan 30 13:23:20 crc kubenswrapper[5039]: I0130 13:23:20.503314 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7a82611-9333-424b-9772-93de691cc191" containerName="dnsmasq-dns" Jan 30 13:23:20 crc kubenswrapper[5039]: I0130 13:23:20.504228 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-lcwd2" Jan 30 13:23:20 crc kubenswrapper[5039]: I0130 13:23:20.505374 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 30 13:23:20 crc kubenswrapper[5039]: I0130 13:23:20.532403 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-lcwd2"] Jan 30 13:23:20 crc kubenswrapper[5039]: I0130 13:23:20.689125 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/46226e88-9d62-4d6f-a009-ed620de5e723-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-lcwd2\" (UID: \"46226e88-9d62-4d6f-a009-ed620de5e723\") " pod="openstack/dnsmasq-dns-b8fbc5445-lcwd2" Jan 30 13:23:20 crc kubenswrapper[5039]: I0130 13:23:20.689173 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46226e88-9d62-4d6f-a009-ed620de5e723-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-lcwd2\" (UID: \"46226e88-9d62-4d6f-a009-ed620de5e723\") " pod="openstack/dnsmasq-dns-b8fbc5445-lcwd2" Jan 30 13:23:20 crc kubenswrapper[5039]: I0130 13:23:20.689199 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46226e88-9d62-4d6f-a009-ed620de5e723-config\") pod \"dnsmasq-dns-b8fbc5445-lcwd2\" (UID: \"46226e88-9d62-4d6f-a009-ed620de5e723\") " pod="openstack/dnsmasq-dns-b8fbc5445-lcwd2" Jan 30 13:23:20 crc kubenswrapper[5039]: I0130 13:23:20.689311 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxjgq\" (UniqueName: \"kubernetes.io/projected/46226e88-9d62-4d6f-a009-ed620de5e723-kube-api-access-hxjgq\") pod \"dnsmasq-dns-b8fbc5445-lcwd2\" (UID: \"46226e88-9d62-4d6f-a009-ed620de5e723\") " pod="openstack/dnsmasq-dns-b8fbc5445-lcwd2" Jan 30 13:23:20 crc kubenswrapper[5039]: I0130 13:23:20.689379 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/46226e88-9d62-4d6f-a009-ed620de5e723-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-lcwd2\" (UID: \"46226e88-9d62-4d6f-a009-ed620de5e723\") " pod="openstack/dnsmasq-dns-b8fbc5445-lcwd2" Jan 30 13:23:20 crc kubenswrapper[5039]: I0130 13:23:20.790528 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/46226e88-9d62-4d6f-a009-ed620de5e723-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-lcwd2\" (UID: \"46226e88-9d62-4d6f-a009-ed620de5e723\") " pod="openstack/dnsmasq-dns-b8fbc5445-lcwd2" Jan 30 13:23:20 crc kubenswrapper[5039]: I0130 13:23:20.790575 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46226e88-9d62-4d6f-a009-ed620de5e723-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-lcwd2\" (UID: \"46226e88-9d62-4d6f-a009-ed620de5e723\") " 
pod="openstack/dnsmasq-dns-b8fbc5445-lcwd2" Jan 30 13:23:20 crc kubenswrapper[5039]: I0130 13:23:20.790598 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46226e88-9d62-4d6f-a009-ed620de5e723-config\") pod \"dnsmasq-dns-b8fbc5445-lcwd2\" (UID: \"46226e88-9d62-4d6f-a009-ed620de5e723\") " pod="openstack/dnsmasq-dns-b8fbc5445-lcwd2" Jan 30 13:23:20 crc kubenswrapper[5039]: I0130 13:23:20.790642 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxjgq\" (UniqueName: \"kubernetes.io/projected/46226e88-9d62-4d6f-a009-ed620de5e723-kube-api-access-hxjgq\") pod \"dnsmasq-dns-b8fbc5445-lcwd2\" (UID: \"46226e88-9d62-4d6f-a009-ed620de5e723\") " pod="openstack/dnsmasq-dns-b8fbc5445-lcwd2" Jan 30 13:23:20 crc kubenswrapper[5039]: I0130 13:23:20.790679 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/46226e88-9d62-4d6f-a009-ed620de5e723-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-lcwd2\" (UID: \"46226e88-9d62-4d6f-a009-ed620de5e723\") " pod="openstack/dnsmasq-dns-b8fbc5445-lcwd2" Jan 30 13:23:20 crc kubenswrapper[5039]: I0130 13:23:20.791348 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/46226e88-9d62-4d6f-a009-ed620de5e723-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-lcwd2\" (UID: \"46226e88-9d62-4d6f-a009-ed620de5e723\") " pod="openstack/dnsmasq-dns-b8fbc5445-lcwd2" Jan 30 13:23:20 crc kubenswrapper[5039]: I0130 13:23:20.791440 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/46226e88-9d62-4d6f-a009-ed620de5e723-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-lcwd2\" (UID: \"46226e88-9d62-4d6f-a009-ed620de5e723\") " pod="openstack/dnsmasq-dns-b8fbc5445-lcwd2" Jan 30 13:23:20 crc kubenswrapper[5039]: I0130 13:23:20.791880 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46226e88-9d62-4d6f-a009-ed620de5e723-config\") pod \"dnsmasq-dns-b8fbc5445-lcwd2\" (UID: \"46226e88-9d62-4d6f-a009-ed620de5e723\") " pod="openstack/dnsmasq-dns-b8fbc5445-lcwd2" Jan 30 13:23:20 crc kubenswrapper[5039]: I0130 13:23:20.792415 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46226e88-9d62-4d6f-a009-ed620de5e723-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-lcwd2\" (UID: \"46226e88-9d62-4d6f-a009-ed620de5e723\") " pod="openstack/dnsmasq-dns-b8fbc5445-lcwd2" Jan 30 13:23:20 crc kubenswrapper[5039]: I0130 13:23:20.809187 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxjgq\" (UniqueName: \"kubernetes.io/projected/46226e88-9d62-4d6f-a009-ed620de5e723-kube-api-access-hxjgq\") pod \"dnsmasq-dns-b8fbc5445-lcwd2\" (UID: \"46226e88-9d62-4d6f-a009-ed620de5e723\") " pod="openstack/dnsmasq-dns-b8fbc5445-lcwd2" Jan 30 13:23:20 crc kubenswrapper[5039]: I0130 13:23:20.826706 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-lcwd2" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.096801 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.150496 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.201415 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.244903 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.327156 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-lcwd2"] Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.444176 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.445676 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.449872 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.450503 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.450603 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-rnpln" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.450707 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.456063 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.521399 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.533958 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.536334 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.543752 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.544051 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-q8wbr" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.544736 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.570430 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.603127 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1c7913a5-4818-4edd-a390-61d79c64a30b-scripts\") pod \"ovn-northd-0\" (UID: \"1c7913a5-4818-4edd-a390-61d79c64a30b\") " pod="openstack/ovn-northd-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.603179 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c7913a5-4818-4edd-a390-61d79c64a30b-config\") pod \"ovn-northd-0\" (UID: \"1c7913a5-4818-4edd-a390-61d79c64a30b\") " pod="openstack/ovn-northd-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.603319 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c7913a5-4818-4edd-a390-61d79c64a30b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"1c7913a5-4818-4edd-a390-61d79c64a30b\") " pod="openstack/ovn-northd-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.603367 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1c7913a5-4818-4edd-a390-61d79c64a30b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"1c7913a5-4818-4edd-a390-61d79c64a30b\") " pod="openstack/ovn-northd-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.603420 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzw7n\" (UniqueName: \"kubernetes.io/projected/1c7913a5-4818-4edd-a390-61d79c64a30b-kube-api-access-hzw7n\") pod \"ovn-northd-0\" (UID: \"1c7913a5-4818-4edd-a390-61d79c64a30b\") " pod="openstack/ovn-northd-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.603673 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c7913a5-4818-4edd-a390-61d79c64a30b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"1c7913a5-4818-4edd-a390-61d79c64a30b\") " pod="openstack/ovn-northd-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.603738 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c7913a5-4818-4edd-a390-61d79c64a30b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"1c7913a5-4818-4edd-a390-61d79c64a30b\") " pod="openstack/ovn-northd-0" Jan 30 13:23:21 crc kubenswrapper[5039]: 
I0130 13:23:21.705334 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tm5h\" (UniqueName: \"kubernetes.io/projected/8ada089a-5096-4658-829e-46ed96867c7e-kube-api-access-9tm5h\") pod \"swift-storage-0\" (UID: \"8ada089a-5096-4658-829e-46ed96867c7e\") " pod="openstack/swift-storage-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.705397 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"8ada089a-5096-4658-829e-46ed96867c7e\") " pod="openstack/swift-storage-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.705620 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c7913a5-4818-4edd-a390-61d79c64a30b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"1c7913a5-4818-4edd-a390-61d79c64a30b\") " pod="openstack/ovn-northd-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.705688 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/8ada089a-5096-4658-829e-46ed96867c7e-lock\") pod \"swift-storage-0\" (UID: \"8ada089a-5096-4658-829e-46ed96867c7e\") " pod="openstack/swift-storage-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.705723 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c7913a5-4818-4edd-a390-61d79c64a30b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"1c7913a5-4818-4edd-a390-61d79c64a30b\") " pod="openstack/ovn-northd-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.705763 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1c7913a5-4818-4edd-a390-61d79c64a30b-scripts\") pod \"ovn-northd-0\" (UID: \"1c7913a5-4818-4edd-a390-61d79c64a30b\") " pod="openstack/ovn-northd-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.705800 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c7913a5-4818-4edd-a390-61d79c64a30b-config\") pod \"ovn-northd-0\" (UID: \"1c7913a5-4818-4edd-a390-61d79c64a30b\") " pod="openstack/ovn-northd-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.705846 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8ada089a-5096-4658-829e-46ed96867c7e-cache\") pod \"swift-storage-0\" (UID: \"8ada089a-5096-4658-829e-46ed96867c7e\") " pod="openstack/swift-storage-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.705906 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ada089a-5096-4658-829e-46ed96867c7e-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"8ada089a-5096-4658-829e-46ed96867c7e\") " pod="openstack/swift-storage-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.705951 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8ada089a-5096-4658-829e-46ed96867c7e-etc-swift\") pod \"swift-storage-0\" (UID: 
\"8ada089a-5096-4658-829e-46ed96867c7e\") " pod="openstack/swift-storage-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.705979 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c7913a5-4818-4edd-a390-61d79c64a30b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"1c7913a5-4818-4edd-a390-61d79c64a30b\") " pod="openstack/ovn-northd-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.706036 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1c7913a5-4818-4edd-a390-61d79c64a30b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"1c7913a5-4818-4edd-a390-61d79c64a30b\") " pod="openstack/ovn-northd-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.706088 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzw7n\" (UniqueName: \"kubernetes.io/projected/1c7913a5-4818-4edd-a390-61d79c64a30b-kube-api-access-hzw7n\") pod \"ovn-northd-0\" (UID: \"1c7913a5-4818-4edd-a390-61d79c64a30b\") " pod="openstack/ovn-northd-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.706436 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1c7913a5-4818-4edd-a390-61d79c64a30b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"1c7913a5-4818-4edd-a390-61d79c64a30b\") " pod="openstack/ovn-northd-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.706984 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c7913a5-4818-4edd-a390-61d79c64a30b-config\") pod \"ovn-northd-0\" (UID: \"1c7913a5-4818-4edd-a390-61d79c64a30b\") " pod="openstack/ovn-northd-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.706994 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1c7913a5-4818-4edd-a390-61d79c64a30b-scripts\") pod \"ovn-northd-0\" (UID: \"1c7913a5-4818-4edd-a390-61d79c64a30b\") " pod="openstack/ovn-northd-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.709534 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c7913a5-4818-4edd-a390-61d79c64a30b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"1c7913a5-4818-4edd-a390-61d79c64a30b\") " pod="openstack/ovn-northd-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.709760 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c7913a5-4818-4edd-a390-61d79c64a30b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"1c7913a5-4818-4edd-a390-61d79c64a30b\") " pod="openstack/ovn-northd-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.711597 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c7913a5-4818-4edd-a390-61d79c64a30b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"1c7913a5-4818-4edd-a390-61d79c64a30b\") " pod="openstack/ovn-northd-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.711949 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.725153 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-hzw7n\" (UniqueName: \"kubernetes.io/projected/1c7913a5-4818-4edd-a390-61d79c64a30b-kube-api-access-hzw7n\") pod \"ovn-northd-0\" (UID: \"1c7913a5-4818-4edd-a390-61d79c64a30b\") " pod="openstack/ovn-northd-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.778867 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.785291 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.807769 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/8ada089a-5096-4658-829e-46ed96867c7e-lock\") pod \"swift-storage-0\" (UID: \"8ada089a-5096-4658-829e-46ed96867c7e\") " pod="openstack/swift-storage-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.807872 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8ada089a-5096-4658-829e-46ed96867c7e-cache\") pod \"swift-storage-0\" (UID: \"8ada089a-5096-4658-829e-46ed96867c7e\") " pod="openstack/swift-storage-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.807903 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ada089a-5096-4658-829e-46ed96867c7e-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"8ada089a-5096-4658-829e-46ed96867c7e\") " pod="openstack/swift-storage-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.807932 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8ada089a-5096-4658-829e-46ed96867c7e-etc-swift\") pod \"swift-storage-0\" (UID: \"8ada089a-5096-4658-829e-46ed96867c7e\") " pod="openstack/swift-storage-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.807998 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tm5h\" (UniqueName: \"kubernetes.io/projected/8ada089a-5096-4658-829e-46ed96867c7e-kube-api-access-9tm5h\") pod \"swift-storage-0\" (UID: \"8ada089a-5096-4658-829e-46ed96867c7e\") " pod="openstack/swift-storage-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.808066 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"8ada089a-5096-4658-829e-46ed96867c7e\") " pod="openstack/swift-storage-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.808377 5039 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"8ada089a-5096-4658-829e-46ed96867c7e\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/swift-storage-0" Jan 30 13:23:21 crc kubenswrapper[5039]: E0130 13:23:21.809555 5039 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 13:23:21 crc kubenswrapper[5039]: E0130 13:23:21.809589 5039 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 13:23:21 crc kubenswrapper[5039]: E0130 13:23:21.809640 5039 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8ada089a-5096-4658-829e-46ed96867c7e-etc-swift podName:8ada089a-5096-4658-829e-46ed96867c7e nodeName:}" failed. No retries permitted until 2026-01-30 13:23:22.309622395 +0000 UTC m=+1166.970303612 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8ada089a-5096-4658-829e-46ed96867c7e-etc-swift") pod "swift-storage-0" (UID: "8ada089a-5096-4658-829e-46ed96867c7e") : configmap "swift-ring-files" not found Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.810250 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/8ada089a-5096-4658-829e-46ed96867c7e-lock\") pod \"swift-storage-0\" (UID: \"8ada089a-5096-4658-829e-46ed96867c7e\") " pod="openstack/swift-storage-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.811536 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8ada089a-5096-4658-829e-46ed96867c7e-cache\") pod \"swift-storage-0\" (UID: \"8ada089a-5096-4658-829e-46ed96867c7e\") " pod="openstack/swift-storage-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.818865 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ada089a-5096-4658-829e-46ed96867c7e-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"8ada089a-5096-4658-829e-46ed96867c7e\") " pod="openstack/swift-storage-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.829051 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tm5h\" (UniqueName: \"kubernetes.io/projected/8ada089a-5096-4658-829e-46ed96867c7e-kube-api-access-9tm5h\") pod \"swift-storage-0\" (UID: \"8ada089a-5096-4658-829e-46ed96867c7e\") " pod="openstack/swift-storage-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.842365 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"8ada089a-5096-4658-829e-46ed96867c7e\") " pod="openstack/swift-storage-0" Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.033580 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-6fssn"] Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.034947 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-6fssn" Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.038478 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.038545 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.039807 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.042050 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-6fssn"] Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.115682 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c7db6f42-583a-450d-b142-ec7c5ae4eee0-dispersionconf\") pod \"swift-ring-rebalance-6fssn\" (UID: \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\") " pod="openstack/swift-ring-rebalance-6fssn" Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.115722 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7gp8\" (UniqueName: \"kubernetes.io/projected/c7db6f42-583a-450d-b142-ec7c5ae4eee0-kube-api-access-v7gp8\") pod \"swift-ring-rebalance-6fssn\" (UID: \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\") " pod="openstack/swift-ring-rebalance-6fssn" Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.115764 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c7db6f42-583a-450d-b142-ec7c5ae4eee0-scripts\") pod \"swift-ring-rebalance-6fssn\" (UID: \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\") " pod="openstack/swift-ring-rebalance-6fssn" Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.115795 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7db6f42-583a-450d-b142-ec7c5ae4eee0-combined-ca-bundle\") pod \"swift-ring-rebalance-6fssn\" (UID: \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\") " pod="openstack/swift-ring-rebalance-6fssn" Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.115843 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c7db6f42-583a-450d-b142-ec7c5ae4eee0-ring-data-devices\") pod \"swift-ring-rebalance-6fssn\" (UID: \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\") " pod="openstack/swift-ring-rebalance-6fssn" Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.115890 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c7db6f42-583a-450d-b142-ec7c5ae4eee0-swiftconf\") pod \"swift-ring-rebalance-6fssn\" (UID: \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\") " pod="openstack/swift-ring-rebalance-6fssn" Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.115949 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c7db6f42-583a-450d-b142-ec7c5ae4eee0-etc-swift\") pod \"swift-ring-rebalance-6fssn\" (UID: \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\") " pod="openstack/swift-ring-rebalance-6fssn" Jan 30 
13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.211045 5039 generic.go:334] "Generic (PLEG): container finished" podID="46226e88-9d62-4d6f-a009-ed620de5e723" containerID="c501539c05b552aabde61fba4428dbac8596a94a697c1ab7952dc176af274b0f" exitCode=0 Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.211108 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-lcwd2" event={"ID":"46226e88-9d62-4d6f-a009-ed620de5e723","Type":"ContainerDied","Data":"c501539c05b552aabde61fba4428dbac8596a94a697c1ab7952dc176af274b0f"} Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.211145 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-lcwd2" event={"ID":"46226e88-9d62-4d6f-a009-ed620de5e723","Type":"ContainerStarted","Data":"e1528364e7751cb7c328a7866fec171c18aae97021ba92ae46488b104ead34c1"} Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.211733 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bf47b49b7-nglkl" podUID="a83141ea-dc8c-4ebc-bd18-0e30557f7b1b" containerName="dnsmasq-dns" containerID="cri-o://6123e176126d77aa095e00295b93176ed05274f07a9a92b8840464b892cf910b" gracePeriod=10 Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.217162 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c7db6f42-583a-450d-b142-ec7c5ae4eee0-dispersionconf\") pod \"swift-ring-rebalance-6fssn\" (UID: \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\") " pod="openstack/swift-ring-rebalance-6fssn" Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.217211 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7gp8\" (UniqueName: \"kubernetes.io/projected/c7db6f42-583a-450d-b142-ec7c5ae4eee0-kube-api-access-v7gp8\") pod \"swift-ring-rebalance-6fssn\" (UID: \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\") " pod="openstack/swift-ring-rebalance-6fssn" Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.217250 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c7db6f42-583a-450d-b142-ec7c5ae4eee0-scripts\") pod \"swift-ring-rebalance-6fssn\" (UID: \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\") " pod="openstack/swift-ring-rebalance-6fssn" Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.217277 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7db6f42-583a-450d-b142-ec7c5ae4eee0-combined-ca-bundle\") pod \"swift-ring-rebalance-6fssn\" (UID: \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\") " pod="openstack/swift-ring-rebalance-6fssn" Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.217318 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c7db6f42-583a-450d-b142-ec7c5ae4eee0-ring-data-devices\") pod \"swift-ring-rebalance-6fssn\" (UID: \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\") " pod="openstack/swift-ring-rebalance-6fssn" Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.217360 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c7db6f42-583a-450d-b142-ec7c5ae4eee0-swiftconf\") pod \"swift-ring-rebalance-6fssn\" (UID: \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\") " pod="openstack/swift-ring-rebalance-6fssn" Jan 30 13:23:22 
crc kubenswrapper[5039]: I0130 13:23:22.217453 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c7db6f42-583a-450d-b142-ec7c5ae4eee0-etc-swift\") pod \"swift-ring-rebalance-6fssn\" (UID: \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\") " pod="openstack/swift-ring-rebalance-6fssn" Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.218122 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c7db6f42-583a-450d-b142-ec7c5ae4eee0-etc-swift\") pod \"swift-ring-rebalance-6fssn\" (UID: \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\") " pod="openstack/swift-ring-rebalance-6fssn" Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.218533 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c7db6f42-583a-450d-b142-ec7c5ae4eee0-scripts\") pod \"swift-ring-rebalance-6fssn\" (UID: \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\") " pod="openstack/swift-ring-rebalance-6fssn" Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.218703 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c7db6f42-583a-450d-b142-ec7c5ae4eee0-ring-data-devices\") pod \"swift-ring-rebalance-6fssn\" (UID: \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\") " pod="openstack/swift-ring-rebalance-6fssn" Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.225592 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c7db6f42-583a-450d-b142-ec7c5ae4eee0-dispersionconf\") pod \"swift-ring-rebalance-6fssn\" (UID: \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\") " pod="openstack/swift-ring-rebalance-6fssn" Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.225729 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7db6f42-583a-450d-b142-ec7c5ae4eee0-combined-ca-bundle\") pod \"swift-ring-rebalance-6fssn\" (UID: \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\") " pod="openstack/swift-ring-rebalance-6fssn" Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.226152 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c7db6f42-583a-450d-b142-ec7c5ae4eee0-swiftconf\") pod \"swift-ring-rebalance-6fssn\" (UID: \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\") " pod="openstack/swift-ring-rebalance-6fssn" Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.233705 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.258029 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7gp8\" (UniqueName: \"kubernetes.io/projected/c7db6f42-583a-450d-b142-ec7c5ae4eee0-kube-api-access-v7gp8\") pod \"swift-ring-rebalance-6fssn\" (UID: \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\") " pod="openstack/swift-ring-rebalance-6fssn" Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.318832 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8ada089a-5096-4658-829e-46ed96867c7e-etc-swift\") pod \"swift-storage-0\" (UID: \"8ada089a-5096-4658-829e-46ed96867c7e\") " pod="openstack/swift-storage-0" Jan 30 13:23:22 crc kubenswrapper[5039]: E0130 13:23:22.319500 5039 
projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 13:23:22 crc kubenswrapper[5039]: E0130 13:23:22.319519 5039 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 13:23:22 crc kubenswrapper[5039]: E0130 13:23:22.319555 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8ada089a-5096-4658-829e-46ed96867c7e-etc-swift podName:8ada089a-5096-4658-829e-46ed96867c7e nodeName:}" failed. No retries permitted until 2026-01-30 13:23:23.319541826 +0000 UTC m=+1167.980223053 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8ada089a-5096-4658-829e-46ed96867c7e-etc-swift") pod "swift-storage-0" (UID: "8ada089a-5096-4658-829e-46ed96867c7e") : configmap "swift-ring-files" not found Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.358111 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-6fssn" Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.626260 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-nglkl" Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.746692 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a83141ea-dc8c-4ebc-bd18-0e30557f7b1b-config\") pod \"a83141ea-dc8c-4ebc-bd18-0e30557f7b1b\" (UID: \"a83141ea-dc8c-4ebc-bd18-0e30557f7b1b\") " Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.746811 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tps8\" (UniqueName: \"kubernetes.io/projected/a83141ea-dc8c-4ebc-bd18-0e30557f7b1b-kube-api-access-7tps8\") pod \"a83141ea-dc8c-4ebc-bd18-0e30557f7b1b\" (UID: \"a83141ea-dc8c-4ebc-bd18-0e30557f7b1b\") " Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.747033 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a83141ea-dc8c-4ebc-bd18-0e30557f7b1b-dns-svc\") pod \"a83141ea-dc8c-4ebc-bd18-0e30557f7b1b\" (UID: \"a83141ea-dc8c-4ebc-bd18-0e30557f7b1b\") " Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.747174 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a83141ea-dc8c-4ebc-bd18-0e30557f7b1b-ovsdbserver-nb\") pod \"a83141ea-dc8c-4ebc-bd18-0e30557f7b1b\" (UID: \"a83141ea-dc8c-4ebc-bd18-0e30557f7b1b\") " Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.751939 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a83141ea-dc8c-4ebc-bd18-0e30557f7b1b-kube-api-access-7tps8" (OuterVolumeSpecName: "kube-api-access-7tps8") pod "a83141ea-dc8c-4ebc-bd18-0e30557f7b1b" (UID: "a83141ea-dc8c-4ebc-bd18-0e30557f7b1b"). InnerVolumeSpecName "kube-api-access-7tps8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.782678 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a83141ea-dc8c-4ebc-bd18-0e30557f7b1b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a83141ea-dc8c-4ebc-bd18-0e30557f7b1b" (UID: "a83141ea-dc8c-4ebc-bd18-0e30557f7b1b"). 
InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.791407 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a83141ea-dc8c-4ebc-bd18-0e30557f7b1b-config" (OuterVolumeSpecName: "config") pod "a83141ea-dc8c-4ebc-bd18-0e30557f7b1b" (UID: "a83141ea-dc8c-4ebc-bd18-0e30557f7b1b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.800133 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a83141ea-dc8c-4ebc-bd18-0e30557f7b1b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a83141ea-dc8c-4ebc-bd18-0e30557f7b1b" (UID: "a83141ea-dc8c-4ebc-bd18-0e30557f7b1b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.834758 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-6fssn"] Jan 30 13:23:22 crc kubenswrapper[5039]: W0130 13:23:22.837069 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7db6f42_583a_450d_b142_ec7c5ae4eee0.slice/crio-4cf49ef2e8c1ca74571a40425974dc064ff646b8c20647e22da254f1964d55f3 WatchSource:0}: Error finding container 4cf49ef2e8c1ca74571a40425974dc064ff646b8c20647e22da254f1964d55f3: Status 404 returned error can't find the container with id 4cf49ef2e8c1ca74571a40425974dc064ff646b8c20647e22da254f1964d55f3 Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.849147 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tps8\" (UniqueName: \"kubernetes.io/projected/a83141ea-dc8c-4ebc-bd18-0e30557f7b1b-kube-api-access-7tps8\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.849176 5039 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a83141ea-dc8c-4ebc-bd18-0e30557f7b1b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.849185 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a83141ea-dc8c-4ebc-bd18-0e30557f7b1b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.849202 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a83141ea-dc8c-4ebc-bd18-0e30557f7b1b-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:23 crc kubenswrapper[5039]: I0130 13:23:23.221142 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-6fssn" event={"ID":"c7db6f42-583a-450d-b142-ec7c5ae4eee0","Type":"ContainerStarted","Data":"4cf49ef2e8c1ca74571a40425974dc064ff646b8c20647e22da254f1964d55f3"} Jan 30 13:23:23 crc kubenswrapper[5039]: I0130 13:23:23.222520 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1c7913a5-4818-4edd-a390-61d79c64a30b","Type":"ContainerStarted","Data":"6eb99b8efc985784fe2897360ff7becef50a7e77036fc7511f352a6d9ddaf281"} Jan 30 13:23:23 crc kubenswrapper[5039]: I0130 13:23:23.224427 5039 generic.go:334] "Generic (PLEG): container finished" podID="a83141ea-dc8c-4ebc-bd18-0e30557f7b1b" containerID="6123e176126d77aa095e00295b93176ed05274f07a9a92b8840464b892cf910b" 
exitCode=0 Jan 30 13:23:23 crc kubenswrapper[5039]: I0130 13:23:23.224510 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-nglkl" event={"ID":"a83141ea-dc8c-4ebc-bd18-0e30557f7b1b","Type":"ContainerDied","Data":"6123e176126d77aa095e00295b93176ed05274f07a9a92b8840464b892cf910b"} Jan 30 13:23:23 crc kubenswrapper[5039]: I0130 13:23:23.224542 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-nglkl" event={"ID":"a83141ea-dc8c-4ebc-bd18-0e30557f7b1b","Type":"ContainerDied","Data":"6a07ba13d287872f4f4f2ed6e8babe101a4eea91a2c321466f75ea0dc8e28efa"} Jan 30 13:23:23 crc kubenswrapper[5039]: I0130 13:23:23.224577 5039 scope.go:117] "RemoveContainer" containerID="6123e176126d77aa095e00295b93176ed05274f07a9a92b8840464b892cf910b" Jan 30 13:23:23 crc kubenswrapper[5039]: I0130 13:23:23.224609 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-nglkl" Jan 30 13:23:23 crc kubenswrapper[5039]: I0130 13:23:23.232301 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-lcwd2" event={"ID":"46226e88-9d62-4d6f-a009-ed620de5e723","Type":"ContainerStarted","Data":"d5379299d8b266e726812239f744884f6b993d70d67fd4b875e7a2bc377927ec"} Jan 30 13:23:23 crc kubenswrapper[5039]: I0130 13:23:23.232390 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-lcwd2" Jan 30 13:23:23 crc kubenswrapper[5039]: I0130 13:23:23.255646 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-lcwd2" podStartSLOduration=3.255630238 podStartE2EDuration="3.255630238s" podCreationTimestamp="2026-01-30 13:23:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:23:23.253498051 +0000 UTC m=+1167.914179298" watchObservedRunningTime="2026-01-30 13:23:23.255630238 +0000 UTC m=+1167.916311465" Jan 30 13:23:23 crc kubenswrapper[5039]: I0130 13:23:23.272197 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-nglkl"] Jan 30 13:23:23 crc kubenswrapper[5039]: I0130 13:23:23.278280 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-nglkl"] Jan 30 13:23:23 crc kubenswrapper[5039]: I0130 13:23:23.311248 5039 scope.go:117] "RemoveContainer" containerID="947ebc6f343eb234cd99ef7347fc63e22d66798c7153c8fcf12c703e1ae5fba7" Jan 30 13:23:23 crc kubenswrapper[5039]: I0130 13:23:23.330218 5039 scope.go:117] "RemoveContainer" containerID="6123e176126d77aa095e00295b93176ed05274f07a9a92b8840464b892cf910b" Jan 30 13:23:23 crc kubenswrapper[5039]: E0130 13:23:23.330606 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6123e176126d77aa095e00295b93176ed05274f07a9a92b8840464b892cf910b\": container with ID starting with 6123e176126d77aa095e00295b93176ed05274f07a9a92b8840464b892cf910b not found: ID does not exist" containerID="6123e176126d77aa095e00295b93176ed05274f07a9a92b8840464b892cf910b" Jan 30 13:23:23 crc kubenswrapper[5039]: I0130 13:23:23.330647 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6123e176126d77aa095e00295b93176ed05274f07a9a92b8840464b892cf910b"} err="failed to get container status \"6123e176126d77aa095e00295b93176ed05274f07a9a92b8840464b892cf910b\": rpc 
error: code = NotFound desc = could not find container \"6123e176126d77aa095e00295b93176ed05274f07a9a92b8840464b892cf910b\": container with ID starting with 6123e176126d77aa095e00295b93176ed05274f07a9a92b8840464b892cf910b not found: ID does not exist" Jan 30 13:23:23 crc kubenswrapper[5039]: I0130 13:23:23.330711 5039 scope.go:117] "RemoveContainer" containerID="947ebc6f343eb234cd99ef7347fc63e22d66798c7153c8fcf12c703e1ae5fba7" Jan 30 13:23:23 crc kubenswrapper[5039]: E0130 13:23:23.331084 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"947ebc6f343eb234cd99ef7347fc63e22d66798c7153c8fcf12c703e1ae5fba7\": container with ID starting with 947ebc6f343eb234cd99ef7347fc63e22d66798c7153c8fcf12c703e1ae5fba7 not found: ID does not exist" containerID="947ebc6f343eb234cd99ef7347fc63e22d66798c7153c8fcf12c703e1ae5fba7" Jan 30 13:23:23 crc kubenswrapper[5039]: I0130 13:23:23.331144 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"947ebc6f343eb234cd99ef7347fc63e22d66798c7153c8fcf12c703e1ae5fba7"} err="failed to get container status \"947ebc6f343eb234cd99ef7347fc63e22d66798c7153c8fcf12c703e1ae5fba7\": rpc error: code = NotFound desc = could not find container \"947ebc6f343eb234cd99ef7347fc63e22d66798c7153c8fcf12c703e1ae5fba7\": container with ID starting with 947ebc6f343eb234cd99ef7347fc63e22d66798c7153c8fcf12c703e1ae5fba7 not found: ID does not exist" Jan 30 13:23:23 crc kubenswrapper[5039]: I0130 13:23:23.356513 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8ada089a-5096-4658-829e-46ed96867c7e-etc-swift\") pod \"swift-storage-0\" (UID: \"8ada089a-5096-4658-829e-46ed96867c7e\") " pod="openstack/swift-storage-0" Jan 30 13:23:23 crc kubenswrapper[5039]: E0130 13:23:23.357058 5039 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 13:23:23 crc kubenswrapper[5039]: E0130 13:23:23.357087 5039 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 13:23:23 crc kubenswrapper[5039]: E0130 13:23:23.357143 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8ada089a-5096-4658-829e-46ed96867c7e-etc-swift podName:8ada089a-5096-4658-829e-46ed96867c7e nodeName:}" failed. No retries permitted until 2026-01-30 13:23:25.357123765 +0000 UTC m=+1170.017805042 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8ada089a-5096-4658-829e-46ed96867c7e-etc-swift") pod "swift-storage-0" (UID: "8ada089a-5096-4658-829e-46ed96867c7e") : configmap "swift-ring-files" not found Jan 30 13:23:24 crc kubenswrapper[5039]: I0130 13:23:24.105706 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a83141ea-dc8c-4ebc-bd18-0e30557f7b1b" path="/var/lib/kubelet/pods/a83141ea-dc8c-4ebc-bd18-0e30557f7b1b/volumes" Jan 30 13:23:24 crc kubenswrapper[5039]: I0130 13:23:24.239863 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1c7913a5-4818-4edd-a390-61d79c64a30b","Type":"ContainerStarted","Data":"10852e51d9199bf290d28ef284e425f741ad8888a4c93170c5de8cb6b7587e31"} Jan 30 13:23:24 crc kubenswrapper[5039]: I0130 13:23:24.239900 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1c7913a5-4818-4edd-a390-61d79c64a30b","Type":"ContainerStarted","Data":"2c579add236caed3aa75293bd0e40f1d3f1911a4d976e4d9781070a770b956ca"} Jan 30 13:23:24 crc kubenswrapper[5039]: I0130 13:23:24.239959 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 30 13:23:24 crc kubenswrapper[5039]: I0130 13:23:24.263425 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.13618815 podStartE2EDuration="3.263405649s" podCreationTimestamp="2026-01-30 13:23:21 +0000 UTC" firstStartedPulling="2026-01-30 13:23:22.245661718 +0000 UTC m=+1166.906342945" lastFinishedPulling="2026-01-30 13:23:23.372879207 +0000 UTC m=+1168.033560444" observedRunningTime="2026-01-30 13:23:24.25598676 +0000 UTC m=+1168.916667997" watchObservedRunningTime="2026-01-30 13:23:24.263405649 +0000 UTC m=+1168.924086876" Jan 30 13:23:25 crc kubenswrapper[5039]: I0130 13:23:25.387429 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8ada089a-5096-4658-829e-46ed96867c7e-etc-swift\") pod \"swift-storage-0\" (UID: \"8ada089a-5096-4658-829e-46ed96867c7e\") " pod="openstack/swift-storage-0" Jan 30 13:23:25 crc kubenswrapper[5039]: E0130 13:23:25.388314 5039 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 13:23:25 crc kubenswrapper[5039]: E0130 13:23:25.388409 5039 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 13:23:25 crc kubenswrapper[5039]: E0130 13:23:25.388523 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8ada089a-5096-4658-829e-46ed96867c7e-etc-swift podName:8ada089a-5096-4658-829e-46ed96867c7e nodeName:}" failed. No retries permitted until 2026-01-30 13:23:29.388505339 +0000 UTC m=+1174.049186566 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8ada089a-5096-4658-829e-46ed96867c7e-etc-swift") pod "swift-storage-0" (UID: "8ada089a-5096-4658-829e-46ed96867c7e") : configmap "swift-ring-files" not found Jan 30 13:23:25 crc kubenswrapper[5039]: I0130 13:23:25.626158 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-g7w7q"] Jan 30 13:23:25 crc kubenswrapper[5039]: E0130 13:23:25.626557 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a83141ea-dc8c-4ebc-bd18-0e30557f7b1b" containerName="dnsmasq-dns" Jan 30 13:23:25 crc kubenswrapper[5039]: I0130 13:23:25.626574 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="a83141ea-dc8c-4ebc-bd18-0e30557f7b1b" containerName="dnsmasq-dns" Jan 30 13:23:25 crc kubenswrapper[5039]: E0130 13:23:25.626597 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a83141ea-dc8c-4ebc-bd18-0e30557f7b1b" containerName="init" Jan 30 13:23:25 crc kubenswrapper[5039]: I0130 13:23:25.626604 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="a83141ea-dc8c-4ebc-bd18-0e30557f7b1b" containerName="init" Jan 30 13:23:25 crc kubenswrapper[5039]: I0130 13:23:25.626788 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="a83141ea-dc8c-4ebc-bd18-0e30557f7b1b" containerName="dnsmasq-dns" Jan 30 13:23:25 crc kubenswrapper[5039]: I0130 13:23:25.627427 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-g7w7q" Jan 30 13:23:25 crc kubenswrapper[5039]: I0130 13:23:25.629374 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 30 13:23:25 crc kubenswrapper[5039]: I0130 13:23:25.633646 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-g7w7q"] Jan 30 13:23:25 crc kubenswrapper[5039]: I0130 13:23:25.794750 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f6622a1-348d-45b9-b04f-93c20ada9ad0-operator-scripts\") pod \"root-account-create-update-g7w7q\" (UID: \"6f6622a1-348d-45b9-b04f-93c20ada9ad0\") " pod="openstack/root-account-create-update-g7w7q" Jan 30 13:23:25 crc kubenswrapper[5039]: I0130 13:23:25.794821 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd6kw\" (UniqueName: \"kubernetes.io/projected/6f6622a1-348d-45b9-b04f-93c20ada9ad0-kube-api-access-cd6kw\") pod \"root-account-create-update-g7w7q\" (UID: \"6f6622a1-348d-45b9-b04f-93c20ada9ad0\") " pod="openstack/root-account-create-update-g7w7q" Jan 30 13:23:25 crc kubenswrapper[5039]: I0130 13:23:25.896196 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cd6kw\" (UniqueName: \"kubernetes.io/projected/6f6622a1-348d-45b9-b04f-93c20ada9ad0-kube-api-access-cd6kw\") pod \"root-account-create-update-g7w7q\" (UID: \"6f6622a1-348d-45b9-b04f-93c20ada9ad0\") " pod="openstack/root-account-create-update-g7w7q" Jan 30 13:23:25 crc kubenswrapper[5039]: I0130 13:23:25.896375 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f6622a1-348d-45b9-b04f-93c20ada9ad0-operator-scripts\") pod \"root-account-create-update-g7w7q\" (UID: \"6f6622a1-348d-45b9-b04f-93c20ada9ad0\") " 
pod="openstack/root-account-create-update-g7w7q" Jan 30 13:23:25 crc kubenswrapper[5039]: I0130 13:23:25.897654 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f6622a1-348d-45b9-b04f-93c20ada9ad0-operator-scripts\") pod \"root-account-create-update-g7w7q\" (UID: \"6f6622a1-348d-45b9-b04f-93c20ada9ad0\") " pod="openstack/root-account-create-update-g7w7q" Jan 30 13:23:25 crc kubenswrapper[5039]: I0130 13:23:25.916253 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cd6kw\" (UniqueName: \"kubernetes.io/projected/6f6622a1-348d-45b9-b04f-93c20ada9ad0-kube-api-access-cd6kw\") pod \"root-account-create-update-g7w7q\" (UID: \"6f6622a1-348d-45b9-b04f-93c20ada9ad0\") " pod="openstack/root-account-create-update-g7w7q" Jan 30 13:23:25 crc kubenswrapper[5039]: I0130 13:23:25.971130 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-g7w7q" Jan 30 13:23:26 crc kubenswrapper[5039]: I0130 13:23:26.977479 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-g7w7q"] Jan 30 13:23:26 crc kubenswrapper[5039]: W0130 13:23:26.979251 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f6622a1_348d_45b9_b04f_93c20ada9ad0.slice/crio-e547a6f74ff85b484957535d0af28d080d59c5c9820420c4102acb288ca4def3 WatchSource:0}: Error finding container e547a6f74ff85b484957535d0af28d080d59c5c9820420c4102acb288ca4def3: Status 404 returned error can't find the container with id e547a6f74ff85b484957535d0af28d080d59c5c9820420c4102acb288ca4def3 Jan 30 13:23:27 crc kubenswrapper[5039]: I0130 13:23:27.263227 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-g7w7q" event={"ID":"6f6622a1-348d-45b9-b04f-93c20ada9ad0","Type":"ContainerStarted","Data":"f00f04e0e2345ca5cf5de4d1e45c1d68d94f6d4efa0c8d8c72c35940af974bd8"} Jan 30 13:23:27 crc kubenswrapper[5039]: I0130 13:23:27.263532 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-g7w7q" event={"ID":"6f6622a1-348d-45b9-b04f-93c20ada9ad0","Type":"ContainerStarted","Data":"e547a6f74ff85b484957535d0af28d080d59c5c9820420c4102acb288ca4def3"} Jan 30 13:23:27 crc kubenswrapper[5039]: I0130 13:23:27.264713 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-6fssn" event={"ID":"c7db6f42-583a-450d-b142-ec7c5ae4eee0","Type":"ContainerStarted","Data":"efda310ff742ee8493a8e0fc6890efda0722835d6cda9241536cfc113fb172f2"} Jan 30 13:23:27 crc kubenswrapper[5039]: I0130 13:23:27.282304 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-g7w7q" podStartSLOduration=2.282287821 podStartE2EDuration="2.282287821s" podCreationTimestamp="2026-01-30 13:23:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:23:27.280082262 +0000 UTC m=+1171.940763499" watchObservedRunningTime="2026-01-30 13:23:27.282287821 +0000 UTC m=+1171.942969048" Jan 30 13:23:27 crc kubenswrapper[5039]: I0130 13:23:27.294200 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-6fssn" podStartSLOduration=2.487124735 podStartE2EDuration="6.294180849s" podCreationTimestamp="2026-01-30 13:23:21 
+0000 UTC" firstStartedPulling="2026-01-30 13:23:22.839730513 +0000 UTC m=+1167.500411730" lastFinishedPulling="2026-01-30 13:23:26.646786617 +0000 UTC m=+1171.307467844" observedRunningTime="2026-01-30 13:23:27.29272486 +0000 UTC m=+1171.953406077" watchObservedRunningTime="2026-01-30 13:23:27.294180849 +0000 UTC m=+1171.954862076" Jan 30 13:23:27 crc kubenswrapper[5039]: I0130 13:23:27.506200 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-7m45s" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.278223 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-frc4f"] Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.280256 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-frc4f" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.281831 5039 generic.go:334] "Generic (PLEG): container finished" podID="6f6622a1-348d-45b9-b04f-93c20ada9ad0" containerID="f00f04e0e2345ca5cf5de4d1e45c1d68d94f6d4efa0c8d8c72c35940af974bd8" exitCode=0 Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.282837 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-g7w7q" event={"ID":"6f6622a1-348d-45b9-b04f-93c20ada9ad0","Type":"ContainerDied","Data":"f00f04e0e2345ca5cf5de4d1e45c1d68d94f6d4efa0c8d8c72c35940af974bd8"} Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.282890 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-frc4f"] Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.383258 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-e7d3-account-create-update-2tgv7"] Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.384337 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-e7d3-account-create-update-2tgv7" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.387467 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.399250 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-e7d3-account-create-update-2tgv7"] Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.437558 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4461ebd9-1119-41a1-94c8-cc453e06c2f3-operator-scripts\") pod \"keystone-db-create-frc4f\" (UID: \"4461ebd9-1119-41a1-94c8-cc453e06c2f3\") " pod="openstack/keystone-db-create-frc4f" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.437662 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ct45\" (UniqueName: \"kubernetes.io/projected/4461ebd9-1119-41a1-94c8-cc453e06c2f3-kube-api-access-2ct45\") pod \"keystone-db-create-frc4f\" (UID: \"4461ebd9-1119-41a1-94c8-cc453e06c2f3\") " pod="openstack/keystone-db-create-frc4f" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.538870 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ct45\" (UniqueName: \"kubernetes.io/projected/4461ebd9-1119-41a1-94c8-cc453e06c2f3-kube-api-access-2ct45\") pod \"keystone-db-create-frc4f\" (UID: \"4461ebd9-1119-41a1-94c8-cc453e06c2f3\") " pod="openstack/keystone-db-create-frc4f" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.538940 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bchl2\" (UniqueName: \"kubernetes.io/projected/6ce80998-c4b6-49af-b37b-5ed6a510b704-kube-api-access-bchl2\") pod \"keystone-e7d3-account-create-update-2tgv7\" (UID: \"6ce80998-c4b6-49af-b37b-5ed6a510b704\") " pod="openstack/keystone-e7d3-account-create-update-2tgv7" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.539201 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ce80998-c4b6-49af-b37b-5ed6a510b704-operator-scripts\") pod \"keystone-e7d3-account-create-update-2tgv7\" (UID: \"6ce80998-c4b6-49af-b37b-5ed6a510b704\") " pod="openstack/keystone-e7d3-account-create-update-2tgv7" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.539276 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4461ebd9-1119-41a1-94c8-cc453e06c2f3-operator-scripts\") pod \"keystone-db-create-frc4f\" (UID: \"4461ebd9-1119-41a1-94c8-cc453e06c2f3\") " pod="openstack/keystone-db-create-frc4f" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.540030 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4461ebd9-1119-41a1-94c8-cc453e06c2f3-operator-scripts\") pod \"keystone-db-create-frc4f\" (UID: \"4461ebd9-1119-41a1-94c8-cc453e06c2f3\") " pod="openstack/keystone-db-create-frc4f" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.560909 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ct45\" (UniqueName: \"kubernetes.io/projected/4461ebd9-1119-41a1-94c8-cc453e06c2f3-kube-api-access-2ct45\") pod 
\"keystone-db-create-frc4f\" (UID: \"4461ebd9-1119-41a1-94c8-cc453e06c2f3\") " pod="openstack/keystone-db-create-frc4f" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.603788 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-frc4f" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.631981 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-rx74m"] Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.633369 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-rx74m" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.655103 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ce80998-c4b6-49af-b37b-5ed6a510b704-operator-scripts\") pod \"keystone-e7d3-account-create-update-2tgv7\" (UID: \"6ce80998-c4b6-49af-b37b-5ed6a510b704\") " pod="openstack/keystone-e7d3-account-create-update-2tgv7" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.655292 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bchl2\" (UniqueName: \"kubernetes.io/projected/6ce80998-c4b6-49af-b37b-5ed6a510b704-kube-api-access-bchl2\") pod \"keystone-e7d3-account-create-update-2tgv7\" (UID: \"6ce80998-c4b6-49af-b37b-5ed6a510b704\") " pod="openstack/keystone-e7d3-account-create-update-2tgv7" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.656518 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ce80998-c4b6-49af-b37b-5ed6a510b704-operator-scripts\") pod \"keystone-e7d3-account-create-update-2tgv7\" (UID: \"6ce80998-c4b6-49af-b37b-5ed6a510b704\") " pod="openstack/keystone-e7d3-account-create-update-2tgv7" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.666216 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-rx74m"] Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.676450 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5666-account-create-update-cbw62"] Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.677440 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bchl2\" (UniqueName: \"kubernetes.io/projected/6ce80998-c4b6-49af-b37b-5ed6a510b704-kube-api-access-bchl2\") pod \"keystone-e7d3-account-create-update-2tgv7\" (UID: \"6ce80998-c4b6-49af-b37b-5ed6a510b704\") " pod="openstack/keystone-e7d3-account-create-update-2tgv7" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.681204 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5666-account-create-update-cbw62" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.684305 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.706274 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-e7d3-account-create-update-2tgv7" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.708413 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5666-account-create-update-cbw62"] Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.756323 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2ed7c55-cfa8-44fe-94d1-3bc6232c6686-operator-scripts\") pod \"placement-db-create-rx74m\" (UID: \"b2ed7c55-cfa8-44fe-94d1-3bc6232c6686\") " pod="openstack/placement-db-create-rx74m" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.756370 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b57c2\" (UniqueName: \"kubernetes.io/projected/b2ed7c55-cfa8-44fe-94d1-3bc6232c6686-kube-api-access-b57c2\") pod \"placement-db-create-rx74m\" (UID: \"b2ed7c55-cfa8-44fe-94d1-3bc6232c6686\") " pod="openstack/placement-db-create-rx74m" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.836610 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-r9q2p"] Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.840810 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-r9q2p" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.853217 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-r9q2p"] Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.859776 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33a20c1e-b7d7-4f94-b313-58229c1c9d4e-operator-scripts\") pod \"placement-5666-account-create-update-cbw62\" (UID: \"33a20c1e-b7d7-4f94-b313-58229c1c9d4e\") " pod="openstack/placement-5666-account-create-update-cbw62" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.859930 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l665x\" (UniqueName: \"kubernetes.io/projected/33a20c1e-b7d7-4f94-b313-58229c1c9d4e-kube-api-access-l665x\") pod \"placement-5666-account-create-update-cbw62\" (UID: \"33a20c1e-b7d7-4f94-b313-58229c1c9d4e\") " pod="openstack/placement-5666-account-create-update-cbw62" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.859984 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2ed7c55-cfa8-44fe-94d1-3bc6232c6686-operator-scripts\") pod \"placement-db-create-rx74m\" (UID: \"b2ed7c55-cfa8-44fe-94d1-3bc6232c6686\") " pod="openstack/placement-db-create-rx74m" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.860025 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b57c2\" (UniqueName: \"kubernetes.io/projected/b2ed7c55-cfa8-44fe-94d1-3bc6232c6686-kube-api-access-b57c2\") pod \"placement-db-create-rx74m\" (UID: \"b2ed7c55-cfa8-44fe-94d1-3bc6232c6686\") " pod="openstack/placement-db-create-rx74m" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.861268 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2ed7c55-cfa8-44fe-94d1-3bc6232c6686-operator-scripts\") pod \"placement-db-create-rx74m\" (UID: 
\"b2ed7c55-cfa8-44fe-94d1-3bc6232c6686\") " pod="openstack/placement-db-create-rx74m" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.879223 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b57c2\" (UniqueName: \"kubernetes.io/projected/b2ed7c55-cfa8-44fe-94d1-3bc6232c6686-kube-api-access-b57c2\") pod \"placement-db-create-rx74m\" (UID: \"b2ed7c55-cfa8-44fe-94d1-3bc6232c6686\") " pod="openstack/placement-db-create-rx74m" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.949196 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-286b-account-create-update-cg7w7"] Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.950302 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-286b-account-create-update-cg7w7" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.952695 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.959031 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-286b-account-create-update-cg7w7"] Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.963388 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l665x\" (UniqueName: \"kubernetes.io/projected/33a20c1e-b7d7-4f94-b313-58229c1c9d4e-kube-api-access-l665x\") pod \"placement-5666-account-create-update-cbw62\" (UID: \"33a20c1e-b7d7-4f94-b313-58229c1c9d4e\") " pod="openstack/placement-5666-account-create-update-cbw62" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.963585 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28rbr\" (UniqueName: \"kubernetes.io/projected/68dc52c3-d455-4a3d-b9fd-8aae22e9e7de-kube-api-access-28rbr\") pod \"glance-db-create-r9q2p\" (UID: \"68dc52c3-d455-4a3d-b9fd-8aae22e9e7de\") " pod="openstack/glance-db-create-r9q2p" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.963688 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33a20c1e-b7d7-4f94-b313-58229c1c9d4e-operator-scripts\") pod \"placement-5666-account-create-update-cbw62\" (UID: \"33a20c1e-b7d7-4f94-b313-58229c1c9d4e\") " pod="openstack/placement-5666-account-create-update-cbw62" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.963730 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68dc52c3-d455-4a3d-b9fd-8aae22e9e7de-operator-scripts\") pod \"glance-db-create-r9q2p\" (UID: \"68dc52c3-d455-4a3d-b9fd-8aae22e9e7de\") " pod="openstack/glance-db-create-r9q2p" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.964813 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33a20c1e-b7d7-4f94-b313-58229c1c9d4e-operator-scripts\") pod \"placement-5666-account-create-update-cbw62\" (UID: \"33a20c1e-b7d7-4f94-b313-58229c1c9d4e\") " pod="openstack/placement-5666-account-create-update-cbw62" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.978484 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l665x\" (UniqueName: \"kubernetes.io/projected/33a20c1e-b7d7-4f94-b313-58229c1c9d4e-kube-api-access-l665x\") pod 
\"placement-5666-account-create-update-cbw62\" (UID: \"33a20c1e-b7d7-4f94-b313-58229c1c9d4e\") " pod="openstack/placement-5666-account-create-update-cbw62" Jan 30 13:23:29 crc kubenswrapper[5039]: W0130 13:23:29.037001 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4461ebd9_1119_41a1_94c8_cc453e06c2f3.slice/crio-078c41aa162058e38d204b52a5149fcda1574c97ebee0a315b0a84b44780cbf6 WatchSource:0}: Error finding container 078c41aa162058e38d204b52a5149fcda1574c97ebee0a315b0a84b44780cbf6: Status 404 returned error can't find the container with id 078c41aa162058e38d204b52a5149fcda1574c97ebee0a315b0a84b44780cbf6 Jan 30 13:23:29 crc kubenswrapper[5039]: I0130 13:23:29.037705 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-frc4f"] Jan 30 13:23:29 crc kubenswrapper[5039]: I0130 13:23:29.065811 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0a3587a-d7dd-4007-aff8-acfcd399496f-operator-scripts\") pod \"glance-286b-account-create-update-cg7w7\" (UID: \"c0a3587a-d7dd-4007-aff8-acfcd399496f\") " pod="openstack/glance-286b-account-create-update-cg7w7" Jan 30 13:23:29 crc kubenswrapper[5039]: I0130 13:23:29.065855 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb6q2\" (UniqueName: \"kubernetes.io/projected/c0a3587a-d7dd-4007-aff8-acfcd399496f-kube-api-access-bb6q2\") pod \"glance-286b-account-create-update-cg7w7\" (UID: \"c0a3587a-d7dd-4007-aff8-acfcd399496f\") " pod="openstack/glance-286b-account-create-update-cg7w7" Jan 30 13:23:29 crc kubenswrapper[5039]: I0130 13:23:29.065948 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28rbr\" (UniqueName: \"kubernetes.io/projected/68dc52c3-d455-4a3d-b9fd-8aae22e9e7de-kube-api-access-28rbr\") pod \"glance-db-create-r9q2p\" (UID: \"68dc52c3-d455-4a3d-b9fd-8aae22e9e7de\") " pod="openstack/glance-db-create-r9q2p" Jan 30 13:23:29 crc kubenswrapper[5039]: I0130 13:23:29.065983 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68dc52c3-d455-4a3d-b9fd-8aae22e9e7de-operator-scripts\") pod \"glance-db-create-r9q2p\" (UID: \"68dc52c3-d455-4a3d-b9fd-8aae22e9e7de\") " pod="openstack/glance-db-create-r9q2p" Jan 30 13:23:29 crc kubenswrapper[5039]: I0130 13:23:29.066714 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68dc52c3-d455-4a3d-b9fd-8aae22e9e7de-operator-scripts\") pod \"glance-db-create-r9q2p\" (UID: \"68dc52c3-d455-4a3d-b9fd-8aae22e9e7de\") " pod="openstack/glance-db-create-r9q2p" Jan 30 13:23:29 crc kubenswrapper[5039]: I0130 13:23:29.083223 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-rx74m" Jan 30 13:23:29 crc kubenswrapper[5039]: I0130 13:23:29.084403 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28rbr\" (UniqueName: \"kubernetes.io/projected/68dc52c3-d455-4a3d-b9fd-8aae22e9e7de-kube-api-access-28rbr\") pod \"glance-db-create-r9q2p\" (UID: \"68dc52c3-d455-4a3d-b9fd-8aae22e9e7de\") " pod="openstack/glance-db-create-r9q2p" Jan 30 13:23:29 crc kubenswrapper[5039]: I0130 13:23:29.090649 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5666-account-create-update-cbw62" Jan 30 13:23:29 crc kubenswrapper[5039]: I0130 13:23:29.198165 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-r9q2p" Jan 30 13:23:29 crc kubenswrapper[5039]: I0130 13:23:29.199730 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0a3587a-d7dd-4007-aff8-acfcd399496f-operator-scripts\") pod \"glance-286b-account-create-update-cg7w7\" (UID: \"c0a3587a-d7dd-4007-aff8-acfcd399496f\") " pod="openstack/glance-286b-account-create-update-cg7w7" Jan 30 13:23:29 crc kubenswrapper[5039]: I0130 13:23:29.199770 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bb6q2\" (UniqueName: \"kubernetes.io/projected/c0a3587a-d7dd-4007-aff8-acfcd399496f-kube-api-access-bb6q2\") pod \"glance-286b-account-create-update-cg7w7\" (UID: \"c0a3587a-d7dd-4007-aff8-acfcd399496f\") " pod="openstack/glance-286b-account-create-update-cg7w7" Jan 30 13:23:29 crc kubenswrapper[5039]: I0130 13:23:29.200599 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0a3587a-d7dd-4007-aff8-acfcd399496f-operator-scripts\") pod \"glance-286b-account-create-update-cg7w7\" (UID: \"c0a3587a-d7dd-4007-aff8-acfcd399496f\") " pod="openstack/glance-286b-account-create-update-cg7w7" Jan 30 13:23:29 crc kubenswrapper[5039]: I0130 13:23:29.223269 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bb6q2\" (UniqueName: \"kubernetes.io/projected/c0a3587a-d7dd-4007-aff8-acfcd399496f-kube-api-access-bb6q2\") pod \"glance-286b-account-create-update-cg7w7\" (UID: \"c0a3587a-d7dd-4007-aff8-acfcd399496f\") " pod="openstack/glance-286b-account-create-update-cg7w7" Jan 30 13:23:29 crc kubenswrapper[5039]: I0130 13:23:29.248001 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-e7d3-account-create-update-2tgv7"] Jan 30 13:23:29 crc kubenswrapper[5039]: W0130 13:23:29.251349 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ce80998_c4b6_49af_b37b_5ed6a510b704.slice/crio-5800d4a5ff8283ef342f72716b4d6ccec1f00be13c01dc96b12753274d9367cf WatchSource:0}: Error finding container 5800d4a5ff8283ef342f72716b4d6ccec1f00be13c01dc96b12753274d9367cf: Status 404 returned error can't find the container with id 5800d4a5ff8283ef342f72716b4d6ccec1f00be13c01dc96b12753274d9367cf Jan 30 13:23:29 crc kubenswrapper[5039]: I0130 13:23:29.264272 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-286b-account-create-update-cg7w7" Jan 30 13:23:29 crc kubenswrapper[5039]: I0130 13:23:29.297575 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e7d3-account-create-update-2tgv7" event={"ID":"6ce80998-c4b6-49af-b37b-5ed6a510b704","Type":"ContainerStarted","Data":"5800d4a5ff8283ef342f72716b4d6ccec1f00be13c01dc96b12753274d9367cf"} Jan 30 13:23:29 crc kubenswrapper[5039]: I0130 13:23:29.300069 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-frc4f" event={"ID":"4461ebd9-1119-41a1-94c8-cc453e06c2f3","Type":"ContainerStarted","Data":"e33d1f253aff15ba7372a8ad24babee9213ffb4a9177bfdc4de2deffc66c7b93"} Jan 30 13:23:29 crc kubenswrapper[5039]: I0130 13:23:29.300106 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-frc4f" event={"ID":"4461ebd9-1119-41a1-94c8-cc453e06c2f3","Type":"ContainerStarted","Data":"078c41aa162058e38d204b52a5149fcda1574c97ebee0a315b0a84b44780cbf6"} Jan 30 13:23:29 crc kubenswrapper[5039]: I0130 13:23:29.323562 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-frc4f" podStartSLOduration=1.323509448 podStartE2EDuration="1.323509448s" podCreationTimestamp="2026-01-30 13:23:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:23:29.320268611 +0000 UTC m=+1173.980949838" watchObservedRunningTime="2026-01-30 13:23:29.323509448 +0000 UTC m=+1173.984190685" Jan 30 13:23:29 crc kubenswrapper[5039]: I0130 13:23:29.403068 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8ada089a-5096-4658-829e-46ed96867c7e-etc-swift\") pod \"swift-storage-0\" (UID: \"8ada089a-5096-4658-829e-46ed96867c7e\") " pod="openstack/swift-storage-0" Jan 30 13:23:29 crc kubenswrapper[5039]: E0130 13:23:29.403429 5039 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 13:23:29 crc kubenswrapper[5039]: E0130 13:23:29.403519 5039 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 13:23:29 crc kubenswrapper[5039]: E0130 13:23:29.403565 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8ada089a-5096-4658-829e-46ed96867c7e-etc-swift podName:8ada089a-5096-4658-829e-46ed96867c7e nodeName:}" failed. No retries permitted until 2026-01-30 13:23:37.403551601 +0000 UTC m=+1182.064232828 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8ada089a-5096-4658-829e-46ed96867c7e-etc-swift") pod "swift-storage-0" (UID: "8ada089a-5096-4658-829e-46ed96867c7e") : configmap "swift-ring-files" not found Jan 30 13:23:29 crc kubenswrapper[5039]: I0130 13:23:29.614523 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5666-account-create-update-cbw62"] Jan 30 13:23:29 crc kubenswrapper[5039]: I0130 13:23:29.712432 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-rx74m"] Jan 30 13:23:29 crc kubenswrapper[5039]: W0130 13:23:29.721827 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2ed7c55_cfa8_44fe_94d1_3bc6232c6686.slice/crio-3ce6e6efe338f9a80feb1687a0a2e4e9144939e278882edca3c5d3fa28de52be WatchSource:0}: Error finding container 3ce6e6efe338f9a80feb1687a0a2e4e9144939e278882edca3c5d3fa28de52be: Status 404 returned error can't find the container with id 3ce6e6efe338f9a80feb1687a0a2e4e9144939e278882edca3c5d3fa28de52be Jan 30 13:23:29 crc kubenswrapper[5039]: I0130 13:23:29.820971 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-r9q2p"] Jan 30 13:23:29 crc kubenswrapper[5039]: W0130 13:23:29.892319 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68dc52c3_d455_4a3d_b9fd_8aae22e9e7de.slice/crio-ca4cc5c71f998276b9dd0f946b696c0d5898a265d135c13295017a26bbdb0557 WatchSource:0}: Error finding container ca4cc5c71f998276b9dd0f946b696c0d5898a265d135c13295017a26bbdb0557: Status 404 returned error can't find the container with id ca4cc5c71f998276b9dd0f946b696c0d5898a265d135c13295017a26bbdb0557 Jan 30 13:23:29 crc kubenswrapper[5039]: I0130 13:23:29.898709 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-g7w7q" Jan 30 13:23:29 crc kubenswrapper[5039]: I0130 13:23:29.946941 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-286b-account-create-update-cg7w7"] Jan 30 13:23:29 crc kubenswrapper[5039]: W0130 13:23:29.964181 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc0a3587a_d7dd_4007_aff8_acfcd399496f.slice/crio-2de3498a978cd561ad02b8a22e3c097d9919c7c085db1be4331983aef7bc276c WatchSource:0}: Error finding container 2de3498a978cd561ad02b8a22e3c097d9919c7c085db1be4331983aef7bc276c: Status 404 returned error can't find the container with id 2de3498a978cd561ad02b8a22e3c097d9919c7c085db1be4331983aef7bc276c Jan 30 13:23:30 crc kubenswrapper[5039]: I0130 13:23:30.018090 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cd6kw\" (UniqueName: \"kubernetes.io/projected/6f6622a1-348d-45b9-b04f-93c20ada9ad0-kube-api-access-cd6kw\") pod \"6f6622a1-348d-45b9-b04f-93c20ada9ad0\" (UID: \"6f6622a1-348d-45b9-b04f-93c20ada9ad0\") " Jan 30 13:23:30 crc kubenswrapper[5039]: I0130 13:23:30.018536 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f6622a1-348d-45b9-b04f-93c20ada9ad0-operator-scripts\") pod \"6f6622a1-348d-45b9-b04f-93c20ada9ad0\" (UID: \"6f6622a1-348d-45b9-b04f-93c20ada9ad0\") " Jan 30 13:23:30 crc kubenswrapper[5039]: I0130 13:23:30.019270 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f6622a1-348d-45b9-b04f-93c20ada9ad0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6f6622a1-348d-45b9-b04f-93c20ada9ad0" (UID: "6f6622a1-348d-45b9-b04f-93c20ada9ad0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:23:30 crc kubenswrapper[5039]: I0130 13:23:30.025331 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f6622a1-348d-45b9-b04f-93c20ada9ad0-kube-api-access-cd6kw" (OuterVolumeSpecName: "kube-api-access-cd6kw") pod "6f6622a1-348d-45b9-b04f-93c20ada9ad0" (UID: "6f6622a1-348d-45b9-b04f-93c20ada9ad0"). InnerVolumeSpecName "kube-api-access-cd6kw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:23:30 crc kubenswrapper[5039]: I0130 13:23:30.120207 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f6622a1-348d-45b9-b04f-93c20ada9ad0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:30 crc kubenswrapper[5039]: I0130 13:23:30.120242 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cd6kw\" (UniqueName: \"kubernetes.io/projected/6f6622a1-348d-45b9-b04f-93c20ada9ad0-kube-api-access-cd6kw\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:30 crc kubenswrapper[5039]: I0130 13:23:30.314387 5039 generic.go:334] "Generic (PLEG): container finished" podID="6ce80998-c4b6-49af-b37b-5ed6a510b704" containerID="2d5e0686752eac791353110faabefee2e759420442637220f24a302704e06298" exitCode=0 Jan 30 13:23:30 crc kubenswrapper[5039]: I0130 13:23:30.314473 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e7d3-account-create-update-2tgv7" event={"ID":"6ce80998-c4b6-49af-b37b-5ed6a510b704","Type":"ContainerDied","Data":"2d5e0686752eac791353110faabefee2e759420442637220f24a302704e06298"} Jan 30 13:23:30 crc kubenswrapper[5039]: I0130 13:23:30.317637 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-g7w7q" event={"ID":"6f6622a1-348d-45b9-b04f-93c20ada9ad0","Type":"ContainerDied","Data":"e547a6f74ff85b484957535d0af28d080d59c5c9820420c4102acb288ca4def3"} Jan 30 13:23:30 crc kubenswrapper[5039]: I0130 13:23:30.317676 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e547a6f74ff85b484957535d0af28d080d59c5c9820420c4102acb288ca4def3" Jan 30 13:23:30 crc kubenswrapper[5039]: I0130 13:23:30.317727 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-g7w7q" Jan 30 13:23:30 crc kubenswrapper[5039]: I0130 13:23:30.323784 5039 generic.go:334] "Generic (PLEG): container finished" podID="68dc52c3-d455-4a3d-b9fd-8aae22e9e7de" containerID="a6bc26827e64ec19585fa637a58eb72ec4ed3e9a6ef4255f135e6416c5ba0c3b" exitCode=0 Jan 30 13:23:30 crc kubenswrapper[5039]: I0130 13:23:30.323878 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-r9q2p" event={"ID":"68dc52c3-d455-4a3d-b9fd-8aae22e9e7de","Type":"ContainerDied","Data":"a6bc26827e64ec19585fa637a58eb72ec4ed3e9a6ef4255f135e6416c5ba0c3b"} Jan 30 13:23:30 crc kubenswrapper[5039]: I0130 13:23:30.323912 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-r9q2p" event={"ID":"68dc52c3-d455-4a3d-b9fd-8aae22e9e7de","Type":"ContainerStarted","Data":"ca4cc5c71f998276b9dd0f946b696c0d5898a265d135c13295017a26bbdb0557"} Jan 30 13:23:30 crc kubenswrapper[5039]: I0130 13:23:30.330739 5039 generic.go:334] "Generic (PLEG): container finished" podID="33a20c1e-b7d7-4f94-b313-58229c1c9d4e" containerID="975b00208863806579383cea7c3b8b8b32cc66e70f92441ebcf6512425326f4e" exitCode=0 Jan 30 13:23:30 crc kubenswrapper[5039]: I0130 13:23:30.330798 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5666-account-create-update-cbw62" event={"ID":"33a20c1e-b7d7-4f94-b313-58229c1c9d4e","Type":"ContainerDied","Data":"975b00208863806579383cea7c3b8b8b32cc66e70f92441ebcf6512425326f4e"} Jan 30 13:23:30 crc kubenswrapper[5039]: I0130 13:23:30.330821 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5666-account-create-update-cbw62" event={"ID":"33a20c1e-b7d7-4f94-b313-58229c1c9d4e","Type":"ContainerStarted","Data":"3ec4d43b74d3c28bb011a0bba6a4cb96d0ef981948efdbf5032b6b0c5ebc3ba1"} Jan 30 13:23:30 crc kubenswrapper[5039]: I0130 13:23:30.334158 5039 generic.go:334] "Generic (PLEG): container finished" podID="4461ebd9-1119-41a1-94c8-cc453e06c2f3" containerID="e33d1f253aff15ba7372a8ad24babee9213ffb4a9177bfdc4de2deffc66c7b93" exitCode=0 Jan 30 13:23:30 crc kubenswrapper[5039]: I0130 13:23:30.334201 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-frc4f" event={"ID":"4461ebd9-1119-41a1-94c8-cc453e06c2f3","Type":"ContainerDied","Data":"e33d1f253aff15ba7372a8ad24babee9213ffb4a9177bfdc4de2deffc66c7b93"} Jan 30 13:23:30 crc kubenswrapper[5039]: I0130 13:23:30.335447 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-286b-account-create-update-cg7w7" event={"ID":"c0a3587a-d7dd-4007-aff8-acfcd399496f","Type":"ContainerStarted","Data":"bf1f328944ff86461f76ebef421202ae6a67438091fba41b262aba037fe0b12d"} Jan 30 13:23:30 crc kubenswrapper[5039]: I0130 13:23:30.335474 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-286b-account-create-update-cg7w7" event={"ID":"c0a3587a-d7dd-4007-aff8-acfcd399496f","Type":"ContainerStarted","Data":"2de3498a978cd561ad02b8a22e3c097d9919c7c085db1be4331983aef7bc276c"} Jan 30 13:23:30 crc kubenswrapper[5039]: I0130 13:23:30.337197 5039 generic.go:334] "Generic (PLEG): container finished" podID="b2ed7c55-cfa8-44fe-94d1-3bc6232c6686" containerID="16cee89dddde0e71b7455bb7ed94c9ec4e8236e06a37beadcd22b762c6335620" exitCode=0 Jan 30 13:23:30 crc kubenswrapper[5039]: I0130 13:23:30.337221 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-rx74m" 
event={"ID":"b2ed7c55-cfa8-44fe-94d1-3bc6232c6686","Type":"ContainerDied","Data":"16cee89dddde0e71b7455bb7ed94c9ec4e8236e06a37beadcd22b762c6335620"} Jan 30 13:23:30 crc kubenswrapper[5039]: I0130 13:23:30.337234 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-rx74m" event={"ID":"b2ed7c55-cfa8-44fe-94d1-3bc6232c6686","Type":"ContainerStarted","Data":"3ce6e6efe338f9a80feb1687a0a2e4e9144939e278882edca3c5d3fa28de52be"} Jan 30 13:23:30 crc kubenswrapper[5039]: I0130 13:23:30.829319 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-lcwd2" Jan 30 13:23:30 crc kubenswrapper[5039]: I0130 13:23:30.899400 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-7m45s"] Jan 30 13:23:30 crc kubenswrapper[5039]: I0130 13:23:30.899662 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-7m45s" podUID="e976e524-ebac-499e-abdb-2a35d1cd1c86" containerName="dnsmasq-dns" containerID="cri-o://05cb537b8de9e9b4ce1d650f75dc2488156515798186af357cf0a32b2ad2804b" gracePeriod=10 Jan 30 13:23:31 crc kubenswrapper[5039]: I0130 13:23:31.347116 5039 generic.go:334] "Generic (PLEG): container finished" podID="e976e524-ebac-499e-abdb-2a35d1cd1c86" containerID="05cb537b8de9e9b4ce1d650f75dc2488156515798186af357cf0a32b2ad2804b" exitCode=0 Jan 30 13:23:31 crc kubenswrapper[5039]: I0130 13:23:31.347388 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-7m45s" event={"ID":"e976e524-ebac-499e-abdb-2a35d1cd1c86","Type":"ContainerDied","Data":"05cb537b8de9e9b4ce1d650f75dc2488156515798186af357cf0a32b2ad2804b"} Jan 30 13:23:31 crc kubenswrapper[5039]: I0130 13:23:31.347419 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-7m45s" event={"ID":"e976e524-ebac-499e-abdb-2a35d1cd1c86","Type":"ContainerDied","Data":"b6d364bca7efe950f8d13202b949a9d6f1a76008118d580c314b7ed6ba999ae1"} Jan 30 13:23:31 crc kubenswrapper[5039]: I0130 13:23:31.347431 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6d364bca7efe950f8d13202b949a9d6f1a76008118d580c314b7ed6ba999ae1" Jan 30 13:23:31 crc kubenswrapper[5039]: I0130 13:23:31.349651 5039 generic.go:334] "Generic (PLEG): container finished" podID="c0a3587a-d7dd-4007-aff8-acfcd399496f" containerID="bf1f328944ff86461f76ebef421202ae6a67438091fba41b262aba037fe0b12d" exitCode=0 Jan 30 13:23:31 crc kubenswrapper[5039]: I0130 13:23:31.349923 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-286b-account-create-update-cg7w7" event={"ID":"c0a3587a-d7dd-4007-aff8-acfcd399496f","Type":"ContainerDied","Data":"bf1f328944ff86461f76ebef421202ae6a67438091fba41b262aba037fe0b12d"} Jan 30 13:23:31 crc kubenswrapper[5039]: I0130 13:23:31.423438 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-7m45s" Jan 30 13:23:31 crc kubenswrapper[5039]: I0130 13:23:31.554641 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e976e524-ebac-499e-abdb-2a35d1cd1c86-dns-svc\") pod \"e976e524-ebac-499e-abdb-2a35d1cd1c86\" (UID: \"e976e524-ebac-499e-abdb-2a35d1cd1c86\") " Jan 30 13:23:31 crc kubenswrapper[5039]: I0130 13:23:31.554696 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e976e524-ebac-499e-abdb-2a35d1cd1c86-ovsdbserver-sb\") pod \"e976e524-ebac-499e-abdb-2a35d1cd1c86\" (UID: \"e976e524-ebac-499e-abdb-2a35d1cd1c86\") " Jan 30 13:23:31 crc kubenswrapper[5039]: I0130 13:23:31.554741 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e976e524-ebac-499e-abdb-2a35d1cd1c86-ovsdbserver-nb\") pod \"e976e524-ebac-499e-abdb-2a35d1cd1c86\" (UID: \"e976e524-ebac-499e-abdb-2a35d1cd1c86\") " Jan 30 13:23:31 crc kubenswrapper[5039]: I0130 13:23:31.554825 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvtwb\" (UniqueName: \"kubernetes.io/projected/e976e524-ebac-499e-abdb-2a35d1cd1c86-kube-api-access-xvtwb\") pod \"e976e524-ebac-499e-abdb-2a35d1cd1c86\" (UID: \"e976e524-ebac-499e-abdb-2a35d1cd1c86\") " Jan 30 13:23:31 crc kubenswrapper[5039]: I0130 13:23:31.554874 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e976e524-ebac-499e-abdb-2a35d1cd1c86-config\") pod \"e976e524-ebac-499e-abdb-2a35d1cd1c86\" (UID: \"e976e524-ebac-499e-abdb-2a35d1cd1c86\") " Jan 30 13:23:31 crc kubenswrapper[5039]: I0130 13:23:31.569079 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e976e524-ebac-499e-abdb-2a35d1cd1c86-kube-api-access-xvtwb" (OuterVolumeSpecName: "kube-api-access-xvtwb") pod "e976e524-ebac-499e-abdb-2a35d1cd1c86" (UID: "e976e524-ebac-499e-abdb-2a35d1cd1c86"). InnerVolumeSpecName "kube-api-access-xvtwb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:23:31 crc kubenswrapper[5039]: I0130 13:23:31.606632 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e976e524-ebac-499e-abdb-2a35d1cd1c86-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e976e524-ebac-499e-abdb-2a35d1cd1c86" (UID: "e976e524-ebac-499e-abdb-2a35d1cd1c86"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:23:31 crc kubenswrapper[5039]: I0130 13:23:31.637432 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e976e524-ebac-499e-abdb-2a35d1cd1c86-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e976e524-ebac-499e-abdb-2a35d1cd1c86" (UID: "e976e524-ebac-499e-abdb-2a35d1cd1c86"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:23:31 crc kubenswrapper[5039]: I0130 13:23:31.646140 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e976e524-ebac-499e-abdb-2a35d1cd1c86-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e976e524-ebac-499e-abdb-2a35d1cd1c86" (UID: "e976e524-ebac-499e-abdb-2a35d1cd1c86"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:23:31 crc kubenswrapper[5039]: I0130 13:23:31.652081 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e976e524-ebac-499e-abdb-2a35d1cd1c86-config" (OuterVolumeSpecName: "config") pod "e976e524-ebac-499e-abdb-2a35d1cd1c86" (UID: "e976e524-ebac-499e-abdb-2a35d1cd1c86"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:23:31 crc kubenswrapper[5039]: I0130 13:23:31.657278 5039 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e976e524-ebac-499e-abdb-2a35d1cd1c86-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:31 crc kubenswrapper[5039]: I0130 13:23:31.657447 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e976e524-ebac-499e-abdb-2a35d1cd1c86-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:31 crc kubenswrapper[5039]: I0130 13:23:31.657552 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e976e524-ebac-499e-abdb-2a35d1cd1c86-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:31 crc kubenswrapper[5039]: I0130 13:23:31.657690 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvtwb\" (UniqueName: \"kubernetes.io/projected/e976e524-ebac-499e-abdb-2a35d1cd1c86-kube-api-access-xvtwb\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:31 crc kubenswrapper[5039]: I0130 13:23:31.657789 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e976e524-ebac-499e-abdb-2a35d1cd1c86-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:31 crc kubenswrapper[5039]: I0130 13:23:31.746842 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5666-account-create-update-cbw62" Jan 30 13:23:31 crc kubenswrapper[5039]: I0130 13:23:31.860267 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l665x\" (UniqueName: \"kubernetes.io/projected/33a20c1e-b7d7-4f94-b313-58229c1c9d4e-kube-api-access-l665x\") pod \"33a20c1e-b7d7-4f94-b313-58229c1c9d4e\" (UID: \"33a20c1e-b7d7-4f94-b313-58229c1c9d4e\") " Jan 30 13:23:31 crc kubenswrapper[5039]: I0130 13:23:31.860326 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33a20c1e-b7d7-4f94-b313-58229c1c9d4e-operator-scripts\") pod \"33a20c1e-b7d7-4f94-b313-58229c1c9d4e\" (UID: \"33a20c1e-b7d7-4f94-b313-58229c1c9d4e\") " Jan 30 13:23:31 crc kubenswrapper[5039]: I0130 13:23:31.860989 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33a20c1e-b7d7-4f94-b313-58229c1c9d4e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "33a20c1e-b7d7-4f94-b313-58229c1c9d4e" (UID: "33a20c1e-b7d7-4f94-b313-58229c1c9d4e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:23:31 crc kubenswrapper[5039]: I0130 13:23:31.864433 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33a20c1e-b7d7-4f94-b313-58229c1c9d4e-kube-api-access-l665x" (OuterVolumeSpecName: "kube-api-access-l665x") pod "33a20c1e-b7d7-4f94-b313-58229c1c9d4e" (UID: "33a20c1e-b7d7-4f94-b313-58229c1c9d4e"). 
InnerVolumeSpecName "kube-api-access-l665x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:23:31 crc kubenswrapper[5039]: I0130 13:23:31.915498 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-frc4f" Jan 30 13:23:31 crc kubenswrapper[5039]: I0130 13:23:31.943436 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e7d3-account-create-update-2tgv7" Jan 30 13:23:31 crc kubenswrapper[5039]: I0130 13:23:31.947468 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-286b-account-create-update-cg7w7" Jan 30 13:23:31 crc kubenswrapper[5039]: I0130 13:23:31.953593 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-rx74m" Jan 30 13:23:31 crc kubenswrapper[5039]: I0130 13:23:31.962286 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l665x\" (UniqueName: \"kubernetes.io/projected/33a20c1e-b7d7-4f94-b313-58229c1c9d4e-kube-api-access-l665x\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:31 crc kubenswrapper[5039]: I0130 13:23:31.962324 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33a20c1e-b7d7-4f94-b313-58229c1c9d4e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:31 crc kubenswrapper[5039]: I0130 13:23:31.969727 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-r9q2p" Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.054959 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-g7w7q"] Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.063634 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bb6q2\" (UniqueName: \"kubernetes.io/projected/c0a3587a-d7dd-4007-aff8-acfcd399496f-kube-api-access-bb6q2\") pod \"c0a3587a-d7dd-4007-aff8-acfcd399496f\" (UID: \"c0a3587a-d7dd-4007-aff8-acfcd399496f\") " Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.064200 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bchl2\" (UniqueName: \"kubernetes.io/projected/6ce80998-c4b6-49af-b37b-5ed6a510b704-kube-api-access-bchl2\") pod \"6ce80998-c4b6-49af-b37b-5ed6a510b704\" (UID: \"6ce80998-c4b6-49af-b37b-5ed6a510b704\") " Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.064275 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ct45\" (UniqueName: \"kubernetes.io/projected/4461ebd9-1119-41a1-94c8-cc453e06c2f3-kube-api-access-2ct45\") pod \"4461ebd9-1119-41a1-94c8-cc453e06c2f3\" (UID: \"4461ebd9-1119-41a1-94c8-cc453e06c2f3\") " Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.064343 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ce80998-c4b6-49af-b37b-5ed6a510b704-operator-scripts\") pod \"6ce80998-c4b6-49af-b37b-5ed6a510b704\" (UID: \"6ce80998-c4b6-49af-b37b-5ed6a510b704\") " Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.064418 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2ed7c55-cfa8-44fe-94d1-3bc6232c6686-operator-scripts\") pod \"b2ed7c55-cfa8-44fe-94d1-3bc6232c6686\" 
(UID: \"b2ed7c55-cfa8-44fe-94d1-3bc6232c6686\") " Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.064454 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0a3587a-d7dd-4007-aff8-acfcd399496f-operator-scripts\") pod \"c0a3587a-d7dd-4007-aff8-acfcd399496f\" (UID: \"c0a3587a-d7dd-4007-aff8-acfcd399496f\") " Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.064496 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b57c2\" (UniqueName: \"kubernetes.io/projected/b2ed7c55-cfa8-44fe-94d1-3bc6232c6686-kube-api-access-b57c2\") pod \"b2ed7c55-cfa8-44fe-94d1-3bc6232c6686\" (UID: \"b2ed7c55-cfa8-44fe-94d1-3bc6232c6686\") " Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.064586 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4461ebd9-1119-41a1-94c8-cc453e06c2f3-operator-scripts\") pod \"4461ebd9-1119-41a1-94c8-cc453e06c2f3\" (UID: \"4461ebd9-1119-41a1-94c8-cc453e06c2f3\") " Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.064957 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ce80998-c4b6-49af-b37b-5ed6a510b704-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6ce80998-c4b6-49af-b37b-5ed6a510b704" (UID: "6ce80998-c4b6-49af-b37b-5ed6a510b704"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.064971 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0a3587a-d7dd-4007-aff8-acfcd399496f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c0a3587a-d7dd-4007-aff8-acfcd399496f" (UID: "c0a3587a-d7dd-4007-aff8-acfcd399496f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.065449 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4461ebd9-1119-41a1-94c8-cc453e06c2f3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4461ebd9-1119-41a1-94c8-cc453e06c2f3" (UID: "4461ebd9-1119-41a1-94c8-cc453e06c2f3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.065723 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2ed7c55-cfa8-44fe-94d1-3bc6232c6686-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b2ed7c55-cfa8-44fe-94d1-3bc6232c6686" (UID: "b2ed7c55-cfa8-44fe-94d1-3bc6232c6686"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.066539 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-g7w7q"] Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.067664 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ce80998-c4b6-49af-b37b-5ed6a510b704-kube-api-access-bchl2" (OuterVolumeSpecName: "kube-api-access-bchl2") pod "6ce80998-c4b6-49af-b37b-5ed6a510b704" (UID: "6ce80998-c4b6-49af-b37b-5ed6a510b704"). InnerVolumeSpecName "kube-api-access-bchl2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.068466 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0a3587a-d7dd-4007-aff8-acfcd399496f-kube-api-access-bb6q2" (OuterVolumeSpecName: "kube-api-access-bb6q2") pod "c0a3587a-d7dd-4007-aff8-acfcd399496f" (UID: "c0a3587a-d7dd-4007-aff8-acfcd399496f"). InnerVolumeSpecName "kube-api-access-bb6q2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.068627 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2ed7c55-cfa8-44fe-94d1-3bc6232c6686-kube-api-access-b57c2" (OuterVolumeSpecName: "kube-api-access-b57c2") pod "b2ed7c55-cfa8-44fe-94d1-3bc6232c6686" (UID: "b2ed7c55-cfa8-44fe-94d1-3bc6232c6686"). InnerVolumeSpecName "kube-api-access-b57c2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.068744 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4461ebd9-1119-41a1-94c8-cc453e06c2f3-kube-api-access-2ct45" (OuterVolumeSpecName: "kube-api-access-2ct45") pod "4461ebd9-1119-41a1-94c8-cc453e06c2f3" (UID: "4461ebd9-1119-41a1-94c8-cc453e06c2f3"). InnerVolumeSpecName "kube-api-access-2ct45". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.102994 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f6622a1-348d-45b9-b04f-93c20ada9ad0" path="/var/lib/kubelet/pods/6f6622a1-348d-45b9-b04f-93c20ada9ad0/volumes" Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.165642 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28rbr\" (UniqueName: \"kubernetes.io/projected/68dc52c3-d455-4a3d-b9fd-8aae22e9e7de-kube-api-access-28rbr\") pod \"68dc52c3-d455-4a3d-b9fd-8aae22e9e7de\" (UID: \"68dc52c3-d455-4a3d-b9fd-8aae22e9e7de\") " Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.165698 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68dc52c3-d455-4a3d-b9fd-8aae22e9e7de-operator-scripts\") pod \"68dc52c3-d455-4a3d-b9fd-8aae22e9e7de\" (UID: \"68dc52c3-d455-4a3d-b9fd-8aae22e9e7de\") " Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.166077 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4461ebd9-1119-41a1-94c8-cc453e06c2f3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.166095 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bb6q2\" (UniqueName: \"kubernetes.io/projected/c0a3587a-d7dd-4007-aff8-acfcd399496f-kube-api-access-bb6q2\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.166108 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bchl2\" (UniqueName: \"kubernetes.io/projected/6ce80998-c4b6-49af-b37b-5ed6a510b704-kube-api-access-bchl2\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.166120 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ct45\" (UniqueName: \"kubernetes.io/projected/4461ebd9-1119-41a1-94c8-cc453e06c2f3-kube-api-access-2ct45\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:32 
crc kubenswrapper[5039]: I0130 13:23:32.166130 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ce80998-c4b6-49af-b37b-5ed6a510b704-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.166140 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2ed7c55-cfa8-44fe-94d1-3bc6232c6686-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.166153 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0a3587a-d7dd-4007-aff8-acfcd399496f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.166164 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b57c2\" (UniqueName: \"kubernetes.io/projected/b2ed7c55-cfa8-44fe-94d1-3bc6232c6686-kube-api-access-b57c2\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.166480 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68dc52c3-d455-4a3d-b9fd-8aae22e9e7de-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "68dc52c3-d455-4a3d-b9fd-8aae22e9e7de" (UID: "68dc52c3-d455-4a3d-b9fd-8aae22e9e7de"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.169928 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68dc52c3-d455-4a3d-b9fd-8aae22e9e7de-kube-api-access-28rbr" (OuterVolumeSpecName: "kube-api-access-28rbr") pod "68dc52c3-d455-4a3d-b9fd-8aae22e9e7de" (UID: "68dc52c3-d455-4a3d-b9fd-8aae22e9e7de"). InnerVolumeSpecName "kube-api-access-28rbr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.267224 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28rbr\" (UniqueName: \"kubernetes.io/projected/68dc52c3-d455-4a3d-b9fd-8aae22e9e7de-kube-api-access-28rbr\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.267259 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68dc52c3-d455-4a3d-b9fd-8aae22e9e7de-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:32 crc kubenswrapper[5039]: E0130 13:23:32.269145 5039 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ce80998_c4b6_49af_b37b_5ed6a510b704.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc0a3587a_d7dd_4007_aff8_acfcd399496f.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33a20c1e_b7d7_4f94_b313_58229c1c9d4e.slice/crio-3ec4d43b74d3c28bb011a0bba6a4cb96d0ef981948efdbf5032b6b0c5ebc3ba1\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode976e524_ebac_499e_abdb_2a35d1cd1c86.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33a20c1e_b7d7_4f94_b313_58229c1c9d4e.slice\": RecentStats: unable to find data in memory cache]" Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.368642 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-rx74m" Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.368711 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-rx74m" event={"ID":"b2ed7c55-cfa8-44fe-94d1-3bc6232c6686","Type":"ContainerDied","Data":"3ce6e6efe338f9a80feb1687a0a2e4e9144939e278882edca3c5d3fa28de52be"} Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.369423 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ce6e6efe338f9a80feb1687a0a2e4e9144939e278882edca3c5d3fa28de52be" Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.370660 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e7d3-account-create-update-2tgv7" event={"ID":"6ce80998-c4b6-49af-b37b-5ed6a510b704","Type":"ContainerDied","Data":"5800d4a5ff8283ef342f72716b4d6ccec1f00be13c01dc96b12753274d9367cf"} Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.370689 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5800d4a5ff8283ef342f72716b4d6ccec1f00be13c01dc96b12753274d9367cf" Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.370722 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-e7d3-account-create-update-2tgv7"
Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.372448 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-r9q2p" event={"ID":"68dc52c3-d455-4a3d-b9fd-8aae22e9e7de","Type":"ContainerDied","Data":"ca4cc5c71f998276b9dd0f946b696c0d5898a265d135c13295017a26bbdb0557"}
Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.372491 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca4cc5c71f998276b9dd0f946b696c0d5898a265d135c13295017a26bbdb0557"
Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.372466 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-r9q2p"
Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.374933 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5666-account-create-update-cbw62" event={"ID":"33a20c1e-b7d7-4f94-b313-58229c1c9d4e","Type":"ContainerDied","Data":"3ec4d43b74d3c28bb011a0bba6a4cb96d0ef981948efdbf5032b6b0c5ebc3ba1"}
Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.374968 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ec4d43b74d3c28bb011a0bba6a4cb96d0ef981948efdbf5032b6b0c5ebc3ba1"
Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.375176 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5666-account-create-update-cbw62"
Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.377645 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-frc4f"
Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.378310 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-frc4f" event={"ID":"4461ebd9-1119-41a1-94c8-cc453e06c2f3","Type":"ContainerDied","Data":"078c41aa162058e38d204b52a5149fcda1574c97ebee0a315b0a84b44780cbf6"}
Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.383419 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-7m45s"
Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.383622 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-286b-account-create-update-cg7w7"
Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.383670 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="078c41aa162058e38d204b52a5149fcda1574c97ebee0a315b0a84b44780cbf6"
Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.384210 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-286b-account-create-update-cg7w7" event={"ID":"c0a3587a-d7dd-4007-aff8-acfcd399496f","Type":"ContainerDied","Data":"2de3498a978cd561ad02b8a22e3c097d9919c7c085db1be4331983aef7bc276c"}
Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.385503 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2de3498a978cd561ad02b8a22e3c097d9919c7c085db1be4331983aef7bc276c"
Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.431510 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-7m45s"]
Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.440516 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-7m45s"]
Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.105620 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e976e524-ebac-499e-abdb-2a35d1cd1c86" path="/var/lib/kubelet/pods/e976e524-ebac-499e-abdb-2a35d1cd1c86/volumes"
Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.200921 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-hpk2s"]
Jan 30 13:23:34 crc kubenswrapper[5039]: E0130 13:23:34.201348 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e976e524-ebac-499e-abdb-2a35d1cd1c86" containerName="dnsmasq-dns"
Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.201371 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="e976e524-ebac-499e-abdb-2a35d1cd1c86" containerName="dnsmasq-dns"
Jan 30 13:23:34 crc kubenswrapper[5039]: E0130 13:23:34.201384 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4461ebd9-1119-41a1-94c8-cc453e06c2f3" containerName="mariadb-database-create"
Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.201392 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="4461ebd9-1119-41a1-94c8-cc453e06c2f3" containerName="mariadb-database-create"
Jan 30 13:23:34 crc kubenswrapper[5039]: E0130 13:23:34.201409 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f6622a1-348d-45b9-b04f-93c20ada9ad0" containerName="mariadb-account-create-update"
Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.201417 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f6622a1-348d-45b9-b04f-93c20ada9ad0" containerName="mariadb-account-create-update"
Jan 30 13:23:34 crc kubenswrapper[5039]: E0130 13:23:34.201426 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e976e524-ebac-499e-abdb-2a35d1cd1c86" containerName="init"
Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.201433 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="e976e524-ebac-499e-abdb-2a35d1cd1c86" containerName="init"
Jan 30 13:23:34 crc kubenswrapper[5039]: E0130 13:23:34.201447 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33a20c1e-b7d7-4f94-b313-58229c1c9d4e" containerName="mariadb-account-create-update"
Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.201455 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="33a20c1e-b7d7-4f94-b313-58229c1c9d4e" containerName="mariadb-account-create-update"
Jan 30 13:23:34 crc kubenswrapper[5039]: E0130 13:23:34.201481 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ce80998-c4b6-49af-b37b-5ed6a510b704" containerName="mariadb-account-create-update"
Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.201489 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ce80998-c4b6-49af-b37b-5ed6a510b704" containerName="mariadb-account-create-update"
Jan 30 13:23:34 crc kubenswrapper[5039]: E0130 13:23:34.201509 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2ed7c55-cfa8-44fe-94d1-3bc6232c6686" containerName="mariadb-database-create"
Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.201516 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2ed7c55-cfa8-44fe-94d1-3bc6232c6686" containerName="mariadb-database-create"
Jan 30 13:23:34 crc kubenswrapper[5039]: E0130 13:23:34.201527 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0a3587a-d7dd-4007-aff8-acfcd399496f" containerName="mariadb-account-create-update"
Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.201536 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0a3587a-d7dd-4007-aff8-acfcd399496f" containerName="mariadb-account-create-update"
Jan 30 13:23:34 crc kubenswrapper[5039]: E0130 13:23:34.201548 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68dc52c3-d455-4a3d-b9fd-8aae22e9e7de" containerName="mariadb-database-create"
Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.201554 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="68dc52c3-d455-4a3d-b9fd-8aae22e9e7de" containerName="mariadb-database-create"
Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.201728 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f6622a1-348d-45b9-b04f-93c20ada9ad0" containerName="mariadb-account-create-update"
Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.201748 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="e976e524-ebac-499e-abdb-2a35d1cd1c86" containerName="dnsmasq-dns"
Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.201759 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="33a20c1e-b7d7-4f94-b313-58229c1c9d4e" containerName="mariadb-account-create-update"
Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.201770 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="68dc52c3-d455-4a3d-b9fd-8aae22e9e7de" containerName="mariadb-database-create"
Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.201785 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ce80998-c4b6-49af-b37b-5ed6a510b704" containerName="mariadb-account-create-update"
Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.201794 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0a3587a-d7dd-4007-aff8-acfcd399496f" containerName="mariadb-account-create-update"
Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.201807 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="4461ebd9-1119-41a1-94c8-cc453e06c2f3" containerName="mariadb-database-create"
Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.201820 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2ed7c55-cfa8-44fe-94d1-3bc6232c6686" containerName="mariadb-database-create"
Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.202444 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-hpk2s"
Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.204540 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data"
Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.205961 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-zwcjb"
Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.215304 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-hpk2s"]
Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.305415 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cb443d1-8938-47af-ab3b-1912d9e72f4f-combined-ca-bundle\") pod \"glance-db-sync-hpk2s\" (UID: \"3cb443d1-8938-47af-ab3b-1912d9e72f4f\") " pod="openstack/glance-db-sync-hpk2s"
Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.305469 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3cb443d1-8938-47af-ab3b-1912d9e72f4f-db-sync-config-data\") pod \"glance-db-sync-hpk2s\" (UID: \"3cb443d1-8938-47af-ab3b-1912d9e72f4f\") " pod="openstack/glance-db-sync-hpk2s"
Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.305789 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cb443d1-8938-47af-ab3b-1912d9e72f4f-config-data\") pod \"glance-db-sync-hpk2s\" (UID: \"3cb443d1-8938-47af-ab3b-1912d9e72f4f\") " pod="openstack/glance-db-sync-hpk2s"
Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.305865 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xtff\" (UniqueName: \"kubernetes.io/projected/3cb443d1-8938-47af-ab3b-1912d9e72f4f-kube-api-access-9xtff\") pod \"glance-db-sync-hpk2s\" (UID: \"3cb443d1-8938-47af-ab3b-1912d9e72f4f\") " pod="openstack/glance-db-sync-hpk2s"
Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.403525 5039 generic.go:334] "Generic (PLEG): container finished" podID="c7db6f42-583a-450d-b142-ec7c5ae4eee0" containerID="efda310ff742ee8493a8e0fc6890efda0722835d6cda9241536cfc113fb172f2" exitCode=0
Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.403582 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-6fssn" event={"ID":"c7db6f42-583a-450d-b142-ec7c5ae4eee0","Type":"ContainerDied","Data":"efda310ff742ee8493a8e0fc6890efda0722835d6cda9241536cfc113fb172f2"}
Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.408735 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cb443d1-8938-47af-ab3b-1912d9e72f4f-combined-ca-bundle\") pod \"glance-db-sync-hpk2s\" (UID: \"3cb443d1-8938-47af-ab3b-1912d9e72f4f\") " pod="openstack/glance-db-sync-hpk2s"
Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.408807 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3cb443d1-8938-47af-ab3b-1912d9e72f4f-db-sync-config-data\") pod \"glance-db-sync-hpk2s\" (UID: \"3cb443d1-8938-47af-ab3b-1912d9e72f4f\") " pod="openstack/glance-db-sync-hpk2s"
Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.408921 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cb443d1-8938-47af-ab3b-1912d9e72f4f-config-data\") pod \"glance-db-sync-hpk2s\" (UID: \"3cb443d1-8938-47af-ab3b-1912d9e72f4f\") " pod="openstack/glance-db-sync-hpk2s"
Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.408950 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xtff\" (UniqueName: \"kubernetes.io/projected/3cb443d1-8938-47af-ab3b-1912d9e72f4f-kube-api-access-9xtff\") pod \"glance-db-sync-hpk2s\" (UID: \"3cb443d1-8938-47af-ab3b-1912d9e72f4f\") " pod="openstack/glance-db-sync-hpk2s"
Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.416382 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cb443d1-8938-47af-ab3b-1912d9e72f4f-config-data\") pod \"glance-db-sync-hpk2s\" (UID: \"3cb443d1-8938-47af-ab3b-1912d9e72f4f\") " pod="openstack/glance-db-sync-hpk2s"
Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.418608 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3cb443d1-8938-47af-ab3b-1912d9e72f4f-db-sync-config-data\") pod \"glance-db-sync-hpk2s\" (UID: \"3cb443d1-8938-47af-ab3b-1912d9e72f4f\") " pod="openstack/glance-db-sync-hpk2s"
Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.419094 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cb443d1-8938-47af-ab3b-1912d9e72f4f-combined-ca-bundle\") pod \"glance-db-sync-hpk2s\" (UID: \"3cb443d1-8938-47af-ab3b-1912d9e72f4f\") " pod="openstack/glance-db-sync-hpk2s"
Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.445875 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xtff\" (UniqueName: \"kubernetes.io/projected/3cb443d1-8938-47af-ab3b-1912d9e72f4f-kube-api-access-9xtff\") pod \"glance-db-sync-hpk2s\" (UID: \"3cb443d1-8938-47af-ab3b-1912d9e72f4f\") " pod="openstack/glance-db-sync-hpk2s"
Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.516584 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-hpk2s"
Jan 30 13:23:35 crc kubenswrapper[5039]: I0130 13:23:35.029983 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-hpk2s"]
Jan 30 13:23:35 crc kubenswrapper[5039]: I0130 13:23:35.412791 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-hpk2s" event={"ID":"3cb443d1-8938-47af-ab3b-1912d9e72f4f","Type":"ContainerStarted","Data":"f249a17cf52c2a4dd7cc7ecc55de1c2586757e11717a969a8305e2a930a6306b"}
Jan 30 13:23:35 crc kubenswrapper[5039]: I0130 13:23:35.738468 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-6fssn"
Jan 30 13:23:35 crc kubenswrapper[5039]: I0130 13:23:35.935791 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7gp8\" (UniqueName: \"kubernetes.io/projected/c7db6f42-583a-450d-b142-ec7c5ae4eee0-kube-api-access-v7gp8\") pod \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\" (UID: \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\") "
Jan 30 13:23:35 crc kubenswrapper[5039]: I0130 13:23:35.935923 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c7db6f42-583a-450d-b142-ec7c5ae4eee0-ring-data-devices\") pod \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\" (UID: \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\") "
Jan 30 13:23:35 crc kubenswrapper[5039]: I0130 13:23:35.935948 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c7db6f42-583a-450d-b142-ec7c5ae4eee0-dispersionconf\") pod \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\" (UID: \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\") "
Jan 30 13:23:35 crc kubenswrapper[5039]: I0130 13:23:35.936056 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c7db6f42-583a-450d-b142-ec7c5ae4eee0-etc-swift\") pod \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\" (UID: \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\") "
Jan 30 13:23:35 crc kubenswrapper[5039]: I0130 13:23:35.936105 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7db6f42-583a-450d-b142-ec7c5ae4eee0-combined-ca-bundle\") pod \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\" (UID: \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\") "
Jan 30 13:23:35 crc kubenswrapper[5039]: I0130 13:23:35.936130 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c7db6f42-583a-450d-b142-ec7c5ae4eee0-swiftconf\") pod \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\" (UID: \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\") "
Jan 30 13:23:35 crc kubenswrapper[5039]: I0130 13:23:35.936159 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c7db6f42-583a-450d-b142-ec7c5ae4eee0-scripts\") pod \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\" (UID: \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\") "
Jan 30 13:23:35 crc kubenswrapper[5039]: I0130 13:23:35.937057 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7db6f42-583a-450d-b142-ec7c5ae4eee0-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "c7db6f42-583a-450d-b142-ec7c5ae4eee0" (UID: "c7db6f42-583a-450d-b142-ec7c5ae4eee0"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:23:35 crc kubenswrapper[5039]: I0130 13:23:35.937127 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7db6f42-583a-450d-b142-ec7c5ae4eee0-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "c7db6f42-583a-450d-b142-ec7c5ae4eee0" (UID: "c7db6f42-583a-450d-b142-ec7c5ae4eee0"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 13:23:35 crc kubenswrapper[5039]: I0130 13:23:35.951282 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7db6f42-583a-450d-b142-ec7c5ae4eee0-kube-api-access-v7gp8" (OuterVolumeSpecName: "kube-api-access-v7gp8") pod "c7db6f42-583a-450d-b142-ec7c5ae4eee0" (UID: "c7db6f42-583a-450d-b142-ec7c5ae4eee0"). InnerVolumeSpecName "kube-api-access-v7gp8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:23:35 crc kubenswrapper[5039]: I0130 13:23:35.954239 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7db6f42-583a-450d-b142-ec7c5ae4eee0-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "c7db6f42-583a-450d-b142-ec7c5ae4eee0" (UID: "c7db6f42-583a-450d-b142-ec7c5ae4eee0"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:23:35 crc kubenswrapper[5039]: I0130 13:23:35.955162 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7db6f42-583a-450d-b142-ec7c5ae4eee0-scripts" (OuterVolumeSpecName: "scripts") pod "c7db6f42-583a-450d-b142-ec7c5ae4eee0" (UID: "c7db6f42-583a-450d-b142-ec7c5ae4eee0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:23:35 crc kubenswrapper[5039]: I0130 13:23:35.960199 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7db6f42-583a-450d-b142-ec7c5ae4eee0-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "c7db6f42-583a-450d-b142-ec7c5ae4eee0" (UID: "c7db6f42-583a-450d-b142-ec7c5ae4eee0"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:23:35 crc kubenswrapper[5039]: I0130 13:23:35.972474 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7db6f42-583a-450d-b142-ec7c5ae4eee0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c7db6f42-583a-450d-b142-ec7c5ae4eee0" (UID: "c7db6f42-583a-450d-b142-ec7c5ae4eee0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:23:36 crc kubenswrapper[5039]: I0130 13:23:36.038334 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7db6f42-583a-450d-b142-ec7c5ae4eee0-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 13:23:36 crc kubenswrapper[5039]: I0130 13:23:36.038531 5039 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c7db6f42-583a-450d-b142-ec7c5ae4eee0-swiftconf\") on node \"crc\" DevicePath \"\""
Jan 30 13:23:36 crc kubenswrapper[5039]: I0130 13:23:36.038606 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c7db6f42-583a-450d-b142-ec7c5ae4eee0-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 13:23:36 crc kubenswrapper[5039]: I0130 13:23:36.038660 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7gp8\" (UniqueName: \"kubernetes.io/projected/c7db6f42-583a-450d-b142-ec7c5ae4eee0-kube-api-access-v7gp8\") on node \"crc\" DevicePath \"\""
Jan 30 13:23:36 crc kubenswrapper[5039]: I0130 13:23:36.038713 5039 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c7db6f42-583a-450d-b142-ec7c5ae4eee0-ring-data-devices\") on node \"crc\" DevicePath \"\""
Jan 30 13:23:36 crc kubenswrapper[5039]: I0130 13:23:36.038804 5039 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c7db6f42-583a-450d-b142-ec7c5ae4eee0-dispersionconf\") on node \"crc\" DevicePath \"\""
Jan 30 13:23:36 crc kubenswrapper[5039]: I0130 13:23:36.038858 5039 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c7db6f42-583a-450d-b142-ec7c5ae4eee0-etc-swift\") on node \"crc\" DevicePath \"\""
Jan 30 13:23:36 crc kubenswrapper[5039]: I0130 13:23:36.421612 5039 generic.go:334] "Generic (PLEG): container finished" podID="31674257-f143-40ab-97b9-dbf3153277c3" containerID="06f152352a68b2f2dd66ebb738ddc6ff20d454b66024c4bcad8df7bb81ecc8e6" exitCode=0
Jan 30 13:23:36 crc kubenswrapper[5039]: I0130 13:23:36.421729 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"31674257-f143-40ab-97b9-dbf3153277c3","Type":"ContainerDied","Data":"06f152352a68b2f2dd66ebb738ddc6ff20d454b66024c4bcad8df7bb81ecc8e6"}
Jan 30 13:23:36 crc kubenswrapper[5039]: I0130 13:23:36.425231 5039 generic.go:334] "Generic (PLEG): container finished" podID="106954f5-3ea7-4564-8479-407ef02320b7" containerID="d30261a228b7365f47808b71367e6d8ea8e412a39a4b2b4142bda6fbef770058" exitCode=0
Jan 30 13:23:36 crc kubenswrapper[5039]: I0130 13:23:36.425317 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"106954f5-3ea7-4564-8479-407ef02320b7","Type":"ContainerDied","Data":"d30261a228b7365f47808b71367e6d8ea8e412a39a4b2b4142bda6fbef770058"}
Jan 30 13:23:36 crc kubenswrapper[5039]: I0130 13:23:36.427174 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-6fssn" event={"ID":"c7db6f42-583a-450d-b142-ec7c5ae4eee0","Type":"ContainerDied","Data":"4cf49ef2e8c1ca74571a40425974dc064ff646b8c20647e22da254f1964d55f3"}
Jan 30 13:23:36 crc kubenswrapper[5039]: I0130 13:23:36.427198 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4cf49ef2e8c1ca74571a40425974dc064ff646b8c20647e22da254f1964d55f3"
Jan 30 13:23:36 crc kubenswrapper[5039]: I0130 13:23:36.427249 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-6fssn"
Jan 30 13:23:37 crc kubenswrapper[5039]: I0130 13:23:37.061659 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-cflr2"]
Jan 30 13:23:37 crc kubenswrapper[5039]: E0130 13:23:37.062077 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7db6f42-583a-450d-b142-ec7c5ae4eee0" containerName="swift-ring-rebalance"
Jan 30 13:23:37 crc kubenswrapper[5039]: I0130 13:23:37.062099 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7db6f42-583a-450d-b142-ec7c5ae4eee0" containerName="swift-ring-rebalance"
Jan 30 13:23:37 crc kubenswrapper[5039]: I0130 13:23:37.062295 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7db6f42-583a-450d-b142-ec7c5ae4eee0" containerName="swift-ring-rebalance"
Jan 30 13:23:37 crc kubenswrapper[5039]: I0130 13:23:37.062852 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-cflr2"
Jan 30 13:23:37 crc kubenswrapper[5039]: I0130 13:23:37.072483 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-cflr2"]
Jan 30 13:23:37 crc kubenswrapper[5039]: I0130 13:23:37.072794 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret"
Jan 30 13:23:37 crc kubenswrapper[5039]: I0130 13:23:37.082929 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19f1cc0b-fa31-4b4f-b15d-24ea13171a7f-operator-scripts\") pod \"root-account-create-update-cflr2\" (UID: \"19f1cc0b-fa31-4b4f-b15d-24ea13171a7f\") " pod="openstack/root-account-create-update-cflr2"
Jan 30 13:23:37 crc kubenswrapper[5039]: I0130 13:23:37.083343 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8b7m\" (UniqueName: \"kubernetes.io/projected/19f1cc0b-fa31-4b4f-b15d-24ea13171a7f-kube-api-access-f8b7m\") pod \"root-account-create-update-cflr2\" (UID: \"19f1cc0b-fa31-4b4f-b15d-24ea13171a7f\") " pod="openstack/root-account-create-update-cflr2"
Jan 30 13:23:37 crc kubenswrapper[5039]: I0130 13:23:37.187065 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8b7m\" (UniqueName: \"kubernetes.io/projected/19f1cc0b-fa31-4b4f-b15d-24ea13171a7f-kube-api-access-f8b7m\") pod \"root-account-create-update-cflr2\" (UID: \"19f1cc0b-fa31-4b4f-b15d-24ea13171a7f\") " pod="openstack/root-account-create-update-cflr2"
Jan 30 13:23:37 crc kubenswrapper[5039]: I0130 13:23:37.187158 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19f1cc0b-fa31-4b4f-b15d-24ea13171a7f-operator-scripts\") pod \"root-account-create-update-cflr2\" (UID: \"19f1cc0b-fa31-4b4f-b15d-24ea13171a7f\") " pod="openstack/root-account-create-update-cflr2"
Jan 30 13:23:37 crc kubenswrapper[5039]: I0130 13:23:37.187895 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19f1cc0b-fa31-4b4f-b15d-24ea13171a7f-operator-scripts\") pod \"root-account-create-update-cflr2\" (UID: \"19f1cc0b-fa31-4b4f-b15d-24ea13171a7f\") " pod="openstack/root-account-create-update-cflr2"
Jan 30 13:23:37 crc kubenswrapper[5039]: I0130 13:23:37.223531 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8b7m\" (UniqueName: \"kubernetes.io/projected/19f1cc0b-fa31-4b4f-b15d-24ea13171a7f-kube-api-access-f8b7m\") pod \"root-account-create-update-cflr2\" (UID: \"19f1cc0b-fa31-4b4f-b15d-24ea13171a7f\") " pod="openstack/root-account-create-update-cflr2"
Jan 30 13:23:37 crc kubenswrapper[5039]: I0130 13:23:37.397080 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-cflr2"
Jan 30 13:23:37 crc kubenswrapper[5039]: I0130 13:23:37.436825 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"31674257-f143-40ab-97b9-dbf3153277c3","Type":"ContainerStarted","Data":"7ba97c527dbddf7d5202ce4c016a3cf300e728cbada3ead1b220b90f12e25e20"}
Jan 30 13:23:37 crc kubenswrapper[5039]: I0130 13:23:37.490396 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8ada089a-5096-4658-829e-46ed96867c7e-etc-swift\") pod \"swift-storage-0\" (UID: \"8ada089a-5096-4658-829e-46ed96867c7e\") " pod="openstack/swift-storage-0"
Jan 30 13:23:37 crc kubenswrapper[5039]: I0130 13:23:37.495176 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8ada089a-5096-4658-829e-46ed96867c7e-etc-swift\") pod \"swift-storage-0\" (UID: \"8ada089a-5096-4658-829e-46ed96867c7e\") " pod="openstack/swift-storage-0"
Jan 30 13:23:37 crc kubenswrapper[5039]: I0130 13:23:37.754983 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Jan 30 13:23:37 crc kubenswrapper[5039]: I0130 13:23:37.903429 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-cflr2"]
Jan 30 13:23:38 crc kubenswrapper[5039]: I0130 13:23:38.233635 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Jan 30 13:23:38 crc kubenswrapper[5039]: W0130 13:23:38.242412 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ada089a_5096_4658_829e_46ed96867c7e.slice/crio-fb2dfe486000dec252178b29e94c43034fa100a8afb97586f748ed238b540b1e WatchSource:0}: Error finding container fb2dfe486000dec252178b29e94c43034fa100a8afb97586f748ed238b540b1e: Status 404 returned error can't find the container with id fb2dfe486000dec252178b29e94c43034fa100a8afb97586f748ed238b540b1e
Jan 30 13:23:38 crc kubenswrapper[5039]: I0130 13:23:38.459652 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-cflr2" event={"ID":"19f1cc0b-fa31-4b4f-b15d-24ea13171a7f","Type":"ContainerStarted","Data":"8b24568865345df3d71a7cdc726bd48448cee7108f22d23c7546645039b79148"}
Jan 30 13:23:38 crc kubenswrapper[5039]: I0130 13:23:38.459706 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-cflr2" event={"ID":"19f1cc0b-fa31-4b4f-b15d-24ea13171a7f","Type":"ContainerStarted","Data":"00ef2002f429fe85828ae17a7c876e6a2d7407ce4b7e99dd619d90eb3943fa33"}
Jan 30 13:23:38 crc kubenswrapper[5039]: I0130 13:23:38.465895 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ada089a-5096-4658-829e-46ed96867c7e","Type":"ContainerStarted","Data":"fb2dfe486000dec252178b29e94c43034fa100a8afb97586f748ed238b540b1e"}
Jan 30 13:23:38 crc kubenswrapper[5039]: I0130 13:23:38.473966 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"106954f5-3ea7-4564-8479-407ef02320b7","Type":"ContainerStarted","Data":"3c664e34c87d051b563e4d60927ac501a68af1e68c68fe93a675ec95cbd4729a"}
Jan 30 13:23:38 crc kubenswrapper[5039]: I0130 13:23:38.474294 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Jan 30 13:23:38 crc kubenswrapper[5039]: I0130 13:23:38.493775 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-cflr2" podStartSLOduration=1.4937533969999999 podStartE2EDuration="1.493753397s" podCreationTimestamp="2026-01-30 13:23:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:23:38.490824979 +0000 UTC m=+1183.151506206" watchObservedRunningTime="2026-01-30 13:23:38.493753397 +0000 UTC m=+1183.154434624"
Jan 30 13:23:38 crc kubenswrapper[5039]: I0130 13:23:38.577251 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.573724953 podStartE2EDuration="54.577231672s" podCreationTimestamp="2026-01-30 13:22:44 +0000 UTC" firstStartedPulling="2026-01-30 13:22:46.048161755 +0000 UTC m=+1130.708842982" lastFinishedPulling="2026-01-30 13:23:03.051668474 +0000 UTC m=+1147.712349701" observedRunningTime="2026-01-30 13:23:38.549800888 +0000 UTC m=+1183.210482145" watchObservedRunningTime="2026-01-30 13:23:38.577231672 +0000 UTC m=+1183.237912899"
Jan 30 13:23:39 crc kubenswrapper[5039]: I0130 13:23:39.499184 5039 generic.go:334] "Generic (PLEG): container finished" podID="19f1cc0b-fa31-4b4f-b15d-24ea13171a7f" containerID="8b24568865345df3d71a7cdc726bd48448cee7108f22d23c7546645039b79148" exitCode=0
Jan 30 13:23:39 crc kubenswrapper[5039]: I0130 13:23:39.499598 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-cflr2" event={"ID":"19f1cc0b-fa31-4b4f-b15d-24ea13171a7f","Type":"ContainerDied","Data":"8b24568865345df3d71a7cdc726bd48448cee7108f22d23c7546645039b79148"}
Jan 30 13:23:39 crc kubenswrapper[5039]: I0130 13:23:39.525130 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.273552096 podStartE2EDuration="55.525112769s" podCreationTimestamp="2026-01-30 13:22:44 +0000 UTC" firstStartedPulling="2026-01-30 13:22:45.859647709 +0000 UTC m=+1130.520328946" lastFinishedPulling="2026-01-30 13:23:03.111208392 +0000 UTC m=+1147.771889619" observedRunningTime="2026-01-30 13:23:38.580829809 +0000 UTC m=+1183.241511036" watchObservedRunningTime="2026-01-30 13:23:39.525112769 +0000 UTC m=+1184.185793996"
Jan 30 13:23:40 crc kubenswrapper[5039]: I0130 13:23:40.509808 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ada089a-5096-4658-829e-46ed96867c7e","Type":"ContainerStarted","Data":"fd878f745d4316bd7f334db23529af3d98a35240ec3295969bd07b87d5376409"}
Jan 30 13:23:40 crc kubenswrapper[5039]: I0130 13:23:40.510471 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ada089a-5096-4658-829e-46ed96867c7e","Type":"ContainerStarted","Data":"488e3367a6a8f8bce689530e4343a6e494edfb4a9ae6c3c4d1a46d9f1bf6df2d"}
Jan 30 13:23:40 crc kubenswrapper[5039]: I0130 13:23:40.510486 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ada089a-5096-4658-829e-46ed96867c7e","Type":"ContainerStarted","Data":"ba202a942609a01368fff886e42c540f33bb7959b6b854acea880eea7d0585f3"}
Jan 30 13:23:40 crc kubenswrapper[5039]: I0130 13:23:40.765769 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-cflr2"
Jan 30 13:23:40 crc kubenswrapper[5039]: I0130 13:23:40.844762 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8b7m\" (UniqueName: \"kubernetes.io/projected/19f1cc0b-fa31-4b4f-b15d-24ea13171a7f-kube-api-access-f8b7m\") pod \"19f1cc0b-fa31-4b4f-b15d-24ea13171a7f\" (UID: \"19f1cc0b-fa31-4b4f-b15d-24ea13171a7f\") "
Jan 30 13:23:40 crc kubenswrapper[5039]: I0130 13:23:40.844944 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19f1cc0b-fa31-4b4f-b15d-24ea13171a7f-operator-scripts\") pod \"19f1cc0b-fa31-4b4f-b15d-24ea13171a7f\" (UID: \"19f1cc0b-fa31-4b4f-b15d-24ea13171a7f\") "
Jan 30 13:23:40 crc kubenswrapper[5039]: I0130 13:23:40.846394 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19f1cc0b-fa31-4b4f-b15d-24ea13171a7f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "19f1cc0b-fa31-4b4f-b15d-24ea13171a7f" (UID: "19f1cc0b-fa31-4b4f-b15d-24ea13171a7f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:23:40 crc kubenswrapper[5039]: I0130 13:23:40.849878 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19f1cc0b-fa31-4b4f-b15d-24ea13171a7f-kube-api-access-f8b7m" (OuterVolumeSpecName: "kube-api-access-f8b7m") pod "19f1cc0b-fa31-4b4f-b15d-24ea13171a7f" (UID: "19f1cc0b-fa31-4b4f-b15d-24ea13171a7f"). InnerVolumeSpecName "kube-api-access-f8b7m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:23:40 crc kubenswrapper[5039]: I0130 13:23:40.947205 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19f1cc0b-fa31-4b4f-b15d-24ea13171a7f-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 13:23:40 crc kubenswrapper[5039]: I0130 13:23:40.947246 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8b7m\" (UniqueName: \"kubernetes.io/projected/19f1cc0b-fa31-4b4f-b15d-24ea13171a7f-kube-api-access-f8b7m\") on node \"crc\" DevicePath \"\""
Jan 30 13:23:41 crc kubenswrapper[5039]: I0130 13:23:41.254361 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-sqvrc" podUID="d4aa0600-fb12-4641-96a3-26cb56853bd3" containerName="ovn-controller" probeResult="failure" output=<
Jan 30 13:23:41 crc kubenswrapper[5039]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Jan 30 13:23:41 crc kubenswrapper[5039]: >
Jan 30 13:23:41 crc kubenswrapper[5039]: I0130 13:23:41.523983 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ada089a-5096-4658-829e-46ed96867c7e","Type":"ContainerStarted","Data":"4bf0094e462d7cc7679bbfe7a7bc2c0d4592c1307b816d192d6fc42e092c3617"}
Jan 30 13:23:41 crc kubenswrapper[5039]: I0130 13:23:41.527277 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-cflr2"
Jan 30 13:23:41 crc kubenswrapper[5039]: I0130 13:23:41.527281 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-cflr2" event={"ID":"19f1cc0b-fa31-4b4f-b15d-24ea13171a7f","Type":"ContainerDied","Data":"00ef2002f429fe85828ae17a7c876e6a2d7407ce4b7e99dd619d90eb3943fa33"}
Jan 30 13:23:41 crc kubenswrapper[5039]: I0130 13:23:41.527701 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00ef2002f429fe85828ae17a7c876e6a2d7407ce4b7e99dd619d90eb3943fa33"
Jan 30 13:23:41 crc kubenswrapper[5039]: I0130 13:23:41.844855 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Jan 30 13:23:45 crc kubenswrapper[5039]: I0130 13:23:45.635749 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Jan 30 13:23:46 crc kubenswrapper[5039]: I0130 13:23:46.236727 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-sqvrc" podUID="d4aa0600-fb12-4641-96a3-26cb56853bd3" containerName="ovn-controller" probeResult="failure" output=<
Jan 30 13:23:46 crc kubenswrapper[5039]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Jan 30 13:23:46 crc kubenswrapper[5039]: >
Jan 30 13:23:48 crc kubenswrapper[5039]: I0130 13:23:48.588843 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-hpk2s" event={"ID":"3cb443d1-8938-47af-ab3b-1912d9e72f4f","Type":"ContainerStarted","Data":"bbdaeb50bee12a55e0d3d2183b29f6b8fcef441a7bb1acf8b322cc542a66d9bd"}
Jan 30 13:23:48 crc kubenswrapper[5039]: I0130 13:23:48.596106 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ada089a-5096-4658-829e-46ed96867c7e","Type":"ContainerStarted","Data":"a752a70bb4f53e459731183ec59874ee325b0e767cc385834cb7df89532a1aec"}
Jan 30 13:23:48 crc kubenswrapper[5039]: I0130 13:23:48.596155 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ada089a-5096-4658-829e-46ed96867c7e","Type":"ContainerStarted","Data":"b0ee602fd935197661ffbde70a60dd36d9924c2f4817add1f894ac9adac66322"}
Jan 30 13:23:48 crc kubenswrapper[5039]: I0130 13:23:48.596169 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ada089a-5096-4658-829e-46ed96867c7e","Type":"ContainerStarted","Data":"29f3a517359c4166dbc7caad96c4a4e2cb91f850e2c881a59372b19e9eedcf08"}
Jan 30 13:23:48 crc kubenswrapper[5039]: I0130 13:23:48.610691 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-hpk2s" podStartSLOduration=1.673109113 podStartE2EDuration="14.61067108s" podCreationTimestamp="2026-01-30 13:23:34 +0000 UTC" firstStartedPulling="2026-01-30 13:23:35.038691308 +0000 UTC m=+1179.699372535" lastFinishedPulling="2026-01-30 13:23:47.976253275 +0000 UTC m=+1192.636934502" observedRunningTime="2026-01-30 13:23:48.603293523 +0000 UTC m=+1193.263974760" watchObservedRunningTime="2026-01-30 13:23:48.61067108 +0000 UTC m=+1193.271352307"
Jan 30 13:23:49 crc kubenswrapper[5039]: I0130 13:23:49.615981 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ada089a-5096-4658-829e-46ed96867c7e","Type":"ContainerStarted","Data":"eb5df1653f803341d6a4973ea612f45188b265af8c41b3c90d6691d5c611b9c2"}
Jan 30 13:23:51 crc kubenswrapper[5039]: I0130 13:23:51.251404 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-sqvrc" podUID="d4aa0600-fb12-4641-96a3-26cb56853bd3" containerName="ovn-controller" probeResult="failure" output=<
Jan 30 13:23:51 crc kubenswrapper[5039]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Jan 30 13:23:51 crc kubenswrapper[5039]: >
Jan 30 13:23:51 crc kubenswrapper[5039]: I0130 13:23:51.251879 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-z6nkm"
Jan 30 13:23:51 crc kubenswrapper[5039]: I0130 13:23:51.276477 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-z6nkm"
Jan 30 13:23:51 crc kubenswrapper[5039]: I0130 13:23:51.517919 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-sqvrc-config-92dhf"]
Jan 30 13:23:51 crc kubenswrapper[5039]: E0130 13:23:51.520205 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19f1cc0b-fa31-4b4f-b15d-24ea13171a7f" containerName="mariadb-account-create-update"
Jan 30 13:23:51 crc kubenswrapper[5039]: I0130 13:23:51.520227 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="19f1cc0b-fa31-4b4f-b15d-24ea13171a7f" containerName="mariadb-account-create-update"
Jan 30 13:23:51 crc kubenswrapper[5039]: I0130 13:23:51.520386 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="19f1cc0b-fa31-4b4f-b15d-24ea13171a7f" containerName="mariadb-account-create-update"
Jan 30 13:23:51 crc kubenswrapper[5039]: I0130 13:23:51.521005 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sqvrc-config-92dhf"
Jan 30 13:23:51 crc kubenswrapper[5039]: I0130 13:23:51.538603 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Jan 30 13:23:51 crc kubenswrapper[5039]: I0130 13:23:51.543684 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sqvrc-config-92dhf"]
Jan 30 13:23:51 crc kubenswrapper[5039]: I0130 13:23:51.646852 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/096dbf05-3d5b-45e8-8087-edefd10c1ea0-scripts\") pod \"ovn-controller-sqvrc-config-92dhf\" (UID: \"096dbf05-3d5b-45e8-8087-edefd10c1ea0\") " pod="openstack/ovn-controller-sqvrc-config-92dhf"
Jan 30 13:23:51 crc kubenswrapper[5039]: I0130 13:23:51.646927 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/096dbf05-3d5b-45e8-8087-edefd10c1ea0-var-run\") pod \"ovn-controller-sqvrc-config-92dhf\" (UID: \"096dbf05-3d5b-45e8-8087-edefd10c1ea0\") " pod="openstack/ovn-controller-sqvrc-config-92dhf"
Jan 30 13:23:51 crc kubenswrapper[5039]: I0130 13:23:51.647134 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/096dbf05-3d5b-45e8-8087-edefd10c1ea0-additional-scripts\") pod \"ovn-controller-sqvrc-config-92dhf\" (UID: \"096dbf05-3d5b-45e8-8087-edefd10c1ea0\") " pod="openstack/ovn-controller-sqvrc-config-92dhf"
Jan 30 13:23:51 crc kubenswrapper[5039]: I0130 13:23:51.647201 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvzw8\" (UniqueName: \"kubernetes.io/projected/096dbf05-3d5b-45e8-8087-edefd10c1ea0-kube-api-access-pvzw8\") pod \"ovn-controller-sqvrc-config-92dhf\" (UID: \"096dbf05-3d5b-45e8-8087-edefd10c1ea0\") " pod="openstack/ovn-controller-sqvrc-config-92dhf"
Jan 30 13:23:51 crc kubenswrapper[5039]: I0130 13:23:51.647238 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/096dbf05-3d5b-45e8-8087-edefd10c1ea0-var-log-ovn\") pod \"ovn-controller-sqvrc-config-92dhf\" (UID: \"096dbf05-3d5b-45e8-8087-edefd10c1ea0\") " pod="openstack/ovn-controller-sqvrc-config-92dhf"
Jan 30 13:23:51 crc kubenswrapper[5039]: I0130 13:23:51.647302 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/096dbf05-3d5b-45e8-8087-edefd10c1ea0-var-run-ovn\") pod \"ovn-controller-sqvrc-config-92dhf\" (UID: \"096dbf05-3d5b-45e8-8087-edefd10c1ea0\") " pod="openstack/ovn-controller-sqvrc-config-92dhf"
Jan 30 13:23:51 crc kubenswrapper[5039]: I0130 13:23:51.748935 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/096dbf05-3d5b-45e8-8087-edefd10c1ea0-additional-scripts\") pod \"ovn-controller-sqvrc-config-92dhf\" (UID: \"096dbf05-3d5b-45e8-8087-edefd10c1ea0\") " pod="openstack/ovn-controller-sqvrc-config-92dhf"
Jan 30 13:23:51 crc kubenswrapper[5039]: I0130 13:23:51.749096 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvzw8\" (UniqueName: \"kubernetes.io/projected/096dbf05-3d5b-45e8-8087-edefd10c1ea0-kube-api-access-pvzw8\") pod \"ovn-controller-sqvrc-config-92dhf\" (UID: \"096dbf05-3d5b-45e8-8087-edefd10c1ea0\") " pod="openstack/ovn-controller-sqvrc-config-92dhf"
Jan 30 13:23:51 crc kubenswrapper[5039]: I0130 13:23:51.749152 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/096dbf05-3d5b-45e8-8087-edefd10c1ea0-var-log-ovn\") pod \"ovn-controller-sqvrc-config-92dhf\" (UID: \"096dbf05-3d5b-45e8-8087-edefd10c1ea0\") " pod="openstack/ovn-controller-sqvrc-config-92dhf"
Jan 30 13:23:51 crc kubenswrapper[5039]: I0130 13:23:51.749252 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/096dbf05-3d5b-45e8-8087-edefd10c1ea0-var-run-ovn\") pod \"ovn-controller-sqvrc-config-92dhf\" (UID: \"096dbf05-3d5b-45e8-8087-edefd10c1ea0\") " pod="openstack/ovn-controller-sqvrc-config-92dhf"
Jan 30 13:23:51 crc kubenswrapper[5039]: I0130 13:23:51.749303 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/096dbf05-3d5b-45e8-8087-edefd10c1ea0-scripts\") pod \"ovn-controller-sqvrc-config-92dhf\" (UID: \"096dbf05-3d5b-45e8-8087-edefd10c1ea0\") " pod="openstack/ovn-controller-sqvrc-config-92dhf"
Jan 30 13:23:51 crc kubenswrapper[5039]: I0130 13:23:51.749344 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/096dbf05-3d5b-45e8-8087-edefd10c1ea0-var-run\") pod \"ovn-controller-sqvrc-config-92dhf\" (UID: \"096dbf05-3d5b-45e8-8087-edefd10c1ea0\") " pod="openstack/ovn-controller-sqvrc-config-92dhf"
Jan 30 13:23:51 crc kubenswrapper[5039]: I0130 13:23:51.749486 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/096dbf05-3d5b-45e8-8087-edefd10c1ea0-var-log-ovn\") pod \"ovn-controller-sqvrc-config-92dhf\" (UID: \"096dbf05-3d5b-45e8-8087-edefd10c1ea0\") " pod="openstack/ovn-controller-sqvrc-config-92dhf"
Jan 30 13:23:51 crc kubenswrapper[5039]: I0130 13:23:51.749522 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/096dbf05-3d5b-45e8-8087-edefd10c1ea0-var-run\") pod \"ovn-controller-sqvrc-config-92dhf\" (UID: \"096dbf05-3d5b-45e8-8087-edefd10c1ea0\") " pod="openstack/ovn-controller-sqvrc-config-92dhf"
Jan 30 13:23:51 crc kubenswrapper[5039]: I0130 13:23:51.749552 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/096dbf05-3d5b-45e8-8087-edefd10c1ea0-var-run-ovn\") pod \"ovn-controller-sqvrc-config-92dhf\" (UID: \"096dbf05-3d5b-45e8-8087-edefd10c1ea0\") " pod="openstack/ovn-controller-sqvrc-config-92dhf"
Jan 30 13:23:51 crc kubenswrapper[5039]: I0130 13:23:51.750051 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/096dbf05-3d5b-45e8-8087-edefd10c1ea0-additional-scripts\") pod \"ovn-controller-sqvrc-config-92dhf\" (UID: \"096dbf05-3d5b-45e8-8087-edefd10c1ea0\") " pod="openstack/ovn-controller-sqvrc-config-92dhf"
Jan 30 13:23:51 crc kubenswrapper[5039]: I0130 13:23:51.751313 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/096dbf05-3d5b-45e8-8087-edefd10c1ea0-scripts\") pod \"ovn-controller-sqvrc-config-92dhf\" (UID: \"096dbf05-3d5b-45e8-8087-edefd10c1ea0\") " pod="openstack/ovn-controller-sqvrc-config-92dhf"
Jan 30 13:23:51 crc kubenswrapper[5039]: I0130 13:23:51.767872 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvzw8\" (UniqueName: \"kubernetes.io/projected/096dbf05-3d5b-45e8-8087-edefd10c1ea0-kube-api-access-pvzw8\") pod \"ovn-controller-sqvrc-config-92dhf\" (UID: \"096dbf05-3d5b-45e8-8087-edefd10c1ea0\") " pod="openstack/ovn-controller-sqvrc-config-92dhf"
Jan 30 13:23:51 crc kubenswrapper[5039]: I0130 13:23:51.845290 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sqvrc-config-92dhf"
Jan 30 13:23:52 crc kubenswrapper[5039]: I0130 13:23:52.645647 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sqvrc-config-92dhf"]
Jan 30 13:23:52 crc kubenswrapper[5039]: W0130 13:23:52.654555 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod096dbf05_3d5b_45e8_8087_edefd10c1ea0.slice/crio-dfc00d705d51a3545d26f05bc0f6a36dbf92f24530c6a01bf82a42ca500ec8d8 WatchSource:0}: Error finding container dfc00d705d51a3545d26f05bc0f6a36dbf92f24530c6a01bf82a42ca500ec8d8: Status 404 returned error can't find the container with id dfc00d705d51a3545d26f05bc0f6a36dbf92f24530c6a01bf82a42ca500ec8d8
Jan 30 13:23:53 crc kubenswrapper[5039]: I0130 13:23:53.662141 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sqvrc-config-92dhf" event={"ID":"096dbf05-3d5b-45e8-8087-edefd10c1ea0","Type":"ContainerStarted","Data":"dfc00d705d51a3545d26f05bc0f6a36dbf92f24530c6a01bf82a42ca500ec8d8"}
Jan 30 13:23:54 crc kubenswrapper[5039]: I0130 13:23:54.678995 5039 generic.go:334] "Generic (PLEG): container finished" podID="096dbf05-3d5b-45e8-8087-edefd10c1ea0" containerID="25cf01cdb2c071d0d2cb426f4f190b615179a1fcebb54e3aa81c3d4ab00fee22" exitCode=0
Jan 30 13:23:54 crc kubenswrapper[5039]: I0130 13:23:54.679178 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sqvrc-config-92dhf" event={"ID":"096dbf05-3d5b-45e8-8087-edefd10c1ea0","Type":"ContainerDied","Data":"25cf01cdb2c071d0d2cb426f4f190b615179a1fcebb54e3aa81c3d4ab00fee22"}
Jan 30 13:23:54 crc kubenswrapper[5039]: I0130 13:23:54.686999 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ada089a-5096-4658-829e-46ed96867c7e","Type":"ContainerStarted","Data":"5ba1fa28c490036b77df42fd557a82a136b5d4470aacbcf035106a2aa9a5c19c"}
Jan 30 13:23:54 crc kubenswrapper[5039]: I0130 13:23:54.687054 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ada089a-5096-4658-829e-46ed96867c7e","Type":"ContainerStarted","Data":"ddfd428ecd993351c674d784439b36da1f4749c251689b43fddc8f90227f4508"}
Jan 30 13:23:54 crc kubenswrapper[5039]: I0130 13:23:54.687066 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ada089a-5096-4658-829e-46ed96867c7e","Type":"ContainerStarted","Data":"5205854bc586c085d9a8181d38c8a593892643b626180d99562c81611b88b68b"}
Jan 30 13:23:54 crc kubenswrapper[5039]: I0130 13:23:54.687074 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ada089a-5096-4658-829e-46ed96867c7e","Type":"ContainerStarted","Data":"154eaf7906ffca8c1b0afe8de8ea1d908782a67ddbbd3939ea4855866e582d9e"}
Jan 30 13:23:55 crc kubenswrapper[5039]: I0130 13:23:55.357535 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Jan 30 13:23:55 crc kubenswrapper[5039]: I0130 13:23:55.637231 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Jan 30 13:23:55 crc kubenswrapper[5039]: I0130 13:23:55.700390 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ada089a-5096-4658-829e-46ed96867c7e","Type":"ContainerStarted","Data":"15cad4c835a7ea15a16cc7a14b50750d2833b7e260d8bb3166f6679d6cd024bc"}
Jan 30 13:23:55 crc kubenswrapper[5039]: I0130 13:23:55.744611 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-8grpr"]
Jan 30 13:23:55 crc kubenswrapper[5039]: I0130 13:23:55.747166 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-8grpr"
Jan 30 13:23:55 crc kubenswrapper[5039]: I0130 13:23:55.760573 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-8grpr"]
Jan 30 13:23:55 crc kubenswrapper[5039]: I0130 13:23:55.811854 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-pptnb"]
Jan 30 13:23:55 crc kubenswrapper[5039]: I0130 13:23:55.814585 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-pptnb"
Jan 30 13:23:55 crc kubenswrapper[5039]: I0130 13:23:55.830097 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlwx9\" (UniqueName: \"kubernetes.io/projected/7a51040a-32e7-43d3-8fd2-8ce22ac5dde6-kube-api-access-tlwx9\") pod \"cinder-db-create-8grpr\" (UID: \"7a51040a-32e7-43d3-8fd2-8ce22ac5dde6\") " pod="openstack/cinder-db-create-8grpr"
Jan 30 13:23:55 crc kubenswrapper[5039]: I0130 13:23:55.830197 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a51040a-32e7-43d3-8fd2-8ce22ac5dde6-operator-scripts\") pod \"cinder-db-create-8grpr\" (UID: \"7a51040a-32e7-43d3-8fd2-8ce22ac5dde6\") " pod="openstack/cinder-db-create-8grpr"
Jan 30 13:23:55 crc kubenswrapper[5039]: I0130 13:23:55.833344 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-pptnb"]
Jan 30 13:23:55 crc kubenswrapper[5039]: I0130 13:23:55.908934 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-0596-account-create-update-nklv5"]
Jan 30 13:23:55 crc kubenswrapper[5039]: I0130 13:23:55.910147 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-0596-account-create-update-nklv5"
Jan 30 13:23:55 crc kubenswrapper[5039]: I0130 13:23:55.911869 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Jan 30 13:23:55 crc kubenswrapper[5039]: I0130 13:23:55.929692 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-0596-account-create-update-nklv5"]
Jan 30 13:23:55 crc kubenswrapper[5039]: I0130 13:23:55.932474 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a51040a-32e7-43d3-8fd2-8ce22ac5dde6-operator-scripts\") pod \"cinder-db-create-8grpr\" (UID: \"7a51040a-32e7-43d3-8fd2-8ce22ac5dde6\") " pod="openstack/cinder-db-create-8grpr"
Jan 30 13:23:55 crc kubenswrapper[5039]: I0130 13:23:55.932525 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb7bw\" (UniqueName: \"kubernetes.io/projected/45c105ac-a6f3-40f4-8543-3d8fe84f6132-kube-api-access-wb7bw\") pod \"barbican-db-create-pptnb\" (UID: \"45c105ac-a6f3-40f4-8543-3d8fe84f6132\") " pod="openstack/barbican-db-create-pptnb"
Jan 30 13:23:55 crc kubenswrapper[5039]: I0130 13:23:55.932560 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45c105ac-a6f3-40f4-8543-3d8fe84f6132-operator-scripts\") pod \"barbican-db-create-pptnb\" (UID: \"45c105ac-a6f3-40f4-8543-3d8fe84f6132\") " pod="openstack/barbican-db-create-pptnb"
Jan 30 13:23:55 crc kubenswrapper[5039]: I0130 13:23:55.932659 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlwx9\" (UniqueName: \"kubernetes.io/projected/7a51040a-32e7-43d3-8fd2-8ce22ac5dde6-kube-api-access-tlwx9\") pod \"cinder-db-create-8grpr\" (UID: \"7a51040a-32e7-43d3-8fd2-8ce22ac5dde6\") " pod="openstack/cinder-db-create-8grpr"
Jan 30 13:23:55 crc kubenswrapper[5039]: I0130 13:23:55.933790 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a51040a-32e7-43d3-8fd2-8ce22ac5dde6-operator-scripts\") pod \"cinder-db-create-8grpr\" (UID: \"7a51040a-32e7-43d3-8fd2-8ce22ac5dde6\") " pod="openstack/cinder-db-create-8grpr"
Jan 30 13:23:55 crc kubenswrapper[5039]: I0130 13:23:55.955320 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlwx9\" (UniqueName: \"kubernetes.io/projected/7a51040a-32e7-43d3-8fd2-8ce22ac5dde6-kube-api-access-tlwx9\") pod \"cinder-db-create-8grpr\" (UID: \"7a51040a-32e7-43d3-8fd2-8ce22ac5dde6\") " pod="openstack/cinder-db-create-8grpr"
Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.020230 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-6646-account-create-update-wpkcq"]
Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.021306 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-6646-account-create-update-wpkcq"
Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.025946 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.033768 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-6646-account-create-update-wpkcq"]
Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.034195 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khfcv\" (UniqueName: \"kubernetes.io/projected/34b4ac27-da03-43e8-874d-7feb1000f162-kube-api-access-khfcv\") pod \"cinder-0596-account-create-update-nklv5\" (UID: \"34b4ac27-da03-43e8-874d-7feb1000f162\") " pod="openstack/cinder-0596-account-create-update-nklv5"
Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.034277 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34b4ac27-da03-43e8-874d-7feb1000f162-operator-scripts\") pod \"cinder-0596-account-create-update-nklv5\" (UID: \"34b4ac27-da03-43e8-874d-7feb1000f162\") " pod="openstack/cinder-0596-account-create-update-nklv5"
Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.034309 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb7bw\" (UniqueName: \"kubernetes.io/projected/45c105ac-a6f3-40f4-8543-3d8fe84f6132-kube-api-access-wb7bw\") pod \"barbican-db-create-pptnb\" (UID: \"45c105ac-a6f3-40f4-8543-3d8fe84f6132\") " pod="openstack/barbican-db-create-pptnb"
Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.034330 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45c105ac-a6f3-40f4-8543-3d8fe84f6132-operator-scripts\") pod \"barbican-db-create-pptnb\" (UID: \"45c105ac-a6f3-40f4-8543-3d8fe84f6132\") " pod="openstack/barbican-db-create-pptnb"
Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.035005 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45c105ac-a6f3-40f4-8543-3d8fe84f6132-operator-scripts\") pod \"barbican-db-create-pptnb\" (UID: \"45c105ac-a6f3-40f4-8543-3d8fe84f6132\") " pod="openstack/barbican-db-create-pptnb"
Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.063940 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb7bw\" (UniqueName: \"kubernetes.io/projected/45c105ac-a6f3-40f4-8543-3d8fe84f6132-kube-api-access-wb7bw\") pod \"barbican-db-create-pptnb\" (UID: \"45c105ac-a6f3-40f4-8543-3d8fe84f6132\") " pod="openstack/barbican-db-create-pptnb"
Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.079231 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-8grpr"
Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.130924 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-jtpkf"]
Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.131982 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-jtpkf"
Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.136347 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khfcv\" (UniqueName: \"kubernetes.io/projected/34b4ac27-da03-43e8-874d-7feb1000f162-kube-api-access-khfcv\") pod \"cinder-0596-account-create-update-nklv5\" (UID: \"34b4ac27-da03-43e8-874d-7feb1000f162\") " pod="openstack/cinder-0596-account-create-update-nklv5"
Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.136475 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34b4ac27-da03-43e8-874d-7feb1000f162-operator-scripts\") pod \"cinder-0596-account-create-update-nklv5\" (UID: \"34b4ac27-da03-43e8-874d-7feb1000f162\") " pod="openstack/cinder-0596-account-create-update-nklv5"
Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.136533 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20bee34b-7616-41d8-8761-12c09c8523e3-operator-scripts\") pod \"barbican-6646-account-create-update-wpkcq\" (UID: \"20bee34b-7616-41d8-8761-12c09c8523e3\") " pod="openstack/barbican-6646-account-create-update-wpkcq"
Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.136579 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wptkm\" (UniqueName: \"kubernetes.io/projected/20bee34b-7616-41d8-8761-12c09c8523e3-kube-api-access-wptkm\") pod \"barbican-6646-account-create-update-wpkcq\" (UID: \"20bee34b-7616-41d8-8761-12c09c8523e3\") " pod="openstack/barbican-6646-account-create-update-wpkcq"
Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.137569 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34b4ac27-da03-43e8-874d-7feb1000f162-operator-scripts\") pod \"cinder-0596-account-create-update-nklv5\" (UID: \"34b4ac27-da03-43e8-874d-7feb1000f162\") " pod="openstack/cinder-0596-account-create-update-nklv5"
Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.137787 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-pptnb"
Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.164081 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-jtpkf"]
Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.170487 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khfcv\" (UniqueName: \"kubernetes.io/projected/34b4ac27-da03-43e8-874d-7feb1000f162-kube-api-access-khfcv\") pod \"cinder-0596-account-create-update-nklv5\" (UID: \"34b4ac27-da03-43e8-874d-7feb1000f162\") " pod="openstack/cinder-0596-account-create-update-nklv5"
Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.229599 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-0596-account-create-update-nklv5"
Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.238606 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wptkm\" (UniqueName: \"kubernetes.io/projected/20bee34b-7616-41d8-8761-12c09c8523e3-kube-api-access-wptkm\") pod \"barbican-6646-account-create-update-wpkcq\" (UID: \"20bee34b-7616-41d8-8761-12c09c8523e3\") " pod="openstack/barbican-6646-account-create-update-wpkcq"
Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.238645 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f73f9b07-439c-418f-a04a-bc0aae17e21a-operator-scripts\") pod \"neutron-db-create-jtpkf\" (UID: \"f73f9b07-439c-418f-a04a-bc0aae17e21a\") " pod="openstack/neutron-db-create-jtpkf"
Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.248447 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp55c\" (UniqueName: \"kubernetes.io/projected/f73f9b07-439c-418f-a04a-bc0aae17e21a-kube-api-access-tp55c\") pod \"neutron-db-create-jtpkf\" (UID: \"f73f9b07-439c-418f-a04a-bc0aae17e21a\") " pod="openstack/neutron-db-create-jtpkf"
Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.248609 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20bee34b-7616-41d8-8761-12c09c8523e3-operator-scripts\") pod \"barbican-6646-account-create-update-wpkcq\" (UID: \"20bee34b-7616-41d8-8761-12c09c8523e3\") " pod="openstack/barbican-6646-account-create-update-wpkcq"
Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.250550 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20bee34b-7616-41d8-8761-12c09c8523e3-operator-scripts\") pod \"barbican-6646-account-create-update-wpkcq\" (UID: \"20bee34b-7616-41d8-8761-12c09c8523e3\") " pod="openstack/barbican-6646-account-create-update-wpkcq"
Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.261501 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-rdj8j"]
Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.266850 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-rdj8j" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.271391 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.271497 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-fgjcf" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.271627 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.277722 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.315812 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wptkm\" (UniqueName: \"kubernetes.io/projected/20bee34b-7616-41d8-8761-12c09c8523e3-kube-api-access-wptkm\") pod \"barbican-6646-account-create-update-wpkcq\" (UID: \"20bee34b-7616-41d8-8761-12c09c8523e3\") " pod="openstack/barbican-6646-account-create-update-wpkcq" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.324685 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-rdj8j"] Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.334257 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-sqvrc" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.346444 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-fae2-account-create-update-l2z9v"] Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.347539 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-fae2-account-create-update-l2z9v" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.350110 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tp55c\" (UniqueName: \"kubernetes.io/projected/f73f9b07-439c-418f-a04a-bc0aae17e21a-kube-api-access-tp55c\") pod \"neutron-db-create-jtpkf\" (UID: \"f73f9b07-439c-418f-a04a-bc0aae17e21a\") " pod="openstack/neutron-db-create-jtpkf" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.350135 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d14a598e-e058-4b9d-8d57-6f0db418de2c-config-data\") pod \"keystone-db-sync-rdj8j\" (UID: \"d14a598e-e058-4b9d-8d57-6f0db418de2c\") " pod="openstack/keystone-db-sync-rdj8j" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.350169 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d14a598e-e058-4b9d-8d57-6f0db418de2c-combined-ca-bundle\") pod \"keystone-db-sync-rdj8j\" (UID: \"d14a598e-e058-4b9d-8d57-6f0db418de2c\") " pod="openstack/keystone-db-sync-rdj8j" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.350209 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f73f9b07-439c-418f-a04a-bc0aae17e21a-operator-scripts\") pod \"neutron-db-create-jtpkf\" (UID: \"f73f9b07-439c-418f-a04a-bc0aae17e21a\") " pod="openstack/neutron-db-create-jtpkf" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.350234 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-kfqj9\" (UniqueName: \"kubernetes.io/projected/d14a598e-e058-4b9d-8d57-6f0db418de2c-kube-api-access-kfqj9\") pod \"keystone-db-sync-rdj8j\" (UID: \"d14a598e-e058-4b9d-8d57-6f0db418de2c\") " pod="openstack/keystone-db-sync-rdj8j" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.350375 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.352263 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f73f9b07-439c-418f-a04a-bc0aae17e21a-operator-scripts\") pod \"neutron-db-create-jtpkf\" (UID: \"f73f9b07-439c-418f-a04a-bc0aae17e21a\") " pod="openstack/neutron-db-create-jtpkf" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.353980 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-6646-account-create-update-wpkcq" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.366993 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sqvrc-config-92dhf" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.380827 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-fae2-account-create-update-l2z9v"] Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.392377 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tp55c\" (UniqueName: \"kubernetes.io/projected/f73f9b07-439c-418f-a04a-bc0aae17e21a-kube-api-access-tp55c\") pod \"neutron-db-create-jtpkf\" (UID: \"f73f9b07-439c-418f-a04a-bc0aae17e21a\") " pod="openstack/neutron-db-create-jtpkf" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.451280 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/096dbf05-3d5b-45e8-8087-edefd10c1ea0-var-run-ovn\") pod \"096dbf05-3d5b-45e8-8087-edefd10c1ea0\" (UID: \"096dbf05-3d5b-45e8-8087-edefd10c1ea0\") " Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.451427 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/096dbf05-3d5b-45e8-8087-edefd10c1ea0-scripts\") pod \"096dbf05-3d5b-45e8-8087-edefd10c1ea0\" (UID: \"096dbf05-3d5b-45e8-8087-edefd10c1ea0\") " Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.451460 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/096dbf05-3d5b-45e8-8087-edefd10c1ea0-var-log-ovn\") pod \"096dbf05-3d5b-45e8-8087-edefd10c1ea0\" (UID: \"096dbf05-3d5b-45e8-8087-edefd10c1ea0\") " Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.451511 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvzw8\" (UniqueName: \"kubernetes.io/projected/096dbf05-3d5b-45e8-8087-edefd10c1ea0-kube-api-access-pvzw8\") pod \"096dbf05-3d5b-45e8-8087-edefd10c1ea0\" (UID: \"096dbf05-3d5b-45e8-8087-edefd10c1ea0\") " Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.451531 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/096dbf05-3d5b-45e8-8087-edefd10c1ea0-var-run\") pod \"096dbf05-3d5b-45e8-8087-edefd10c1ea0\" (UID: \"096dbf05-3d5b-45e8-8087-edefd10c1ea0\") " Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 
13:23:56.451547 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/096dbf05-3d5b-45e8-8087-edefd10c1ea0-additional-scripts\") pod \"096dbf05-3d5b-45e8-8087-edefd10c1ea0\" (UID: \"096dbf05-3d5b-45e8-8087-edefd10c1ea0\") " Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.451778 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55556e4d-2818-46de-b888-7a5be04f2a5c-operator-scripts\") pod \"neutron-fae2-account-create-update-l2z9v\" (UID: \"55556e4d-2818-46de-b888-7a5be04f2a5c\") " pod="openstack/neutron-fae2-account-create-update-l2z9v" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.451821 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfqj9\" (UniqueName: \"kubernetes.io/projected/d14a598e-e058-4b9d-8d57-6f0db418de2c-kube-api-access-kfqj9\") pod \"keystone-db-sync-rdj8j\" (UID: \"d14a598e-e058-4b9d-8d57-6f0db418de2c\") " pod="openstack/keystone-db-sync-rdj8j" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.451843 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fbjl\" (UniqueName: \"kubernetes.io/projected/55556e4d-2818-46de-b888-7a5be04f2a5c-kube-api-access-4fbjl\") pod \"neutron-fae2-account-create-update-l2z9v\" (UID: \"55556e4d-2818-46de-b888-7a5be04f2a5c\") " pod="openstack/neutron-fae2-account-create-update-l2z9v" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.451931 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d14a598e-e058-4b9d-8d57-6f0db418de2c-config-data\") pod \"keystone-db-sync-rdj8j\" (UID: \"d14a598e-e058-4b9d-8d57-6f0db418de2c\") " pod="openstack/keystone-db-sync-rdj8j" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.452151 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d14a598e-e058-4b9d-8d57-6f0db418de2c-combined-ca-bundle\") pod \"keystone-db-sync-rdj8j\" (UID: \"d14a598e-e058-4b9d-8d57-6f0db418de2c\") " pod="openstack/keystone-db-sync-rdj8j" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.455396 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d14a598e-e058-4b9d-8d57-6f0db418de2c-combined-ca-bundle\") pod \"keystone-db-sync-rdj8j\" (UID: \"d14a598e-e058-4b9d-8d57-6f0db418de2c\") " pod="openstack/keystone-db-sync-rdj8j" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.455441 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/096dbf05-3d5b-45e8-8087-edefd10c1ea0-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "096dbf05-3d5b-45e8-8087-edefd10c1ea0" (UID: "096dbf05-3d5b-45e8-8087-edefd10c1ea0"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.456346 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/096dbf05-3d5b-45e8-8087-edefd10c1ea0-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "096dbf05-3d5b-45e8-8087-edefd10c1ea0" (UID: "096dbf05-3d5b-45e8-8087-edefd10c1ea0"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.456633 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/096dbf05-3d5b-45e8-8087-edefd10c1ea0-scripts" (OuterVolumeSpecName: "scripts") pod "096dbf05-3d5b-45e8-8087-edefd10c1ea0" (UID: "096dbf05-3d5b-45e8-8087-edefd10c1ea0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.456747 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/096dbf05-3d5b-45e8-8087-edefd10c1ea0-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "096dbf05-3d5b-45e8-8087-edefd10c1ea0" (UID: "096dbf05-3d5b-45e8-8087-edefd10c1ea0"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.457124 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/096dbf05-3d5b-45e8-8087-edefd10c1ea0-var-run" (OuterVolumeSpecName: "var-run") pod "096dbf05-3d5b-45e8-8087-edefd10c1ea0" (UID: "096dbf05-3d5b-45e8-8087-edefd10c1ea0"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.464459 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/096dbf05-3d5b-45e8-8087-edefd10c1ea0-kube-api-access-pvzw8" (OuterVolumeSpecName: "kube-api-access-pvzw8") pod "096dbf05-3d5b-45e8-8087-edefd10c1ea0" (UID: "096dbf05-3d5b-45e8-8087-edefd10c1ea0"). InnerVolumeSpecName "kube-api-access-pvzw8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.468698 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d14a598e-e058-4b9d-8d57-6f0db418de2c-config-data\") pod \"keystone-db-sync-rdj8j\" (UID: \"d14a598e-e058-4b9d-8d57-6f0db418de2c\") " pod="openstack/keystone-db-sync-rdj8j" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.513681 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfqj9\" (UniqueName: \"kubernetes.io/projected/d14a598e-e058-4b9d-8d57-6f0db418de2c-kube-api-access-kfqj9\") pod \"keystone-db-sync-rdj8j\" (UID: \"d14a598e-e058-4b9d-8d57-6f0db418de2c\") " pod="openstack/keystone-db-sync-rdj8j" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.555467 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55556e4d-2818-46de-b888-7a5be04f2a5c-operator-scripts\") pod \"neutron-fae2-account-create-update-l2z9v\" (UID: \"55556e4d-2818-46de-b888-7a5be04f2a5c\") " pod="openstack/neutron-fae2-account-create-update-l2z9v" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.555544 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fbjl\" (UniqueName: \"kubernetes.io/projected/55556e4d-2818-46de-b888-7a5be04f2a5c-kube-api-access-4fbjl\") pod \"neutron-fae2-account-create-update-l2z9v\" (UID: \"55556e4d-2818-46de-b888-7a5be04f2a5c\") " pod="openstack/neutron-fae2-account-create-update-l2z9v" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.555644 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvzw8\" 
(UniqueName: \"kubernetes.io/projected/096dbf05-3d5b-45e8-8087-edefd10c1ea0-kube-api-access-pvzw8\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.555655 5039 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/096dbf05-3d5b-45e8-8087-edefd10c1ea0-var-run\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.555664 5039 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/096dbf05-3d5b-45e8-8087-edefd10c1ea0-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.555673 5039 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/096dbf05-3d5b-45e8-8087-edefd10c1ea0-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.555681 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/096dbf05-3d5b-45e8-8087-edefd10c1ea0-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.555689 5039 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/096dbf05-3d5b-45e8-8087-edefd10c1ea0-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.556233 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55556e4d-2818-46de-b888-7a5be04f2a5c-operator-scripts\") pod \"neutron-fae2-account-create-update-l2z9v\" (UID: \"55556e4d-2818-46de-b888-7a5be04f2a5c\") " pod="openstack/neutron-fae2-account-create-update-l2z9v" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.577540 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fbjl\" (UniqueName: \"kubernetes.io/projected/55556e4d-2818-46de-b888-7a5be04f2a5c-kube-api-access-4fbjl\") pod \"neutron-fae2-account-create-update-l2z9v\" (UID: \"55556e4d-2818-46de-b888-7a5be04f2a5c\") " pod="openstack/neutron-fae2-account-create-update-l2z9v" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.588392 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-jtpkf" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.640162 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-8grpr"] Jan 30 13:23:56 crc kubenswrapper[5039]: W0130 13:23:56.649078 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a51040a_32e7_43d3_8fd2_8ce22ac5dde6.slice/crio-a1a0af7b2948d9726ce66e41a9d8fc0969ba019e1e8a009d0e21e9e6111aae0b WatchSource:0}: Error finding container a1a0af7b2948d9726ce66e41a9d8fc0969ba019e1e8a009d0e21e9e6111aae0b: Status 404 returned error can't find the container with id a1a0af7b2948d9726ce66e41a9d8fc0969ba019e1e8a009d0e21e9e6111aae0b Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.694083 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-rdj8j" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.710985 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-fae2-account-create-update-l2z9v" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.724772 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-8grpr" event={"ID":"7a51040a-32e7-43d3-8fd2-8ce22ac5dde6","Type":"ContainerStarted","Data":"a1a0af7b2948d9726ce66e41a9d8fc0969ba019e1e8a009d0e21e9e6111aae0b"} Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.730754 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ada089a-5096-4658-829e-46ed96867c7e","Type":"ContainerStarted","Data":"f2d984c92bde9d5613eeb38621a8af92136193a55538f05717915d1bde3264df"} Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.763782 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sqvrc-config-92dhf" event={"ID":"096dbf05-3d5b-45e8-8087-edefd10c1ea0","Type":"ContainerDied","Data":"dfc00d705d51a3545d26f05bc0f6a36dbf92f24530c6a01bf82a42ca500ec8d8"} Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.763830 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfc00d705d51a3545d26f05bc0f6a36dbf92f24530c6a01bf82a42ca500ec8d8" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.763895 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sqvrc-config-92dhf" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.910360 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-pptnb"] Jan 30 13:23:56 crc kubenswrapper[5039]: W0130 13:23:56.966374 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45c105ac_a6f3_40f4_8543_3d8fe84f6132.slice/crio-db14bf207a6e7962eb23371f29f5f514ad518f30d7c0d5982951b06ec3290c99 WatchSource:0}: Error finding container db14bf207a6e7962eb23371f29f5f514ad518f30d7c0d5982951b06ec3290c99: Status 404 returned error can't find the container with id db14bf207a6e7962eb23371f29f5f514ad518f30d7c0d5982951b06ec3290c99 Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.983546 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-6646-account-create-update-wpkcq"] Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.115114 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-0596-account-create-update-nklv5"] Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.169327 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.233624 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-jtpkf"] Jan 30 13:23:57 crc kubenswrapper[5039]: W0130 13:23:57.258203 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf73f9b07_439c_418f_a04a_bc0aae17e21a.slice/crio-991e9693a559e1f17e14c9f5904fbc71b43f13dc65a6f2c6f49e7e3c6d7f070f WatchSource:0}: Error finding container 991e9693a559e1f17e14c9f5904fbc71b43f13dc65a6f2c6f49e7e3c6d7f070f: Status 404 returned error can't find the container with id 991e9693a559e1f17e14c9f5904fbc71b43f13dc65a6f2c6f49e7e3c6d7f070f Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.278304 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-rdj8j"] Jan 30 13:23:57 crc kubenswrapper[5039]: W0130 
13:23:57.319779 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd14a598e_e058_4b9d_8d57_6f0db418de2c.slice/crio-7bc00ec74b2da9d8989c764ea627356c97f0f1ae07990bce5f0fc88f4dd44e4a WatchSource:0}: Error finding container 7bc00ec74b2da9d8989c764ea627356c97f0f1ae07990bce5f0fc88f4dd44e4a: Status 404 returned error can't find the container with id 7bc00ec74b2da9d8989c764ea627356c97f0f1ae07990bce5f0fc88f4dd44e4a Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.338677 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-fae2-account-create-update-l2z9v"] Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.472705 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-sqvrc-config-92dhf"] Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.480106 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-sqvrc-config-92dhf"] Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.593695 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-sqvrc-config-6xgp8"] Jan 30 13:23:57 crc kubenswrapper[5039]: E0130 13:23:57.593992 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="096dbf05-3d5b-45e8-8087-edefd10c1ea0" containerName="ovn-config" Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.594004 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="096dbf05-3d5b-45e8-8087-edefd10c1ea0" containerName="ovn-config" Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.594204 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="096dbf05-3d5b-45e8-8087-edefd10c1ea0" containerName="ovn-config" Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.594674 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-sqvrc-config-6xgp8" Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.596838 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.617412 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sqvrc-config-6xgp8"] Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.680902 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f4367f73-b9d4-4351-b1a2-94506c105b9d-var-log-ovn\") pod \"ovn-controller-sqvrc-config-6xgp8\" (UID: \"f4367f73-b9d4-4351-b1a2-94506c105b9d\") " pod="openstack/ovn-controller-sqvrc-config-6xgp8" Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.680946 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4367f73-b9d4-4351-b1a2-94506c105b9d-scripts\") pod \"ovn-controller-sqvrc-config-6xgp8\" (UID: \"f4367f73-b9d4-4351-b1a2-94506c105b9d\") " pod="openstack/ovn-controller-sqvrc-config-6xgp8" Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.680998 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f4367f73-b9d4-4351-b1a2-94506c105b9d-var-run-ovn\") pod \"ovn-controller-sqvrc-config-6xgp8\" (UID: \"f4367f73-b9d4-4351-b1a2-94506c105b9d\") " pod="openstack/ovn-controller-sqvrc-config-6xgp8" Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.681080 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xscgb\" (UniqueName: \"kubernetes.io/projected/f4367f73-b9d4-4351-b1a2-94506c105b9d-kube-api-access-xscgb\") pod \"ovn-controller-sqvrc-config-6xgp8\" (UID: \"f4367f73-b9d4-4351-b1a2-94506c105b9d\") " pod="openstack/ovn-controller-sqvrc-config-6xgp8" Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.681154 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f4367f73-b9d4-4351-b1a2-94506c105b9d-var-run\") pod \"ovn-controller-sqvrc-config-6xgp8\" (UID: \"f4367f73-b9d4-4351-b1a2-94506c105b9d\") " pod="openstack/ovn-controller-sqvrc-config-6xgp8" Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.681248 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f4367f73-b9d4-4351-b1a2-94506c105b9d-additional-scripts\") pod \"ovn-controller-sqvrc-config-6xgp8\" (UID: \"f4367f73-b9d4-4351-b1a2-94506c105b9d\") " pod="openstack/ovn-controller-sqvrc-config-6xgp8" Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.783516 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f4367f73-b9d4-4351-b1a2-94506c105b9d-additional-scripts\") pod \"ovn-controller-sqvrc-config-6xgp8\" (UID: \"f4367f73-b9d4-4351-b1a2-94506c105b9d\") " pod="openstack/ovn-controller-sqvrc-config-6xgp8" Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.783589 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/f4367f73-b9d4-4351-b1a2-94506c105b9d-var-log-ovn\") pod \"ovn-controller-sqvrc-config-6xgp8\" (UID: \"f4367f73-b9d4-4351-b1a2-94506c105b9d\") " pod="openstack/ovn-controller-sqvrc-config-6xgp8" Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.783627 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4367f73-b9d4-4351-b1a2-94506c105b9d-scripts\") pod \"ovn-controller-sqvrc-config-6xgp8\" (UID: \"f4367f73-b9d4-4351-b1a2-94506c105b9d\") " pod="openstack/ovn-controller-sqvrc-config-6xgp8" Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.783713 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f4367f73-b9d4-4351-b1a2-94506c105b9d-var-run-ovn\") pod \"ovn-controller-sqvrc-config-6xgp8\" (UID: \"f4367f73-b9d4-4351-b1a2-94506c105b9d\") " pod="openstack/ovn-controller-sqvrc-config-6xgp8" Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.783761 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xscgb\" (UniqueName: \"kubernetes.io/projected/f4367f73-b9d4-4351-b1a2-94506c105b9d-kube-api-access-xscgb\") pod \"ovn-controller-sqvrc-config-6xgp8\" (UID: \"f4367f73-b9d4-4351-b1a2-94506c105b9d\") " pod="openstack/ovn-controller-sqvrc-config-6xgp8" Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.783804 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f4367f73-b9d4-4351-b1a2-94506c105b9d-var-run\") pod \"ovn-controller-sqvrc-config-6xgp8\" (UID: \"f4367f73-b9d4-4351-b1a2-94506c105b9d\") " pod="openstack/ovn-controller-sqvrc-config-6xgp8" Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.784986 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f4367f73-b9d4-4351-b1a2-94506c105b9d-var-run\") pod \"ovn-controller-sqvrc-config-6xgp8\" (UID: \"f4367f73-b9d4-4351-b1a2-94506c105b9d\") " pod="openstack/ovn-controller-sqvrc-config-6xgp8" Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.785810 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f4367f73-b9d4-4351-b1a2-94506c105b9d-additional-scripts\") pod \"ovn-controller-sqvrc-config-6xgp8\" (UID: \"f4367f73-b9d4-4351-b1a2-94506c105b9d\") " pod="openstack/ovn-controller-sqvrc-config-6xgp8" Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.785878 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f4367f73-b9d4-4351-b1a2-94506c105b9d-var-run-ovn\") pod \"ovn-controller-sqvrc-config-6xgp8\" (UID: \"f4367f73-b9d4-4351-b1a2-94506c105b9d\") " pod="openstack/ovn-controller-sqvrc-config-6xgp8" Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.786508 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f4367f73-b9d4-4351-b1a2-94506c105b9d-var-log-ovn\") pod \"ovn-controller-sqvrc-config-6xgp8\" (UID: \"f4367f73-b9d4-4351-b1a2-94506c105b9d\") " pod="openstack/ovn-controller-sqvrc-config-6xgp8" Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.787125 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/f4367f73-b9d4-4351-b1a2-94506c105b9d-scripts\") pod \"ovn-controller-sqvrc-config-6xgp8\" (UID: \"f4367f73-b9d4-4351-b1a2-94506c105b9d\") " pod="openstack/ovn-controller-sqvrc-config-6xgp8" Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.819315 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ada089a-5096-4658-829e-46ed96867c7e","Type":"ContainerStarted","Data":"b33766b9c3d3b33509c3333c9cea033b788bc6b8942e381a00e38516d0deaeb1"} Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.820075 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xscgb\" (UniqueName: \"kubernetes.io/projected/f4367f73-b9d4-4351-b1a2-94506c105b9d-kube-api-access-xscgb\") pod \"ovn-controller-sqvrc-config-6xgp8\" (UID: \"f4367f73-b9d4-4351-b1a2-94506c105b9d\") " pod="openstack/ovn-controller-sqvrc-config-6xgp8" Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.833971 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0596-account-create-update-nklv5" event={"ID":"34b4ac27-da03-43e8-874d-7feb1000f162","Type":"ContainerStarted","Data":"9656d71f48c907e42feabe49a92c24d49fde0d6527b5430d5b0b4e36054d1357"} Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.834060 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0596-account-create-update-nklv5" event={"ID":"34b4ac27-da03-43e8-874d-7feb1000f162","Type":"ContainerStarted","Data":"196fef9b55d65cb83faf7d91d941d259714b69712d802ca271c482b05b8b6a5f"} Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.838850 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-pptnb" event={"ID":"45c105ac-a6f3-40f4-8543-3d8fe84f6132","Type":"ContainerStarted","Data":"ec45b6e686c146265751fccdb2533ac5f9c69323d9a6d0f952916ad979f954d1"} Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.838903 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-pptnb" event={"ID":"45c105ac-a6f3-40f4-8543-3d8fe84f6132","Type":"ContainerStarted","Data":"db14bf207a6e7962eb23371f29f5f514ad518f30d7c0d5982951b06ec3290c99"} Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.845582 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-jtpkf" event={"ID":"f73f9b07-439c-418f-a04a-bc0aae17e21a","Type":"ContainerStarted","Data":"b600e0da8d676d463d065f84303ea3bc4057b43b28be76c6486575ff96cd840f"} Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.845631 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-jtpkf" event={"ID":"f73f9b07-439c-418f-a04a-bc0aae17e21a","Type":"ContainerStarted","Data":"991e9693a559e1f17e14c9f5904fbc71b43f13dc65a6f2c6f49e7e3c6d7f070f"} Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.849638 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-fae2-account-create-update-l2z9v" event={"ID":"55556e4d-2818-46de-b888-7a5be04f2a5c","Type":"ContainerStarted","Data":"760372fb0dd776c0b970e49721341a32c520b7964e97722a99089b6180a26b61"} Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.849867 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-fae2-account-create-update-l2z9v" event={"ID":"55556e4d-2818-46de-b888-7a5be04f2a5c","Type":"ContainerStarted","Data":"af90f75cc66fdefad9f444633aeb32b335d5eed977cb6258789d79a24b768d2c"} Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.852474 5039 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-6646-account-create-update-wpkcq" event={"ID":"20bee34b-7616-41d8-8761-12c09c8523e3","Type":"ContainerStarted","Data":"9dcd161304273d4dfafad84256c67d3029ecf6ea591168694333ca66e9319134"} Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.852608 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-6646-account-create-update-wpkcq" event={"ID":"20bee34b-7616-41d8-8761-12c09c8523e3","Type":"ContainerStarted","Data":"1506b92fd294e12c19246adad7a3cb4aba89c57c3b2f38b1323bc693c784ee3c"} Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.855516 5039 generic.go:334] "Generic (PLEG): container finished" podID="7a51040a-32e7-43d3-8fd2-8ce22ac5dde6" containerID="4549098efcbcf7f3af0666631bb63d306fe12f91f33f6fbc0f2a3afe7da8326b" exitCode=0 Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.855762 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-8grpr" event={"ID":"7a51040a-32e7-43d3-8fd2-8ce22ac5dde6","Type":"ContainerDied","Data":"4549098efcbcf7f3af0666631bb63d306fe12f91f33f6fbc0f2a3afe7da8326b"} Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.857380 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-rdj8j" event={"ID":"d14a598e-e058-4b9d-8d57-6f0db418de2c","Type":"ContainerStarted","Data":"7bc00ec74b2da9d8989c764ea627356c97f0f1ae07990bce5f0fc88f4dd44e4a"} Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.887898 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=22.392025593 podStartE2EDuration="37.887878082s" podCreationTimestamp="2026-01-30 13:23:20 +0000 UTC" firstStartedPulling="2026-01-30 13:23:38.244240807 +0000 UTC m=+1182.904922034" lastFinishedPulling="2026-01-30 13:23:53.740093256 +0000 UTC m=+1198.400774523" observedRunningTime="2026-01-30 13:23:57.867425164 +0000 UTC m=+1202.528106401" watchObservedRunningTime="2026-01-30 13:23:57.887878082 +0000 UTC m=+1202.548559299" Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.907369 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-sqvrc-config-6xgp8" Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.908033 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-0596-account-create-update-nklv5" podStartSLOduration=2.908002491 podStartE2EDuration="2.908002491s" podCreationTimestamp="2026-01-30 13:23:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:23:57.90347496 +0000 UTC m=+1202.564156187" watchObservedRunningTime="2026-01-30 13:23:57.908002491 +0000 UTC m=+1202.568683718" Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.927996 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-fae2-account-create-update-l2z9v" podStartSLOduration=1.927975496 podStartE2EDuration="1.927975496s" podCreationTimestamp="2026-01-30 13:23:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:23:57.919623252 +0000 UTC m=+1202.580304479" watchObservedRunningTime="2026-01-30 13:23:57.927975496 +0000 UTC m=+1202.588656723" Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.939605 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-pptnb" podStartSLOduration=2.9395857960000003 podStartE2EDuration="2.939585796s" podCreationTimestamp="2026-01-30 13:23:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:23:57.930555505 +0000 UTC m=+1202.591236732" watchObservedRunningTime="2026-01-30 13:23:57.939585796 +0000 UTC m=+1202.600267023" Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.953681 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-jtpkf" podStartSLOduration=1.9536604130000002 podStartE2EDuration="1.953660413s" podCreationTimestamp="2026-01-30 13:23:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:23:57.949323697 +0000 UTC m=+1202.610004924" watchObservedRunningTime="2026-01-30 13:23:57.953660413 +0000 UTC m=+1202.614341640" Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.978611 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-6646-account-create-update-wpkcq" podStartSLOduration=2.978590521 podStartE2EDuration="2.978590521s" podCreationTimestamp="2026-01-30 13:23:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:23:57.968778518 +0000 UTC m=+1202.629459745" watchObservedRunningTime="2026-01-30 13:23:57.978590521 +0000 UTC m=+1202.639271748" Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.107965 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="096dbf05-3d5b-45e8-8087-edefd10c1ea0" path="/var/lib/kubelet/pods/096dbf05-3d5b-45e8-8087-edefd10c1ea0/volumes" Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.194383 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-4xt4v"] Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.195801 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-4xt4v" Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.198377 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.220767 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-4xt4v"] Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.292765 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/26283c79-2aa3-464b-b265-4650000a980b-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-4xt4v\" (UID: \"26283c79-2aa3-464b-b265-4650000a980b\") " pod="openstack/dnsmasq-dns-5c79d794d7-4xt4v" Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.292816 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrwpz\" (UniqueName: \"kubernetes.io/projected/26283c79-2aa3-464b-b265-4650000a980b-kube-api-access-mrwpz\") pod \"dnsmasq-dns-5c79d794d7-4xt4v\" (UID: \"26283c79-2aa3-464b-b265-4650000a980b\") " pod="openstack/dnsmasq-dns-5c79d794d7-4xt4v" Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.292844 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26283c79-2aa3-464b-b265-4650000a980b-config\") pod \"dnsmasq-dns-5c79d794d7-4xt4v\" (UID: \"26283c79-2aa3-464b-b265-4650000a980b\") " pod="openstack/dnsmasq-dns-5c79d794d7-4xt4v" Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.292880 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26283c79-2aa3-464b-b265-4650000a980b-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-4xt4v\" (UID: \"26283c79-2aa3-464b-b265-4650000a980b\") " pod="openstack/dnsmasq-dns-5c79d794d7-4xt4v" Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.292933 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/26283c79-2aa3-464b-b265-4650000a980b-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-4xt4v\" (UID: \"26283c79-2aa3-464b-b265-4650000a980b\") " pod="openstack/dnsmasq-dns-5c79d794d7-4xt4v" Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.292983 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/26283c79-2aa3-464b-b265-4650000a980b-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-4xt4v\" (UID: \"26283c79-2aa3-464b-b265-4650000a980b\") " pod="openstack/dnsmasq-dns-5c79d794d7-4xt4v" Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.394238 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/26283c79-2aa3-464b-b265-4650000a980b-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-4xt4v\" (UID: \"26283c79-2aa3-464b-b265-4650000a980b\") " pod="openstack/dnsmasq-dns-5c79d794d7-4xt4v" Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.394324 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/26283c79-2aa3-464b-b265-4650000a980b-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-4xt4v\" (UID: 
\"26283c79-2aa3-464b-b265-4650000a980b\") " pod="openstack/dnsmasq-dns-5c79d794d7-4xt4v" Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.394369 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/26283c79-2aa3-464b-b265-4650000a980b-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-4xt4v\" (UID: \"26283c79-2aa3-464b-b265-4650000a980b\") " pod="openstack/dnsmasq-dns-5c79d794d7-4xt4v" Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.394389 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrwpz\" (UniqueName: \"kubernetes.io/projected/26283c79-2aa3-464b-b265-4650000a980b-kube-api-access-mrwpz\") pod \"dnsmasq-dns-5c79d794d7-4xt4v\" (UID: \"26283c79-2aa3-464b-b265-4650000a980b\") " pod="openstack/dnsmasq-dns-5c79d794d7-4xt4v" Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.394412 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26283c79-2aa3-464b-b265-4650000a980b-config\") pod \"dnsmasq-dns-5c79d794d7-4xt4v\" (UID: \"26283c79-2aa3-464b-b265-4650000a980b\") " pod="openstack/dnsmasq-dns-5c79d794d7-4xt4v" Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.394444 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26283c79-2aa3-464b-b265-4650000a980b-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-4xt4v\" (UID: \"26283c79-2aa3-464b-b265-4650000a980b\") " pod="openstack/dnsmasq-dns-5c79d794d7-4xt4v" Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.395220 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/26283c79-2aa3-464b-b265-4650000a980b-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-4xt4v\" (UID: \"26283c79-2aa3-464b-b265-4650000a980b\") " pod="openstack/dnsmasq-dns-5c79d794d7-4xt4v" Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.395446 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26283c79-2aa3-464b-b265-4650000a980b-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-4xt4v\" (UID: \"26283c79-2aa3-464b-b265-4650000a980b\") " pod="openstack/dnsmasq-dns-5c79d794d7-4xt4v" Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.395493 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/26283c79-2aa3-464b-b265-4650000a980b-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-4xt4v\" (UID: \"26283c79-2aa3-464b-b265-4650000a980b\") " pod="openstack/dnsmasq-dns-5c79d794d7-4xt4v" Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.395751 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26283c79-2aa3-464b-b265-4650000a980b-config\") pod \"dnsmasq-dns-5c79d794d7-4xt4v\" (UID: \"26283c79-2aa3-464b-b265-4650000a980b\") " pod="openstack/dnsmasq-dns-5c79d794d7-4xt4v" Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.396406 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/26283c79-2aa3-464b-b265-4650000a980b-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-4xt4v\" (UID: \"26283c79-2aa3-464b-b265-4650000a980b\") " pod="openstack/dnsmasq-dns-5c79d794d7-4xt4v" Jan 30 13:23:58 crc 
kubenswrapper[5039]: I0130 13:23:58.411468 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sqvrc-config-6xgp8"] Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.422926 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrwpz\" (UniqueName: \"kubernetes.io/projected/26283c79-2aa3-464b-b265-4650000a980b-kube-api-access-mrwpz\") pod \"dnsmasq-dns-5c79d794d7-4xt4v\" (UID: \"26283c79-2aa3-464b-b265-4650000a980b\") " pod="openstack/dnsmasq-dns-5c79d794d7-4xt4v" Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.520516 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-4xt4v" Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.868971 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sqvrc-config-6xgp8" event={"ID":"f4367f73-b9d4-4351-b1a2-94506c105b9d","Type":"ContainerStarted","Data":"4505d15d0f86e8e3a87500b8d5e16fa57aa802f4b277b7d3c25eee7a932f424e"} Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.870137 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sqvrc-config-6xgp8" event={"ID":"f4367f73-b9d4-4351-b1a2-94506c105b9d","Type":"ContainerStarted","Data":"6e7b5cc7b129211de80223e71bea2ac39fbc063307f07bd076ea15166f1d87f6"} Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.871001 5039 generic.go:334] "Generic (PLEG): container finished" podID="f73f9b07-439c-418f-a04a-bc0aae17e21a" containerID="b600e0da8d676d463d065f84303ea3bc4057b43b28be76c6486575ff96cd840f" exitCode=0 Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.871085 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-jtpkf" event={"ID":"f73f9b07-439c-418f-a04a-bc0aae17e21a","Type":"ContainerDied","Data":"b600e0da8d676d463d065f84303ea3bc4057b43b28be76c6486575ff96cd840f"} Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.873042 5039 generic.go:334] "Generic (PLEG): container finished" podID="55556e4d-2818-46de-b888-7a5be04f2a5c" containerID="760372fb0dd776c0b970e49721341a32c520b7964e97722a99089b6180a26b61" exitCode=0 Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.873091 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-fae2-account-create-update-l2z9v" event={"ID":"55556e4d-2818-46de-b888-7a5be04f2a5c","Type":"ContainerDied","Data":"760372fb0dd776c0b970e49721341a32c520b7964e97722a99089b6180a26b61"} Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.874428 5039 generic.go:334] "Generic (PLEG): container finished" podID="20bee34b-7616-41d8-8761-12c09c8523e3" containerID="9dcd161304273d4dfafad84256c67d3029ecf6ea591168694333ca66e9319134" exitCode=0 Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.874552 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-6646-account-create-update-wpkcq" event={"ID":"20bee34b-7616-41d8-8761-12c09c8523e3","Type":"ContainerDied","Data":"9dcd161304273d4dfafad84256c67d3029ecf6ea591168694333ca66e9319134"} Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.882317 5039 generic.go:334] "Generic (PLEG): container finished" podID="34b4ac27-da03-43e8-874d-7feb1000f162" containerID="9656d71f48c907e42feabe49a92c24d49fde0d6527b5430d5b0b4e36054d1357" exitCode=0 Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.882403 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0596-account-create-update-nklv5" 
event={"ID":"34b4ac27-da03-43e8-874d-7feb1000f162","Type":"ContainerDied","Data":"9656d71f48c907e42feabe49a92c24d49fde0d6527b5430d5b0b4e36054d1357"} Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.890278 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-sqvrc-config-6xgp8" podStartSLOduration=1.890260258 podStartE2EDuration="1.890260258s" podCreationTimestamp="2026-01-30 13:23:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:23:58.888303816 +0000 UTC m=+1203.548985043" watchObservedRunningTime="2026-01-30 13:23:58.890260258 +0000 UTC m=+1203.550941485" Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.891383 5039 generic.go:334] "Generic (PLEG): container finished" podID="45c105ac-a6f3-40f4-8543-3d8fe84f6132" containerID="ec45b6e686c146265751fccdb2533ac5f9c69323d9a6d0f952916ad979f954d1" exitCode=0 Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.891565 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-pptnb" event={"ID":"45c105ac-a6f3-40f4-8543-3d8fe84f6132","Type":"ContainerDied","Data":"ec45b6e686c146265751fccdb2533ac5f9c69323d9a6d0f952916ad979f954d1"} Jan 30 13:23:59 crc kubenswrapper[5039]: I0130 13:23:59.069029 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-4xt4v"] Jan 30 13:23:59 crc kubenswrapper[5039]: W0130 13:23:59.088140 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26283c79_2aa3_464b_b265_4650000a980b.slice/crio-b13a5bcb0d67ea65ba2705bd2b1b297c28299fdf3b239f7adcfa0fb14714f699 WatchSource:0}: Error finding container b13a5bcb0d67ea65ba2705bd2b1b297c28299fdf3b239f7adcfa0fb14714f699: Status 404 returned error can't find the container with id b13a5bcb0d67ea65ba2705bd2b1b297c28299fdf3b239f7adcfa0fb14714f699 Jan 30 13:23:59 crc kubenswrapper[5039]: I0130 13:23:59.287302 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-8grpr" Jan 30 13:23:59 crc kubenswrapper[5039]: I0130 13:23:59.428135 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tlwx9\" (UniqueName: \"kubernetes.io/projected/7a51040a-32e7-43d3-8fd2-8ce22ac5dde6-kube-api-access-tlwx9\") pod \"7a51040a-32e7-43d3-8fd2-8ce22ac5dde6\" (UID: \"7a51040a-32e7-43d3-8fd2-8ce22ac5dde6\") " Jan 30 13:23:59 crc kubenswrapper[5039]: I0130 13:23:59.429387 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a51040a-32e7-43d3-8fd2-8ce22ac5dde6-operator-scripts\") pod \"7a51040a-32e7-43d3-8fd2-8ce22ac5dde6\" (UID: \"7a51040a-32e7-43d3-8fd2-8ce22ac5dde6\") " Jan 30 13:23:59 crc kubenswrapper[5039]: I0130 13:23:59.434416 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a51040a-32e7-43d3-8fd2-8ce22ac5dde6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7a51040a-32e7-43d3-8fd2-8ce22ac5dde6" (UID: "7a51040a-32e7-43d3-8fd2-8ce22ac5dde6"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:23:59 crc kubenswrapper[5039]: I0130 13:23:59.434674 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a51040a-32e7-43d3-8fd2-8ce22ac5dde6-kube-api-access-tlwx9" (OuterVolumeSpecName: "kube-api-access-tlwx9") pod "7a51040a-32e7-43d3-8fd2-8ce22ac5dde6" (UID: "7a51040a-32e7-43d3-8fd2-8ce22ac5dde6"). InnerVolumeSpecName "kube-api-access-tlwx9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:23:59 crc kubenswrapper[5039]: I0130 13:23:59.443098 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tlwx9\" (UniqueName: \"kubernetes.io/projected/7a51040a-32e7-43d3-8fd2-8ce22ac5dde6-kube-api-access-tlwx9\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:59 crc kubenswrapper[5039]: I0130 13:23:59.443139 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a51040a-32e7-43d3-8fd2-8ce22ac5dde6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:59 crc kubenswrapper[5039]: I0130 13:23:59.908767 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-8grpr" Jan 30 13:23:59 crc kubenswrapper[5039]: I0130 13:23:59.908896 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-8grpr" event={"ID":"7a51040a-32e7-43d3-8fd2-8ce22ac5dde6","Type":"ContainerDied","Data":"a1a0af7b2948d9726ce66e41a9d8fc0969ba019e1e8a009d0e21e9e6111aae0b"} Jan 30 13:23:59 crc kubenswrapper[5039]: I0130 13:23:59.909196 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1a0af7b2948d9726ce66e41a9d8fc0969ba019e1e8a009d0e21e9e6111aae0b" Jan 30 13:23:59 crc kubenswrapper[5039]: I0130 13:23:59.916666 5039 generic.go:334] "Generic (PLEG): container finished" podID="f4367f73-b9d4-4351-b1a2-94506c105b9d" containerID="4505d15d0f86e8e3a87500b8d5e16fa57aa802f4b277b7d3c25eee7a932f424e" exitCode=0 Jan 30 13:23:59 crc kubenswrapper[5039]: I0130 13:23:59.916755 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sqvrc-config-6xgp8" event={"ID":"f4367f73-b9d4-4351-b1a2-94506c105b9d","Type":"ContainerDied","Data":"4505d15d0f86e8e3a87500b8d5e16fa57aa802f4b277b7d3c25eee7a932f424e"} Jan 30 13:23:59 crc kubenswrapper[5039]: I0130 13:23:59.927478 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-4xt4v" event={"ID":"26283c79-2aa3-464b-b265-4650000a980b","Type":"ContainerDied","Data":"2694278cf2f8b68309162de76c7213ac6e0d886bf52df1adfb52a6740ff864a6"} Jan 30 13:23:59 crc kubenswrapper[5039]: I0130 13:23:59.927591 5039 generic.go:334] "Generic (PLEG): container finished" podID="26283c79-2aa3-464b-b265-4650000a980b" containerID="2694278cf2f8b68309162de76c7213ac6e0d886bf52df1adfb52a6740ff864a6" exitCode=0 Jan 30 13:23:59 crc kubenswrapper[5039]: I0130 13:23:59.927679 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-4xt4v" event={"ID":"26283c79-2aa3-464b-b265-4650000a980b","Type":"ContainerStarted","Data":"b13a5bcb0d67ea65ba2705bd2b1b297c28299fdf3b239f7adcfa0fb14714f699"} Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.786151 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-fae2-account-create-update-l2z9v" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.818689 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-jtpkf" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.857002 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-pptnb" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.885468 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-0596-account-create-update-nklv5" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.898630 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sqvrc-config-6xgp8" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.910456 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-6646-account-create-update-wpkcq" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.936473 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f4367f73-b9d4-4351-b1a2-94506c105b9d-var-run-ovn\") pod \"f4367f73-b9d4-4351-b1a2-94506c105b9d\" (UID: \"f4367f73-b9d4-4351-b1a2-94506c105b9d\") " Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.936576 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tp55c\" (UniqueName: \"kubernetes.io/projected/f73f9b07-439c-418f-a04a-bc0aae17e21a-kube-api-access-tp55c\") pod \"f73f9b07-439c-418f-a04a-bc0aae17e21a\" (UID: \"f73f9b07-439c-418f-a04a-bc0aae17e21a\") " Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.936615 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45c105ac-a6f3-40f4-8543-3d8fe84f6132-operator-scripts\") pod \"45c105ac-a6f3-40f4-8543-3d8fe84f6132\" (UID: \"45c105ac-a6f3-40f4-8543-3d8fe84f6132\") " Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.936632 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xscgb\" (UniqueName: \"kubernetes.io/projected/f4367f73-b9d4-4351-b1a2-94506c105b9d-kube-api-access-xscgb\") pod \"f4367f73-b9d4-4351-b1a2-94506c105b9d\" (UID: \"f4367f73-b9d4-4351-b1a2-94506c105b9d\") " Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.936618 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4367f73-b9d4-4351-b1a2-94506c105b9d-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "f4367f73-b9d4-4351-b1a2-94506c105b9d" (UID: "f4367f73-b9d4-4351-b1a2-94506c105b9d"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.936648 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4367f73-b9d4-4351-b1a2-94506c105b9d-scripts\") pod \"f4367f73-b9d4-4351-b1a2-94506c105b9d\" (UID: \"f4367f73-b9d4-4351-b1a2-94506c105b9d\") " Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.936670 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34b4ac27-da03-43e8-874d-7feb1000f162-operator-scripts\") pod \"34b4ac27-da03-43e8-874d-7feb1000f162\" (UID: \"34b4ac27-da03-43e8-874d-7feb1000f162\") " Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.936701 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20bee34b-7616-41d8-8761-12c09c8523e3-operator-scripts\") pod \"20bee34b-7616-41d8-8761-12c09c8523e3\" (UID: \"20bee34b-7616-41d8-8761-12c09c8523e3\") " Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.936716 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55556e4d-2818-46de-b888-7a5be04f2a5c-operator-scripts\") pod \"55556e4d-2818-46de-b888-7a5be04f2a5c\" (UID: \"55556e4d-2818-46de-b888-7a5be04f2a5c\") " Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.936750 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khfcv\" (UniqueName: \"kubernetes.io/projected/34b4ac27-da03-43e8-874d-7feb1000f162-kube-api-access-khfcv\") pod \"34b4ac27-da03-43e8-874d-7feb1000f162\" (UID: \"34b4ac27-da03-43e8-874d-7feb1000f162\") " Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.936774 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f4367f73-b9d4-4351-b1a2-94506c105b9d-additional-scripts\") pod \"f4367f73-b9d4-4351-b1a2-94506c105b9d\" (UID: \"f4367f73-b9d4-4351-b1a2-94506c105b9d\") " Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.936804 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f73f9b07-439c-418f-a04a-bc0aae17e21a-operator-scripts\") pod \"f73f9b07-439c-418f-a04a-bc0aae17e21a\" (UID: \"f73f9b07-439c-418f-a04a-bc0aae17e21a\") " Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.936825 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f4367f73-b9d4-4351-b1a2-94506c105b9d-var-run\") pod \"f4367f73-b9d4-4351-b1a2-94506c105b9d\" (UID: \"f4367f73-b9d4-4351-b1a2-94506c105b9d\") " Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.936841 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f4367f73-b9d4-4351-b1a2-94506c105b9d-var-log-ovn\") pod \"f4367f73-b9d4-4351-b1a2-94506c105b9d\" (UID: \"f4367f73-b9d4-4351-b1a2-94506c105b9d\") " Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.936861 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wb7bw\" (UniqueName: \"kubernetes.io/projected/45c105ac-a6f3-40f4-8543-3d8fe84f6132-kube-api-access-wb7bw\") pod 
\"45c105ac-a6f3-40f4-8543-3d8fe84f6132\" (UID: \"45c105ac-a6f3-40f4-8543-3d8fe84f6132\") " Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.936911 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wptkm\" (UniqueName: \"kubernetes.io/projected/20bee34b-7616-41d8-8761-12c09c8523e3-kube-api-access-wptkm\") pod \"20bee34b-7616-41d8-8761-12c09c8523e3\" (UID: \"20bee34b-7616-41d8-8761-12c09c8523e3\") " Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.936960 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fbjl\" (UniqueName: \"kubernetes.io/projected/55556e4d-2818-46de-b888-7a5be04f2a5c-kube-api-access-4fbjl\") pod \"55556e4d-2818-46de-b888-7a5be04f2a5c\" (UID: \"55556e4d-2818-46de-b888-7a5be04f2a5c\") " Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.937117 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4367f73-b9d4-4351-b1a2-94506c105b9d-var-run" (OuterVolumeSpecName: "var-run") pod "f4367f73-b9d4-4351-b1a2-94506c105b9d" (UID: "f4367f73-b9d4-4351-b1a2-94506c105b9d"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.937505 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34b4ac27-da03-43e8-874d-7feb1000f162-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "34b4ac27-da03-43e8-874d-7feb1000f162" (UID: "34b4ac27-da03-43e8-874d-7feb1000f162"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.937729 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20bee34b-7616-41d8-8761-12c09c8523e3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "20bee34b-7616-41d8-8761-12c09c8523e3" (UID: "20bee34b-7616-41d8-8761-12c09c8523e3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.937887 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f73f9b07-439c-418f-a04a-bc0aae17e21a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f73f9b07-439c-418f-a04a-bc0aae17e21a" (UID: "f73f9b07-439c-418f-a04a-bc0aae17e21a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.937914 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4367f73-b9d4-4351-b1a2-94506c105b9d-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "f4367f73-b9d4-4351-b1a2-94506c105b9d" (UID: "f4367f73-b9d4-4351-b1a2-94506c105b9d"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.938410 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4367f73-b9d4-4351-b1a2-94506c105b9d-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "f4367f73-b9d4-4351-b1a2-94506c105b9d" (UID: "f4367f73-b9d4-4351-b1a2-94506c105b9d"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.939227 5039 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f4367f73-b9d4-4351-b1a2-94506c105b9d-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.939222 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45c105ac-a6f3-40f4-8543-3d8fe84f6132-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "45c105ac-a6f3-40f4-8543-3d8fe84f6132" (UID: "45c105ac-a6f3-40f4-8543-3d8fe84f6132"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.939244 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34b4ac27-da03-43e8-874d-7feb1000f162-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.939302 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20bee34b-7616-41d8-8761-12c09c8523e3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.939318 5039 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f4367f73-b9d4-4351-b1a2-94506c105b9d-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.939332 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f73f9b07-439c-418f-a04a-bc0aae17e21a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.939344 5039 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f4367f73-b9d4-4351-b1a2-94506c105b9d-var-run\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.939357 5039 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f4367f73-b9d4-4351-b1a2-94506c105b9d-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.939502 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4367f73-b9d4-4351-b1a2-94506c105b9d-scripts" (OuterVolumeSpecName: "scripts") pod "f4367f73-b9d4-4351-b1a2-94506c105b9d" (UID: "f4367f73-b9d4-4351-b1a2-94506c105b9d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.939938 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55556e4d-2818-46de-b888-7a5be04f2a5c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "55556e4d-2818-46de-b888-7a5be04f2a5c" (UID: "55556e4d-2818-46de-b888-7a5be04f2a5c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.944754 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20bee34b-7616-41d8-8761-12c09c8523e3-kube-api-access-wptkm" (OuterVolumeSpecName: "kube-api-access-wptkm") pod "20bee34b-7616-41d8-8761-12c09c8523e3" (UID: "20bee34b-7616-41d8-8761-12c09c8523e3"). InnerVolumeSpecName "kube-api-access-wptkm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.947237 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34b4ac27-da03-43e8-874d-7feb1000f162-kube-api-access-khfcv" (OuterVolumeSpecName: "kube-api-access-khfcv") pod "34b4ac27-da03-43e8-874d-7feb1000f162" (UID: "34b4ac27-da03-43e8-874d-7feb1000f162"). InnerVolumeSpecName "kube-api-access-khfcv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.948848 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f73f9b07-439c-418f-a04a-bc0aae17e21a-kube-api-access-tp55c" (OuterVolumeSpecName: "kube-api-access-tp55c") pod "f73f9b07-439c-418f-a04a-bc0aae17e21a" (UID: "f73f9b07-439c-418f-a04a-bc0aae17e21a"). InnerVolumeSpecName "kube-api-access-tp55c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.949428 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4367f73-b9d4-4351-b1a2-94506c105b9d-kube-api-access-xscgb" (OuterVolumeSpecName: "kube-api-access-xscgb") pod "f4367f73-b9d4-4351-b1a2-94506c105b9d" (UID: "f4367f73-b9d4-4351-b1a2-94506c105b9d"). InnerVolumeSpecName "kube-api-access-xscgb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.951865 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45c105ac-a6f3-40f4-8543-3d8fe84f6132-kube-api-access-wb7bw" (OuterVolumeSpecName: "kube-api-access-wb7bw") pod "45c105ac-a6f3-40f4-8543-3d8fe84f6132" (UID: "45c105ac-a6f3-40f4-8543-3d8fe84f6132"). InnerVolumeSpecName "kube-api-access-wb7bw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.961721 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-jtpkf" event={"ID":"f73f9b07-439c-418f-a04a-bc0aae17e21a","Type":"ContainerDied","Data":"991e9693a559e1f17e14c9f5904fbc71b43f13dc65a6f2c6f49e7e3c6d7f070f"} Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.961763 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="991e9693a559e1f17e14c9f5904fbc71b43f13dc65a6f2c6f49e7e3c6d7f070f" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.961828 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-jtpkf" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.966324 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55556e4d-2818-46de-b888-7a5be04f2a5c-kube-api-access-4fbjl" (OuterVolumeSpecName: "kube-api-access-4fbjl") pod "55556e4d-2818-46de-b888-7a5be04f2a5c" (UID: "55556e4d-2818-46de-b888-7a5be04f2a5c"). InnerVolumeSpecName "kube-api-access-4fbjl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.971710 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-rdj8j" event={"ID":"d14a598e-e058-4b9d-8d57-6f0db418de2c","Type":"ContainerStarted","Data":"eec6e364645d2009b2be114e5e6bd46239ea6c0c9d3d3bfbaeba8ccb6b98b5f1"} Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.975043 5039 generic.go:334] "Generic (PLEG): container finished" podID="3cb443d1-8938-47af-ab3b-1912d9e72f4f" containerID="bbdaeb50bee12a55e0d3d2183b29f6b8fcef441a7bb1acf8b322cc542a66d9bd" exitCode=0 Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.975117 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-hpk2s" event={"ID":"3cb443d1-8938-47af-ab3b-1912d9e72f4f","Type":"ContainerDied","Data":"bbdaeb50bee12a55e0d3d2183b29f6b8fcef441a7bb1acf8b322cc542a66d9bd"} Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.977433 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-pptnb" event={"ID":"45c105ac-a6f3-40f4-8543-3d8fe84f6132","Type":"ContainerDied","Data":"db14bf207a6e7962eb23371f29f5f514ad518f30d7c0d5982951b06ec3290c99"} Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.977460 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db14bf207a6e7962eb23371f29f5f514ad518f30d7c0d5982951b06ec3290c99" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.977514 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-pptnb" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.993048 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sqvrc-config-6xgp8" event={"ID":"f4367f73-b9d4-4351-b1a2-94506c105b9d","Type":"ContainerDied","Data":"6e7b5cc7b129211de80223e71bea2ac39fbc063307f07bd076ea15166f1d87f6"} Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.993092 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e7b5cc7b129211de80223e71bea2ac39fbc063307f07bd076ea15166f1d87f6" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.993163 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-sqvrc-config-6xgp8" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.995838 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-rdj8j" podStartSLOduration=1.766245936 podStartE2EDuration="7.995816636s" podCreationTimestamp="2026-01-30 13:23:56 +0000 UTC" firstStartedPulling="2026-01-30 13:23:57.333095849 +0000 UTC m=+1201.993777076" lastFinishedPulling="2026-01-30 13:24:03.562666539 +0000 UTC m=+1208.223347776" observedRunningTime="2026-01-30 13:24:03.989763694 +0000 UTC m=+1208.650444921" watchObservedRunningTime="2026-01-30 13:24:03.995816636 +0000 UTC m=+1208.656497853" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.998316 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-6646-account-create-update-wpkcq" event={"ID":"20bee34b-7616-41d8-8761-12c09c8523e3","Type":"ContainerDied","Data":"1506b92fd294e12c19246adad7a3cb4aba89c57c3b2f38b1323bc693c784ee3c"} Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.998373 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1506b92fd294e12c19246adad7a3cb4aba89c57c3b2f38b1323bc693c784ee3c" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.998443 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-6646-account-create-update-wpkcq" Jan 30 13:24:04 crc kubenswrapper[5039]: I0130 13:24:04.001602 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-fae2-account-create-update-l2z9v" event={"ID":"55556e4d-2818-46de-b888-7a5be04f2a5c","Type":"ContainerDied","Data":"af90f75cc66fdefad9f444633aeb32b335d5eed977cb6258789d79a24b768d2c"} Jan 30 13:24:04 crc kubenswrapper[5039]: I0130 13:24:04.001646 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af90f75cc66fdefad9f444633aeb32b335d5eed977cb6258789d79a24b768d2c" Jan 30 13:24:04 crc kubenswrapper[5039]: I0130 13:24:04.001712 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-fae2-account-create-update-l2z9v" Jan 30 13:24:04 crc kubenswrapper[5039]: I0130 13:24:04.009867 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0596-account-create-update-nklv5" event={"ID":"34b4ac27-da03-43e8-874d-7feb1000f162","Type":"ContainerDied","Data":"196fef9b55d65cb83faf7d91d941d259714b69712d802ca271c482b05b8b6a5f"} Jan 30 13:24:04 crc kubenswrapper[5039]: I0130 13:24:04.009906 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="196fef9b55d65cb83faf7d91d941d259714b69712d802ca271c482b05b8b6a5f" Jan 30 13:24:04 crc kubenswrapper[5039]: I0130 13:24:04.009959 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-0596-account-create-update-nklv5" Jan 30 13:24:04 crc kubenswrapper[5039]: I0130 13:24:04.014451 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-4xt4v" event={"ID":"26283c79-2aa3-464b-b265-4650000a980b","Type":"ContainerStarted","Data":"f9fae8645afdaf19bf2c77e5e17d0bdc7ec95217ce16ec61333dbd968d341744"} Jan 30 13:24:04 crc kubenswrapper[5039]: I0130 13:24:04.014718 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c79d794d7-4xt4v" Jan 30 13:24:04 crc kubenswrapper[5039]: I0130 13:24:04.039870 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c79d794d7-4xt4v" podStartSLOduration=6.039854775 podStartE2EDuration="6.039854775s" podCreationTimestamp="2026-01-30 13:23:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:24:04.03518996 +0000 UTC m=+1208.695871187" watchObservedRunningTime="2026-01-30 13:24:04.039854775 +0000 UTC m=+1208.700536002" Jan 30 13:24:04 crc kubenswrapper[5039]: I0130 13:24:04.040349 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fbjl\" (UniqueName: \"kubernetes.io/projected/55556e4d-2818-46de-b888-7a5be04f2a5c-kube-api-access-4fbjl\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:04 crc kubenswrapper[5039]: I0130 13:24:04.040377 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tp55c\" (UniqueName: \"kubernetes.io/projected/f73f9b07-439c-418f-a04a-bc0aae17e21a-kube-api-access-tp55c\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:04 crc kubenswrapper[5039]: I0130 13:24:04.040390 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45c105ac-a6f3-40f4-8543-3d8fe84f6132-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:04 crc kubenswrapper[5039]: I0130 13:24:04.040402 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xscgb\" (UniqueName: \"kubernetes.io/projected/f4367f73-b9d4-4351-b1a2-94506c105b9d-kube-api-access-xscgb\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:04 crc kubenswrapper[5039]: I0130 13:24:04.040415 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4367f73-b9d4-4351-b1a2-94506c105b9d-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:04 crc kubenswrapper[5039]: I0130 13:24:04.040426 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55556e4d-2818-46de-b888-7a5be04f2a5c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:04 crc kubenswrapper[5039]: I0130 13:24:04.040437 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khfcv\" (UniqueName: \"kubernetes.io/projected/34b4ac27-da03-43e8-874d-7feb1000f162-kube-api-access-khfcv\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:04 crc kubenswrapper[5039]: I0130 13:24:04.040450 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wb7bw\" (UniqueName: \"kubernetes.io/projected/45c105ac-a6f3-40f4-8543-3d8fe84f6132-kube-api-access-wb7bw\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:04 crc kubenswrapper[5039]: I0130 13:24:04.040461 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wptkm\" (UniqueName: 
\"kubernetes.io/projected/20bee34b-7616-41d8-8761-12c09c8523e3-kube-api-access-wptkm\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:05 crc kubenswrapper[5039]: I0130 13:24:05.011144 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-sqvrc-config-6xgp8"] Jan 30 13:24:05 crc kubenswrapper[5039]: I0130 13:24:05.017923 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-sqvrc-config-6xgp8"] Jan 30 13:24:05 crc kubenswrapper[5039]: I0130 13:24:05.397648 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-hpk2s" Jan 30 13:24:05 crc kubenswrapper[5039]: I0130 13:24:05.562858 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cb443d1-8938-47af-ab3b-1912d9e72f4f-config-data\") pod \"3cb443d1-8938-47af-ab3b-1912d9e72f4f\" (UID: \"3cb443d1-8938-47af-ab3b-1912d9e72f4f\") " Jan 30 13:24:05 crc kubenswrapper[5039]: I0130 13:24:05.562959 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3cb443d1-8938-47af-ab3b-1912d9e72f4f-db-sync-config-data\") pod \"3cb443d1-8938-47af-ab3b-1912d9e72f4f\" (UID: \"3cb443d1-8938-47af-ab3b-1912d9e72f4f\") " Jan 30 13:24:05 crc kubenswrapper[5039]: I0130 13:24:05.563050 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cb443d1-8938-47af-ab3b-1912d9e72f4f-combined-ca-bundle\") pod \"3cb443d1-8938-47af-ab3b-1912d9e72f4f\" (UID: \"3cb443d1-8938-47af-ab3b-1912d9e72f4f\") " Jan 30 13:24:05 crc kubenswrapper[5039]: I0130 13:24:05.563075 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xtff\" (UniqueName: \"kubernetes.io/projected/3cb443d1-8938-47af-ab3b-1912d9e72f4f-kube-api-access-9xtff\") pod \"3cb443d1-8938-47af-ab3b-1912d9e72f4f\" (UID: \"3cb443d1-8938-47af-ab3b-1912d9e72f4f\") " Jan 30 13:24:05 crc kubenswrapper[5039]: I0130 13:24:05.569885 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cb443d1-8938-47af-ab3b-1912d9e72f4f-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "3cb443d1-8938-47af-ab3b-1912d9e72f4f" (UID: "3cb443d1-8938-47af-ab3b-1912d9e72f4f"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:24:05 crc kubenswrapper[5039]: I0130 13:24:05.571983 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb443d1-8938-47af-ab3b-1912d9e72f4f-kube-api-access-9xtff" (OuterVolumeSpecName: "kube-api-access-9xtff") pod "3cb443d1-8938-47af-ab3b-1912d9e72f4f" (UID: "3cb443d1-8938-47af-ab3b-1912d9e72f4f"). InnerVolumeSpecName "kube-api-access-9xtff". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:24:05 crc kubenswrapper[5039]: I0130 13:24:05.593730 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cb443d1-8938-47af-ab3b-1912d9e72f4f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3cb443d1-8938-47af-ab3b-1912d9e72f4f" (UID: "3cb443d1-8938-47af-ab3b-1912d9e72f4f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:24:05 crc kubenswrapper[5039]: I0130 13:24:05.640950 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cb443d1-8938-47af-ab3b-1912d9e72f4f-config-data" (OuterVolumeSpecName: "config-data") pod "3cb443d1-8938-47af-ab3b-1912d9e72f4f" (UID: "3cb443d1-8938-47af-ab3b-1912d9e72f4f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:24:05 crc kubenswrapper[5039]: I0130 13:24:05.665466 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cb443d1-8938-47af-ab3b-1912d9e72f4f-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:05 crc kubenswrapper[5039]: I0130 13:24:05.665520 5039 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3cb443d1-8938-47af-ab3b-1912d9e72f4f-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:05 crc kubenswrapper[5039]: I0130 13:24:05.665545 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cb443d1-8938-47af-ab3b-1912d9e72f4f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:05 crc kubenswrapper[5039]: I0130 13:24:05.665563 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xtff\" (UniqueName: \"kubernetes.io/projected/3cb443d1-8938-47af-ab3b-1912d9e72f4f-kube-api-access-9xtff\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.035430 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-hpk2s" event={"ID":"3cb443d1-8938-47af-ab3b-1912d9e72f4f","Type":"ContainerDied","Data":"f249a17cf52c2a4dd7cc7ecc55de1c2586757e11717a969a8305e2a930a6306b"} Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.036706 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f249a17cf52c2a4dd7cc7ecc55de1c2586757e11717a969a8305e2a930a6306b" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.035531 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-hpk2s" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.128278 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4367f73-b9d4-4351-b1a2-94506c105b9d" path="/var/lib/kubelet/pods/f4367f73-b9d4-4351-b1a2-94506c105b9d/volumes" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.543278 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-4xt4v"] Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.543495 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c79d794d7-4xt4v" podUID="26283c79-2aa3-464b-b265-4650000a980b" containerName="dnsmasq-dns" containerID="cri-o://f9fae8645afdaf19bf2c77e5e17d0bdc7ec95217ce16ec61333dbd968d341744" gracePeriod=10 Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.611275 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-ppdb4"] Jan 30 13:24:06 crc kubenswrapper[5039]: E0130 13:24:06.611949 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f73f9b07-439c-418f-a04a-bc0aae17e21a" containerName="mariadb-database-create" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.611962 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="f73f9b07-439c-418f-a04a-bc0aae17e21a" containerName="mariadb-database-create" Jan 30 13:24:06 crc kubenswrapper[5039]: E0130 13:24:06.611974 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45c105ac-a6f3-40f4-8543-3d8fe84f6132" containerName="mariadb-database-create" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.611979 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="45c105ac-a6f3-40f4-8543-3d8fe84f6132" containerName="mariadb-database-create" Jan 30 13:24:06 crc kubenswrapper[5039]: E0130 13:24:06.611989 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20bee34b-7616-41d8-8761-12c09c8523e3" containerName="mariadb-account-create-update" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.611996 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="20bee34b-7616-41d8-8761-12c09c8523e3" containerName="mariadb-account-create-update" Jan 30 13:24:06 crc kubenswrapper[5039]: E0130 13:24:06.612025 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cb443d1-8938-47af-ab3b-1912d9e72f4f" containerName="glance-db-sync" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.612031 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cb443d1-8938-47af-ab3b-1912d9e72f4f" containerName="glance-db-sync" Jan 30 13:24:06 crc kubenswrapper[5039]: E0130 13:24:06.612046 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a51040a-32e7-43d3-8fd2-8ce22ac5dde6" containerName="mariadb-database-create" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.612051 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a51040a-32e7-43d3-8fd2-8ce22ac5dde6" containerName="mariadb-database-create" Jan 30 13:24:06 crc kubenswrapper[5039]: E0130 13:24:06.612069 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34b4ac27-da03-43e8-874d-7feb1000f162" containerName="mariadb-account-create-update" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.612075 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="34b4ac27-da03-43e8-874d-7feb1000f162" containerName="mariadb-account-create-update" Jan 30 13:24:06 crc kubenswrapper[5039]: E0130 13:24:06.612086 5039 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4367f73-b9d4-4351-b1a2-94506c105b9d" containerName="ovn-config" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.612106 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4367f73-b9d4-4351-b1a2-94506c105b9d" containerName="ovn-config" Jan 30 13:24:06 crc kubenswrapper[5039]: E0130 13:24:06.612115 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55556e4d-2818-46de-b888-7a5be04f2a5c" containerName="mariadb-account-create-update" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.612121 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="55556e4d-2818-46de-b888-7a5be04f2a5c" containerName="mariadb-account-create-update" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.612303 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="34b4ac27-da03-43e8-874d-7feb1000f162" containerName="mariadb-account-create-update" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.612312 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="55556e4d-2818-46de-b888-7a5be04f2a5c" containerName="mariadb-account-create-update" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.612337 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="20bee34b-7616-41d8-8761-12c09c8523e3" containerName="mariadb-account-create-update" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.612347 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cb443d1-8938-47af-ab3b-1912d9e72f4f" containerName="glance-db-sync" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.612358 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="f73f9b07-439c-418f-a04a-bc0aae17e21a" containerName="mariadb-database-create" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.612368 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a51040a-32e7-43d3-8fd2-8ce22ac5dde6" containerName="mariadb-database-create" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.612376 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4367f73-b9d4-4351-b1a2-94506c105b9d" containerName="ovn-config" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.612385 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="45c105ac-a6f3-40f4-8543-3d8fe84f6132" containerName="mariadb-database-create" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.613371 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-ppdb4" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.636926 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-ppdb4"] Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.813639 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d494262-b4a1-4e79-9443-57d9d91b3171-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-ppdb4\" (UID: \"7d494262-b4a1-4e79-9443-57d9d91b3171\") " pod="openstack/dnsmasq-dns-5f59b8f679-ppdb4" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.813690 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7d494262-b4a1-4e79-9443-57d9d91b3171-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-ppdb4\" (UID: \"7d494262-b4a1-4e79-9443-57d9d91b3171\") " pod="openstack/dnsmasq-dns-5f59b8f679-ppdb4" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.813750 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n95m5\" (UniqueName: \"kubernetes.io/projected/7d494262-b4a1-4e79-9443-57d9d91b3171-kube-api-access-n95m5\") pod \"dnsmasq-dns-5f59b8f679-ppdb4\" (UID: \"7d494262-b4a1-4e79-9443-57d9d91b3171\") " pod="openstack/dnsmasq-dns-5f59b8f679-ppdb4" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.813781 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d494262-b4a1-4e79-9443-57d9d91b3171-config\") pod \"dnsmasq-dns-5f59b8f679-ppdb4\" (UID: \"7d494262-b4a1-4e79-9443-57d9d91b3171\") " pod="openstack/dnsmasq-dns-5f59b8f679-ppdb4" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.813988 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7d494262-b4a1-4e79-9443-57d9d91b3171-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-ppdb4\" (UID: \"7d494262-b4a1-4e79-9443-57d9d91b3171\") " pod="openstack/dnsmasq-dns-5f59b8f679-ppdb4" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.814031 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7d494262-b4a1-4e79-9443-57d9d91b3171-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-ppdb4\" (UID: \"7d494262-b4a1-4e79-9443-57d9d91b3171\") " pod="openstack/dnsmasq-dns-5f59b8f679-ppdb4" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.916676 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n95m5\" (UniqueName: \"kubernetes.io/projected/7d494262-b4a1-4e79-9443-57d9d91b3171-kube-api-access-n95m5\") pod \"dnsmasq-dns-5f59b8f679-ppdb4\" (UID: \"7d494262-b4a1-4e79-9443-57d9d91b3171\") " pod="openstack/dnsmasq-dns-5f59b8f679-ppdb4" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.916765 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d494262-b4a1-4e79-9443-57d9d91b3171-config\") pod \"dnsmasq-dns-5f59b8f679-ppdb4\" (UID: \"7d494262-b4a1-4e79-9443-57d9d91b3171\") " pod="openstack/dnsmasq-dns-5f59b8f679-ppdb4" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.916836 5039 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7d494262-b4a1-4e79-9443-57d9d91b3171-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-ppdb4\" (UID: \"7d494262-b4a1-4e79-9443-57d9d91b3171\") " pod="openstack/dnsmasq-dns-5f59b8f679-ppdb4" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.916862 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7d494262-b4a1-4e79-9443-57d9d91b3171-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-ppdb4\" (UID: \"7d494262-b4a1-4e79-9443-57d9d91b3171\") " pod="openstack/dnsmasq-dns-5f59b8f679-ppdb4" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.916894 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d494262-b4a1-4e79-9443-57d9d91b3171-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-ppdb4\" (UID: \"7d494262-b4a1-4e79-9443-57d9d91b3171\") " pod="openstack/dnsmasq-dns-5f59b8f679-ppdb4" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.916928 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7d494262-b4a1-4e79-9443-57d9d91b3171-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-ppdb4\" (UID: \"7d494262-b4a1-4e79-9443-57d9d91b3171\") " pod="openstack/dnsmasq-dns-5f59b8f679-ppdb4" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.918189 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7d494262-b4a1-4e79-9443-57d9d91b3171-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-ppdb4\" (UID: \"7d494262-b4a1-4e79-9443-57d9d91b3171\") " pod="openstack/dnsmasq-dns-5f59b8f679-ppdb4" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.918189 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d494262-b4a1-4e79-9443-57d9d91b3171-config\") pod \"dnsmasq-dns-5f59b8f679-ppdb4\" (UID: \"7d494262-b4a1-4e79-9443-57d9d91b3171\") " pod="openstack/dnsmasq-dns-5f59b8f679-ppdb4" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.918936 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d494262-b4a1-4e79-9443-57d9d91b3171-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-ppdb4\" (UID: \"7d494262-b4a1-4e79-9443-57d9d91b3171\") " pod="openstack/dnsmasq-dns-5f59b8f679-ppdb4" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.919396 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7d494262-b4a1-4e79-9443-57d9d91b3171-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-ppdb4\" (UID: \"7d494262-b4a1-4e79-9443-57d9d91b3171\") " pod="openstack/dnsmasq-dns-5f59b8f679-ppdb4" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.919542 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7d494262-b4a1-4e79-9443-57d9d91b3171-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-ppdb4\" (UID: \"7d494262-b4a1-4e79-9443-57d9d91b3171\") " pod="openstack/dnsmasq-dns-5f59b8f679-ppdb4" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.937170 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n95m5\" (UniqueName: 
\"kubernetes.io/projected/7d494262-b4a1-4e79-9443-57d9d91b3171-kube-api-access-n95m5\") pod \"dnsmasq-dns-5f59b8f679-ppdb4\" (UID: \"7d494262-b4a1-4e79-9443-57d9d91b3171\") " pod="openstack/dnsmasq-dns-5f59b8f679-ppdb4" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.990551 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-4xt4v" Jan 30 13:24:07 crc kubenswrapper[5039]: I0130 13:24:07.017681 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/26283c79-2aa3-464b-b265-4650000a980b-ovsdbserver-sb\") pod \"26283c79-2aa3-464b-b265-4650000a980b\" (UID: \"26283c79-2aa3-464b-b265-4650000a980b\") " Jan 30 13:24:07 crc kubenswrapper[5039]: I0130 13:24:07.017745 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/26283c79-2aa3-464b-b265-4650000a980b-ovsdbserver-nb\") pod \"26283c79-2aa3-464b-b265-4650000a980b\" (UID: \"26283c79-2aa3-464b-b265-4650000a980b\") " Jan 30 13:24:07 crc kubenswrapper[5039]: I0130 13:24:07.017865 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26283c79-2aa3-464b-b265-4650000a980b-config\") pod \"26283c79-2aa3-464b-b265-4650000a980b\" (UID: \"26283c79-2aa3-464b-b265-4650000a980b\") " Jan 30 13:24:07 crc kubenswrapper[5039]: I0130 13:24:07.017893 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/26283c79-2aa3-464b-b265-4650000a980b-dns-swift-storage-0\") pod \"26283c79-2aa3-464b-b265-4650000a980b\" (UID: \"26283c79-2aa3-464b-b265-4650000a980b\") " Jan 30 13:24:07 crc kubenswrapper[5039]: I0130 13:24:07.017930 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrwpz\" (UniqueName: \"kubernetes.io/projected/26283c79-2aa3-464b-b265-4650000a980b-kube-api-access-mrwpz\") pod \"26283c79-2aa3-464b-b265-4650000a980b\" (UID: \"26283c79-2aa3-464b-b265-4650000a980b\") " Jan 30 13:24:07 crc kubenswrapper[5039]: I0130 13:24:07.017950 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26283c79-2aa3-464b-b265-4650000a980b-dns-svc\") pod \"26283c79-2aa3-464b-b265-4650000a980b\" (UID: \"26283c79-2aa3-464b-b265-4650000a980b\") " Jan 30 13:24:07 crc kubenswrapper[5039]: I0130 13:24:07.043772 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-ppdb4" Jan 30 13:24:07 crc kubenswrapper[5039]: I0130 13:24:07.047233 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26283c79-2aa3-464b-b265-4650000a980b-kube-api-access-mrwpz" (OuterVolumeSpecName: "kube-api-access-mrwpz") pod "26283c79-2aa3-464b-b265-4650000a980b" (UID: "26283c79-2aa3-464b-b265-4650000a980b"). InnerVolumeSpecName "kube-api-access-mrwpz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:24:07 crc kubenswrapper[5039]: I0130 13:24:07.077182 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26283c79-2aa3-464b-b265-4650000a980b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "26283c79-2aa3-464b-b265-4650000a980b" (UID: "26283c79-2aa3-464b-b265-4650000a980b"). 
InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:24:07 crc kubenswrapper[5039]: I0130 13:24:07.079384 5039 generic.go:334] "Generic (PLEG): container finished" podID="26283c79-2aa3-464b-b265-4650000a980b" containerID="f9fae8645afdaf19bf2c77e5e17d0bdc7ec95217ce16ec61333dbd968d341744" exitCode=0 Jan 30 13:24:07 crc kubenswrapper[5039]: I0130 13:24:07.079437 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-4xt4v" event={"ID":"26283c79-2aa3-464b-b265-4650000a980b","Type":"ContainerDied","Data":"f9fae8645afdaf19bf2c77e5e17d0bdc7ec95217ce16ec61333dbd968d341744"} Jan 30 13:24:07 crc kubenswrapper[5039]: I0130 13:24:07.079465 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-4xt4v" event={"ID":"26283c79-2aa3-464b-b265-4650000a980b","Type":"ContainerDied","Data":"b13a5bcb0d67ea65ba2705bd2b1b297c28299fdf3b239f7adcfa0fb14714f699"} Jan 30 13:24:07 crc kubenswrapper[5039]: I0130 13:24:07.079485 5039 scope.go:117] "RemoveContainer" containerID="f9fae8645afdaf19bf2c77e5e17d0bdc7ec95217ce16ec61333dbd968d341744" Jan 30 13:24:07 crc kubenswrapper[5039]: I0130 13:24:07.079677 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-4xt4v" Jan 30 13:24:07 crc kubenswrapper[5039]: I0130 13:24:07.085239 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26283c79-2aa3-464b-b265-4650000a980b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "26283c79-2aa3-464b-b265-4650000a980b" (UID: "26283c79-2aa3-464b-b265-4650000a980b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:24:07 crc kubenswrapper[5039]: I0130 13:24:07.087294 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26283c79-2aa3-464b-b265-4650000a980b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "26283c79-2aa3-464b-b265-4650000a980b" (UID: "26283c79-2aa3-464b-b265-4650000a980b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:24:07 crc kubenswrapper[5039]: I0130 13:24:07.097091 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26283c79-2aa3-464b-b265-4650000a980b-config" (OuterVolumeSpecName: "config") pod "26283c79-2aa3-464b-b265-4650000a980b" (UID: "26283c79-2aa3-464b-b265-4650000a980b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:24:07 crc kubenswrapper[5039]: I0130 13:24:07.103932 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26283c79-2aa3-464b-b265-4650000a980b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "26283c79-2aa3-464b-b265-4650000a980b" (UID: "26283c79-2aa3-464b-b265-4650000a980b"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:24:07 crc kubenswrapper[5039]: I0130 13:24:07.119895 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/26283c79-2aa3-464b-b265-4650000a980b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:07 crc kubenswrapper[5039]: I0130 13:24:07.119923 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/26283c79-2aa3-464b-b265-4650000a980b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:07 crc kubenswrapper[5039]: I0130 13:24:07.119938 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26283c79-2aa3-464b-b265-4650000a980b-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:07 crc kubenswrapper[5039]: I0130 13:24:07.119951 5039 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/26283c79-2aa3-464b-b265-4650000a980b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:07 crc kubenswrapper[5039]: I0130 13:24:07.119964 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrwpz\" (UniqueName: \"kubernetes.io/projected/26283c79-2aa3-464b-b265-4650000a980b-kube-api-access-mrwpz\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:07 crc kubenswrapper[5039]: I0130 13:24:07.119977 5039 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26283c79-2aa3-464b-b265-4650000a980b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:07 crc kubenswrapper[5039]: I0130 13:24:07.170553 5039 scope.go:117] "RemoveContainer" containerID="2694278cf2f8b68309162de76c7213ac6e0d886bf52df1adfb52a6740ff864a6" Jan 30 13:24:07 crc kubenswrapper[5039]: I0130 13:24:07.216233 5039 scope.go:117] "RemoveContainer" containerID="f9fae8645afdaf19bf2c77e5e17d0bdc7ec95217ce16ec61333dbd968d341744" Jan 30 13:24:07 crc kubenswrapper[5039]: E0130 13:24:07.216523 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9fae8645afdaf19bf2c77e5e17d0bdc7ec95217ce16ec61333dbd968d341744\": container with ID starting with f9fae8645afdaf19bf2c77e5e17d0bdc7ec95217ce16ec61333dbd968d341744 not found: ID does not exist" containerID="f9fae8645afdaf19bf2c77e5e17d0bdc7ec95217ce16ec61333dbd968d341744" Jan 30 13:24:07 crc kubenswrapper[5039]: I0130 13:24:07.216554 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9fae8645afdaf19bf2c77e5e17d0bdc7ec95217ce16ec61333dbd968d341744"} err="failed to get container status \"f9fae8645afdaf19bf2c77e5e17d0bdc7ec95217ce16ec61333dbd968d341744\": rpc error: code = NotFound desc = could not find container \"f9fae8645afdaf19bf2c77e5e17d0bdc7ec95217ce16ec61333dbd968d341744\": container with ID starting with f9fae8645afdaf19bf2c77e5e17d0bdc7ec95217ce16ec61333dbd968d341744 not found: ID does not exist" Jan 30 13:24:07 crc kubenswrapper[5039]: I0130 13:24:07.216578 5039 scope.go:117] "RemoveContainer" containerID="2694278cf2f8b68309162de76c7213ac6e0d886bf52df1adfb52a6740ff864a6" Jan 30 13:24:07 crc kubenswrapper[5039]: E0130 13:24:07.217393 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2694278cf2f8b68309162de76c7213ac6e0d886bf52df1adfb52a6740ff864a6\": container with ID starting with 
Jan 30 13:24:07 crc kubenswrapper[5039]: E0130 13:24:07.217393 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2694278cf2f8b68309162de76c7213ac6e0d886bf52df1adfb52a6740ff864a6\": container with ID starting with 2694278cf2f8b68309162de76c7213ac6e0d886bf52df1adfb52a6740ff864a6 not found: ID does not exist" containerID="2694278cf2f8b68309162de76c7213ac6e0d886bf52df1adfb52a6740ff864a6"
Jan 30 13:24:07 crc kubenswrapper[5039]: I0130 13:24:07.217411 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2694278cf2f8b68309162de76c7213ac6e0d886bf52df1adfb52a6740ff864a6"} err="failed to get container status \"2694278cf2f8b68309162de76c7213ac6e0d886bf52df1adfb52a6740ff864a6\": rpc error: code = NotFound desc = could not find container \"2694278cf2f8b68309162de76c7213ac6e0d886bf52df1adfb52a6740ff864a6\": container with ID starting with 2694278cf2f8b68309162de76c7213ac6e0d886bf52df1adfb52a6740ff864a6 not found: ID does not exist"
Jan 30 13:24:07 crc kubenswrapper[5039]: I0130 13:24:07.423116 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-4xt4v"]
Jan 30 13:24:07 crc kubenswrapper[5039]: I0130 13:24:07.432398 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-4xt4v"]
Jan 30 13:24:07 crc kubenswrapper[5039]: I0130 13:24:07.496218 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-ppdb4"]
Jan 30 13:24:07 crc kubenswrapper[5039]: W0130 13:24:07.499230 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d494262_b4a1_4e79_9443_57d9d91b3171.slice/crio-cf9a8b9818dc972680ad1d508bb1cacb7a7c1b4cfaed0238debb1fc3538e7af2 WatchSource:0}: Error finding container cf9a8b9818dc972680ad1d508bb1cacb7a7c1b4cfaed0238debb1fc3538e7af2: Status 404 returned error can't find the container with id cf9a8b9818dc972680ad1d508bb1cacb7a7c1b4cfaed0238debb1fc3538e7af2
Jan 30 13:24:07 crc kubenswrapper[5039]: I0130 13:24:07.742113 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 13:24:07 crc kubenswrapper[5039]: I0130 13:24:07.742182 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 13:24:08 crc kubenswrapper[5039]: I0130 13:24:08.088209 5039 generic.go:334] "Generic (PLEG): container finished" podID="d14a598e-e058-4b9d-8d57-6f0db418de2c" containerID="eec6e364645d2009b2be114e5e6bd46239ea6c0c9d3d3bfbaeba8ccb6b98b5f1" exitCode=0
Jan 30 13:24:08 crc kubenswrapper[5039]: I0130 13:24:08.088287 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-rdj8j" event={"ID":"d14a598e-e058-4b9d-8d57-6f0db418de2c","Type":"ContainerDied","Data":"eec6e364645d2009b2be114e5e6bd46239ea6c0c9d3d3bfbaeba8ccb6b98b5f1"}
Jan 30 13:24:08 crc kubenswrapper[5039]: I0130 13:24:08.090851 5039 generic.go:334] "Generic (PLEG): container finished" podID="7d494262-b4a1-4e79-9443-57d9d91b3171" containerID="1f39d2928cf6848744fa9d58653419333d23328b92ddc2d665c53a32b4109d5c" exitCode=0
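The two probe records above show the kubelet's HTTP liveness check against http://127.0.0.1:8798/health failing with connection refused. A self-contained Go sketch of the same kind of check, assuming the endpoint and port from the log; the one-second timeout is an assumption, and a refused connection surfaces as an error from the HTTP client, which a prober counts as a probe failure:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probe performs a single HTTP liveness check, mirroring the failed
// GET http://127.0.0.1:8798/health seen in the log above.
func probe(url string) error {
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "dial tcp 127.0.0.1:8798: connect: connection refused"
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("unhealthy status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := probe("http://127.0.0.1:8798/health"); err != nil {
		fmt.Println("Probe failed:", err)
	}
}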
event={"ID":"7d494262-b4a1-4e79-9443-57d9d91b3171","Type":"ContainerDied","Data":"1f39d2928cf6848744fa9d58653419333d23328b92ddc2d665c53a32b4109d5c"} Jan 30 13:24:08 crc kubenswrapper[5039]: I0130 13:24:08.090903 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-ppdb4" event={"ID":"7d494262-b4a1-4e79-9443-57d9d91b3171","Type":"ContainerStarted","Data":"cf9a8b9818dc972680ad1d508bb1cacb7a7c1b4cfaed0238debb1fc3538e7af2"} Jan 30 13:24:08 crc kubenswrapper[5039]: I0130 13:24:08.121911 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26283c79-2aa3-464b-b265-4650000a980b" path="/var/lib/kubelet/pods/26283c79-2aa3-464b-b265-4650000a980b/volumes" Jan 30 13:24:09 crc kubenswrapper[5039]: I0130 13:24:09.101781 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-ppdb4" event={"ID":"7d494262-b4a1-4e79-9443-57d9d91b3171","Type":"ContainerStarted","Data":"19062e589ede21c06cba0dc8a03e90407a0a01bcbe501e067c56b7c859292716"} Jan 30 13:24:09 crc kubenswrapper[5039]: I0130 13:24:09.101851 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f59b8f679-ppdb4" Jan 30 13:24:09 crc kubenswrapper[5039]: I0130 13:24:09.124529 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f59b8f679-ppdb4" podStartSLOduration=3.124505902 podStartE2EDuration="3.124505902s" podCreationTimestamp="2026-01-30 13:24:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:24:09.121148752 +0000 UTC m=+1213.781830019" watchObservedRunningTime="2026-01-30 13:24:09.124505902 +0000 UTC m=+1213.785187149" Jan 30 13:24:09 crc kubenswrapper[5039]: I0130 13:24:09.412898 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-rdj8j" Jan 30 13:24:09 crc kubenswrapper[5039]: I0130 13:24:09.560974 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d14a598e-e058-4b9d-8d57-6f0db418de2c-config-data\") pod \"d14a598e-e058-4b9d-8d57-6f0db418de2c\" (UID: \"d14a598e-e058-4b9d-8d57-6f0db418de2c\") " Jan 30 13:24:09 crc kubenswrapper[5039]: I0130 13:24:09.561119 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d14a598e-e058-4b9d-8d57-6f0db418de2c-combined-ca-bundle\") pod \"d14a598e-e058-4b9d-8d57-6f0db418de2c\" (UID: \"d14a598e-e058-4b9d-8d57-6f0db418de2c\") " Jan 30 13:24:09 crc kubenswrapper[5039]: I0130 13:24:09.561184 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfqj9\" (UniqueName: \"kubernetes.io/projected/d14a598e-e058-4b9d-8d57-6f0db418de2c-kube-api-access-kfqj9\") pod \"d14a598e-e058-4b9d-8d57-6f0db418de2c\" (UID: \"d14a598e-e058-4b9d-8d57-6f0db418de2c\") " Jan 30 13:24:09 crc kubenswrapper[5039]: I0130 13:24:09.578954 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d14a598e-e058-4b9d-8d57-6f0db418de2c-kube-api-access-kfqj9" (OuterVolumeSpecName: "kube-api-access-kfqj9") pod "d14a598e-e058-4b9d-8d57-6f0db418de2c" (UID: "d14a598e-e058-4b9d-8d57-6f0db418de2c"). InnerVolumeSpecName "kube-api-access-kfqj9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:24:09 crc kubenswrapper[5039]: I0130 13:24:09.591445 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d14a598e-e058-4b9d-8d57-6f0db418de2c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d14a598e-e058-4b9d-8d57-6f0db418de2c" (UID: "d14a598e-e058-4b9d-8d57-6f0db418de2c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:24:09 crc kubenswrapper[5039]: I0130 13:24:09.608961 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d14a598e-e058-4b9d-8d57-6f0db418de2c-config-data" (OuterVolumeSpecName: "config-data") pod "d14a598e-e058-4b9d-8d57-6f0db418de2c" (UID: "d14a598e-e058-4b9d-8d57-6f0db418de2c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:24:09 crc kubenswrapper[5039]: I0130 13:24:09.663595 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d14a598e-e058-4b9d-8d57-6f0db418de2c-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:09 crc kubenswrapper[5039]: I0130 13:24:09.663630 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d14a598e-e058-4b9d-8d57-6f0db418de2c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:09 crc kubenswrapper[5039]: I0130 13:24:09.663640 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfqj9\" (UniqueName: \"kubernetes.io/projected/d14a598e-e058-4b9d-8d57-6f0db418de2c-kube-api-access-kfqj9\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.111553 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-rdj8j" event={"ID":"d14a598e-e058-4b9d-8d57-6f0db418de2c","Type":"ContainerDied","Data":"7bc00ec74b2da9d8989c764ea627356c97f0f1ae07990bce5f0fc88f4dd44e4a"} Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.111603 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7bc00ec74b2da9d8989c764ea627356c97f0f1ae07990bce5f0fc88f4dd44e4a" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.111609 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-rdj8j" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.327245 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-ppdb4"] Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.365936 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-lcmds"] Jan 30 13:24:10 crc kubenswrapper[5039]: E0130 13:24:10.366381 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26283c79-2aa3-464b-b265-4650000a980b" containerName="init" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.366406 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="26283c79-2aa3-464b-b265-4650000a980b" containerName="init" Jan 30 13:24:10 crc kubenswrapper[5039]: E0130 13:24:10.366424 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26283c79-2aa3-464b-b265-4650000a980b" containerName="dnsmasq-dns" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.366433 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="26283c79-2aa3-464b-b265-4650000a980b" containerName="dnsmasq-dns" Jan 30 13:24:10 crc kubenswrapper[5039]: E0130 13:24:10.366457 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d14a598e-e058-4b9d-8d57-6f0db418de2c" containerName="keystone-db-sync" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.366466 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="d14a598e-e058-4b9d-8d57-6f0db418de2c" containerName="keystone-db-sync" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.366660 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="26283c79-2aa3-464b-b265-4650000a980b" containerName="dnsmasq-dns" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.366693 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="d14a598e-e058-4b9d-8d57-6f0db418de2c" containerName="keystone-db-sync" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.367739 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-lcmds" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.375397 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4cb0a44d-379c-45ab-83bd-5a33b472d52c-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-lcmds\" (UID: \"4cb0a44d-379c-45ab-83bd-5a33b472d52c\") " pod="openstack/dnsmasq-dns-bbf5cc879-lcmds" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.375474 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4cb0a44d-379c-45ab-83bd-5a33b472d52c-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-lcmds\" (UID: \"4cb0a44d-379c-45ab-83bd-5a33b472d52c\") " pod="openstack/dnsmasq-dns-bbf5cc879-lcmds" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.375546 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4cb0a44d-379c-45ab-83bd-5a33b472d52c-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-lcmds\" (UID: \"4cb0a44d-379c-45ab-83bd-5a33b472d52c\") " pod="openstack/dnsmasq-dns-bbf5cc879-lcmds" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.375581 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4cb0a44d-379c-45ab-83bd-5a33b472d52c-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-lcmds\" (UID: \"4cb0a44d-379c-45ab-83bd-5a33b472d52c\") " pod="openstack/dnsmasq-dns-bbf5cc879-lcmds" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.375615 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvqcx\" (UniqueName: \"kubernetes.io/projected/4cb0a44d-379c-45ab-83bd-5a33b472d52c-kube-api-access-cvqcx\") pod \"dnsmasq-dns-bbf5cc879-lcmds\" (UID: \"4cb0a44d-379c-45ab-83bd-5a33b472d52c\") " pod="openstack/dnsmasq-dns-bbf5cc879-lcmds" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.375784 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cb0a44d-379c-45ab-83bd-5a33b472d52c-config\") pod \"dnsmasq-dns-bbf5cc879-lcmds\" (UID: \"4cb0a44d-379c-45ab-83bd-5a33b472d52c\") " pod="openstack/dnsmasq-dns-bbf5cc879-lcmds" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.375873 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-x8hs4"] Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.377573 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-x8hs4" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.381301 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-fgjcf" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.381359 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.381520 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.381301 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.381689 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.391868 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-lcmds"] Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.447087 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-x8hs4"] Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.482102 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvqcx\" (UniqueName: \"kubernetes.io/projected/4cb0a44d-379c-45ab-83bd-5a33b472d52c-kube-api-access-cvqcx\") pod \"dnsmasq-dns-bbf5cc879-lcmds\" (UID: \"4cb0a44d-379c-45ab-83bd-5a33b472d52c\") " pod="openstack/dnsmasq-dns-bbf5cc879-lcmds" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.482186 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqt5t\" (UniqueName: \"kubernetes.io/projected/f1d39ae4-14ac-434e-b720-6efdaee26538-kube-api-access-tqt5t\") pod \"keystone-bootstrap-x8hs4\" (UID: \"f1d39ae4-14ac-434e-b720-6efdaee26538\") " pod="openstack/keystone-bootstrap-x8hs4" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.482221 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f1d39ae4-14ac-434e-b720-6efdaee26538-fernet-keys\") pod \"keystone-bootstrap-x8hs4\" (UID: \"f1d39ae4-14ac-434e-b720-6efdaee26538\") " pod="openstack/keystone-bootstrap-x8hs4" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.482247 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cb0a44d-379c-45ab-83bd-5a33b472d52c-config\") pod \"dnsmasq-dns-bbf5cc879-lcmds\" (UID: \"4cb0a44d-379c-45ab-83bd-5a33b472d52c\") " pod="openstack/dnsmasq-dns-bbf5cc879-lcmds" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.482283 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1d39ae4-14ac-434e-b720-6efdaee26538-combined-ca-bundle\") pod \"keystone-bootstrap-x8hs4\" (UID: \"f1d39ae4-14ac-434e-b720-6efdaee26538\") " pod="openstack/keystone-bootstrap-x8hs4" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.482315 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1d39ae4-14ac-434e-b720-6efdaee26538-scripts\") pod \"keystone-bootstrap-x8hs4\" (UID: \"f1d39ae4-14ac-434e-b720-6efdaee26538\") " 
pod="openstack/keystone-bootstrap-x8hs4" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.482334 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4cb0a44d-379c-45ab-83bd-5a33b472d52c-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-lcmds\" (UID: \"4cb0a44d-379c-45ab-83bd-5a33b472d52c\") " pod="openstack/dnsmasq-dns-bbf5cc879-lcmds" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.482382 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f1d39ae4-14ac-434e-b720-6efdaee26538-credential-keys\") pod \"keystone-bootstrap-x8hs4\" (UID: \"f1d39ae4-14ac-434e-b720-6efdaee26538\") " pod="openstack/keystone-bootstrap-x8hs4" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.482403 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4cb0a44d-379c-45ab-83bd-5a33b472d52c-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-lcmds\" (UID: \"4cb0a44d-379c-45ab-83bd-5a33b472d52c\") " pod="openstack/dnsmasq-dns-bbf5cc879-lcmds" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.482423 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4cb0a44d-379c-45ab-83bd-5a33b472d52c-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-lcmds\" (UID: \"4cb0a44d-379c-45ab-83bd-5a33b472d52c\") " pod="openstack/dnsmasq-dns-bbf5cc879-lcmds" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.482451 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1d39ae4-14ac-434e-b720-6efdaee26538-config-data\") pod \"keystone-bootstrap-x8hs4\" (UID: \"f1d39ae4-14ac-434e-b720-6efdaee26538\") " pod="openstack/keystone-bootstrap-x8hs4" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.482472 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4cb0a44d-379c-45ab-83bd-5a33b472d52c-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-lcmds\" (UID: \"4cb0a44d-379c-45ab-83bd-5a33b472d52c\") " pod="openstack/dnsmasq-dns-bbf5cc879-lcmds" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.485961 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4cb0a44d-379c-45ab-83bd-5a33b472d52c-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-lcmds\" (UID: \"4cb0a44d-379c-45ab-83bd-5a33b472d52c\") " pod="openstack/dnsmasq-dns-bbf5cc879-lcmds" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.486687 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4cb0a44d-379c-45ab-83bd-5a33b472d52c-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-lcmds\" (UID: \"4cb0a44d-379c-45ab-83bd-5a33b472d52c\") " pod="openstack/dnsmasq-dns-bbf5cc879-lcmds" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.487476 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4cb0a44d-379c-45ab-83bd-5a33b472d52c-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-lcmds\" (UID: \"4cb0a44d-379c-45ab-83bd-5a33b472d52c\") " pod="openstack/dnsmasq-dns-bbf5cc879-lcmds" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 
13:24:10.491289 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4cb0a44d-379c-45ab-83bd-5a33b472d52c-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-lcmds\" (UID: \"4cb0a44d-379c-45ab-83bd-5a33b472d52c\") " pod="openstack/dnsmasq-dns-bbf5cc879-lcmds" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.507477 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cb0a44d-379c-45ab-83bd-5a33b472d52c-config\") pod \"dnsmasq-dns-bbf5cc879-lcmds\" (UID: \"4cb0a44d-379c-45ab-83bd-5a33b472d52c\") " pod="openstack/dnsmasq-dns-bbf5cc879-lcmds" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.522033 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvqcx\" (UniqueName: \"kubernetes.io/projected/4cb0a44d-379c-45ab-83bd-5a33b472d52c-kube-api-access-cvqcx\") pod \"dnsmasq-dns-bbf5cc879-lcmds\" (UID: \"4cb0a44d-379c-45ab-83bd-5a33b472d52c\") " pod="openstack/dnsmasq-dns-bbf5cc879-lcmds" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.562606 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-q8gx7"] Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.564058 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-q8gx7" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.565822 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.572940 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.573194 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-slqjz" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.580713 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-q8gx7"] Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.583480 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bba3dea-64f4-479f-b7f1-99c718d7b8af-combined-ca-bundle\") pod \"cinder-db-sync-q8gx7\" (UID: \"5bba3dea-64f4-479f-b7f1-99c718d7b8af\") " pod="openstack/cinder-db-sync-q8gx7" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.583552 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqt5t\" (UniqueName: \"kubernetes.io/projected/f1d39ae4-14ac-434e-b720-6efdaee26538-kube-api-access-tqt5t\") pod \"keystone-bootstrap-x8hs4\" (UID: \"f1d39ae4-14ac-434e-b720-6efdaee26538\") " pod="openstack/keystone-bootstrap-x8hs4" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.583808 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f1d39ae4-14ac-434e-b720-6efdaee26538-fernet-keys\") pod \"keystone-bootstrap-x8hs4\" (UID: \"f1d39ae4-14ac-434e-b720-6efdaee26538\") " pod="openstack/keystone-bootstrap-x8hs4" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.583837 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqtmh\" (UniqueName: \"kubernetes.io/projected/5bba3dea-64f4-479f-b7f1-99c718d7b8af-kube-api-access-zqtmh\") pod \"cinder-db-sync-q8gx7\" 
(UID: \"5bba3dea-64f4-479f-b7f1-99c718d7b8af\") " pod="openstack/cinder-db-sync-q8gx7" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.584580 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1d39ae4-14ac-434e-b720-6efdaee26538-combined-ca-bundle\") pod \"keystone-bootstrap-x8hs4\" (UID: \"f1d39ae4-14ac-434e-b720-6efdaee26538\") " pod="openstack/keystone-bootstrap-x8hs4" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.584620 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bba3dea-64f4-479f-b7f1-99c718d7b8af-config-data\") pod \"cinder-db-sync-q8gx7\" (UID: \"5bba3dea-64f4-479f-b7f1-99c718d7b8af\") " pod="openstack/cinder-db-sync-q8gx7" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.584646 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1d39ae4-14ac-434e-b720-6efdaee26538-scripts\") pod \"keystone-bootstrap-x8hs4\" (UID: \"f1d39ae4-14ac-434e-b720-6efdaee26538\") " pod="openstack/keystone-bootstrap-x8hs4" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.584699 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f1d39ae4-14ac-434e-b720-6efdaee26538-credential-keys\") pod \"keystone-bootstrap-x8hs4\" (UID: \"f1d39ae4-14ac-434e-b720-6efdaee26538\") " pod="openstack/keystone-bootstrap-x8hs4" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.584720 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5bba3dea-64f4-479f-b7f1-99c718d7b8af-etc-machine-id\") pod \"cinder-db-sync-q8gx7\" (UID: \"5bba3dea-64f4-479f-b7f1-99c718d7b8af\") " pod="openstack/cinder-db-sync-q8gx7" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.584741 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1d39ae4-14ac-434e-b720-6efdaee26538-config-data\") pod \"keystone-bootstrap-x8hs4\" (UID: \"f1d39ae4-14ac-434e-b720-6efdaee26538\") " pod="openstack/keystone-bootstrap-x8hs4" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.584765 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5bba3dea-64f4-479f-b7f1-99c718d7b8af-db-sync-config-data\") pod \"cinder-db-sync-q8gx7\" (UID: \"5bba3dea-64f4-479f-b7f1-99c718d7b8af\") " pod="openstack/cinder-db-sync-q8gx7" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.584842 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5bba3dea-64f4-479f-b7f1-99c718d7b8af-scripts\") pod \"cinder-db-sync-q8gx7\" (UID: \"5bba3dea-64f4-479f-b7f1-99c718d7b8af\") " pod="openstack/cinder-db-sync-q8gx7" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.595218 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f1d39ae4-14ac-434e-b720-6efdaee26538-credential-keys\") pod \"keystone-bootstrap-x8hs4\" (UID: \"f1d39ae4-14ac-434e-b720-6efdaee26538\") " pod="openstack/keystone-bootstrap-x8hs4" Jan 30 13:24:10 crc kubenswrapper[5039]: 
I0130 13:24:10.595496 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f1d39ae4-14ac-434e-b720-6efdaee26538-fernet-keys\") pod \"keystone-bootstrap-x8hs4\" (UID: \"f1d39ae4-14ac-434e-b720-6efdaee26538\") " pod="openstack/keystone-bootstrap-x8hs4" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.595747 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1d39ae4-14ac-434e-b720-6efdaee26538-combined-ca-bundle\") pod \"keystone-bootstrap-x8hs4\" (UID: \"f1d39ae4-14ac-434e-b720-6efdaee26538\") " pod="openstack/keystone-bootstrap-x8hs4" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.597113 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1d39ae4-14ac-434e-b720-6efdaee26538-scripts\") pod \"keystone-bootstrap-x8hs4\" (UID: \"f1d39ae4-14ac-434e-b720-6efdaee26538\") " pod="openstack/keystone-bootstrap-x8hs4" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.600212 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1d39ae4-14ac-434e-b720-6efdaee26538-config-data\") pod \"keystone-bootstrap-x8hs4\" (UID: \"f1d39ae4-14ac-434e-b720-6efdaee26538\") " pod="openstack/keystone-bootstrap-x8hs4" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.641538 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqt5t\" (UniqueName: \"kubernetes.io/projected/f1d39ae4-14ac-434e-b720-6efdaee26538-kube-api-access-tqt5t\") pod \"keystone-bootstrap-x8hs4\" (UID: \"f1d39ae4-14ac-434e-b720-6efdaee26538\") " pod="openstack/keystone-bootstrap-x8hs4" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.644864 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.649063 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.652491 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.652659 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.686500 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-lcmds" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.692992 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53390b3b-ff7d-4f71-8599-b1deebe3facf-log-httpd\") pod \"ceilometer-0\" (UID: \"53390b3b-ff7d-4f71-8599-b1deebe3facf\") " pod="openstack/ceilometer-0" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.693049 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53390b3b-ff7d-4f71-8599-b1deebe3facf-run-httpd\") pod \"ceilometer-0\" (UID: \"53390b3b-ff7d-4f71-8599-b1deebe3facf\") " pod="openstack/ceilometer-0" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.693087 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53390b3b-ff7d-4f71-8599-b1deebe3facf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"53390b3b-ff7d-4f71-8599-b1deebe3facf\") " pod="openstack/ceilometer-0" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.693116 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5bba3dea-64f4-479f-b7f1-99c718d7b8af-etc-machine-id\") pod \"cinder-db-sync-q8gx7\" (UID: \"5bba3dea-64f4-479f-b7f1-99c718d7b8af\") " pod="openstack/cinder-db-sync-q8gx7" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.693141 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5bba3dea-64f4-479f-b7f1-99c718d7b8af-db-sync-config-data\") pod \"cinder-db-sync-q8gx7\" (UID: \"5bba3dea-64f4-479f-b7f1-99c718d7b8af\") " pod="openstack/cinder-db-sync-q8gx7" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.693161 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53390b3b-ff7d-4f71-8599-b1deebe3facf-scripts\") pod \"ceilometer-0\" (UID: \"53390b3b-ff7d-4f71-8599-b1deebe3facf\") " pod="openstack/ceilometer-0" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.693184 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5bba3dea-64f4-479f-b7f1-99c718d7b8af-scripts\") pod \"cinder-db-sync-q8gx7\" (UID: \"5bba3dea-64f4-479f-b7f1-99c718d7b8af\") " pod="openstack/cinder-db-sync-q8gx7" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.693220 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/53390b3b-ff7d-4f71-8599-b1deebe3facf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"53390b3b-ff7d-4f71-8599-b1deebe3facf\") " pod="openstack/ceilometer-0" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.693238 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bba3dea-64f4-479f-b7f1-99c718d7b8af-combined-ca-bundle\") pod \"cinder-db-sync-q8gx7\" (UID: \"5bba3dea-64f4-479f-b7f1-99c718d7b8af\") " pod="openstack/cinder-db-sync-q8gx7" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.693288 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-zqtmh\" (UniqueName: \"kubernetes.io/projected/5bba3dea-64f4-479f-b7f1-99c718d7b8af-kube-api-access-zqtmh\") pod \"cinder-db-sync-q8gx7\" (UID: \"5bba3dea-64f4-479f-b7f1-99c718d7b8af\") " pod="openstack/cinder-db-sync-q8gx7" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.693303 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53390b3b-ff7d-4f71-8599-b1deebe3facf-config-data\") pod \"ceilometer-0\" (UID: \"53390b3b-ff7d-4f71-8599-b1deebe3facf\") " pod="openstack/ceilometer-0" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.693320 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzwcc\" (UniqueName: \"kubernetes.io/projected/53390b3b-ff7d-4f71-8599-b1deebe3facf-kube-api-access-tzwcc\") pod \"ceilometer-0\" (UID: \"53390b3b-ff7d-4f71-8599-b1deebe3facf\") " pod="openstack/ceilometer-0" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.693345 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bba3dea-64f4-479f-b7f1-99c718d7b8af-config-data\") pod \"cinder-db-sync-q8gx7\" (UID: \"5bba3dea-64f4-479f-b7f1-99c718d7b8af\") " pod="openstack/cinder-db-sync-q8gx7" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.715935 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5bba3dea-64f4-479f-b7f1-99c718d7b8af-etc-machine-id\") pod \"cinder-db-sync-q8gx7\" (UID: \"5bba3dea-64f4-479f-b7f1-99c718d7b8af\") " pod="openstack/cinder-db-sync-q8gx7" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.719906 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bba3dea-64f4-479f-b7f1-99c718d7b8af-config-data\") pod \"cinder-db-sync-q8gx7\" (UID: \"5bba3dea-64f4-479f-b7f1-99c718d7b8af\") " pod="openstack/cinder-db-sync-q8gx7" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.719982 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.730405 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-x8hs4" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.735752 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bba3dea-64f4-479f-b7f1-99c718d7b8af-combined-ca-bundle\") pod \"cinder-db-sync-q8gx7\" (UID: \"5bba3dea-64f4-479f-b7f1-99c718d7b8af\") " pod="openstack/cinder-db-sync-q8gx7" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.738580 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5bba3dea-64f4-479f-b7f1-99c718d7b8af-db-sync-config-data\") pod \"cinder-db-sync-q8gx7\" (UID: \"5bba3dea-64f4-479f-b7f1-99c718d7b8af\") " pod="openstack/cinder-db-sync-q8gx7" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.749559 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5bba3dea-64f4-479f-b7f1-99c718d7b8af-scripts\") pod \"cinder-db-sync-q8gx7\" (UID: \"5bba3dea-64f4-479f-b7f1-99c718d7b8af\") " pod="openstack/cinder-db-sync-q8gx7" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.757134 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-9z97g"] Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.758286 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-9z97g" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.760501 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqtmh\" (UniqueName: \"kubernetes.io/projected/5bba3dea-64f4-479f-b7f1-99c718d7b8af-kube-api-access-zqtmh\") pod \"cinder-db-sync-q8gx7\" (UID: \"5bba3dea-64f4-479f-b7f1-99c718d7b8af\") " pod="openstack/cinder-db-sync-q8gx7" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.769922 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.771438 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-fjxzp" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.778592 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.794395 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/326188c4-7523-49b7-9790-063f3f18988d-config\") pod \"neutron-db-sync-9z97g\" (UID: \"326188c4-7523-49b7-9790-063f3f18988d\") " pod="openstack/neutron-db-sync-9z97g" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.794439 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53390b3b-ff7d-4f71-8599-b1deebe3facf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"53390b3b-ff7d-4f71-8599-b1deebe3facf\") " pod="openstack/ceilometer-0" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.794476 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53390b3b-ff7d-4f71-8599-b1deebe3facf-scripts\") pod \"ceilometer-0\" (UID: \"53390b3b-ff7d-4f71-8599-b1deebe3facf\") " pod="openstack/ceilometer-0" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.794512 5039 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/53390b3b-ff7d-4f71-8599-b1deebe3facf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"53390b3b-ff7d-4f71-8599-b1deebe3facf\") " pod="openstack/ceilometer-0" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.794527 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h45b\" (UniqueName: \"kubernetes.io/projected/326188c4-7523-49b7-9790-063f3f18988d-kube-api-access-8h45b\") pod \"neutron-db-sync-9z97g\" (UID: \"326188c4-7523-49b7-9790-063f3f18988d\") " pod="openstack/neutron-db-sync-9z97g" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.794954 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/326188c4-7523-49b7-9790-063f3f18988d-combined-ca-bundle\") pod \"neutron-db-sync-9z97g\" (UID: \"326188c4-7523-49b7-9790-063f3f18988d\") " pod="openstack/neutron-db-sync-9z97g" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.795022 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53390b3b-ff7d-4f71-8599-b1deebe3facf-config-data\") pod \"ceilometer-0\" (UID: \"53390b3b-ff7d-4f71-8599-b1deebe3facf\") " pod="openstack/ceilometer-0" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.795041 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzwcc\" (UniqueName: \"kubernetes.io/projected/53390b3b-ff7d-4f71-8599-b1deebe3facf-kube-api-access-tzwcc\") pod \"ceilometer-0\" (UID: \"53390b3b-ff7d-4f71-8599-b1deebe3facf\") " pod="openstack/ceilometer-0" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.795077 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53390b3b-ff7d-4f71-8599-b1deebe3facf-log-httpd\") pod \"ceilometer-0\" (UID: \"53390b3b-ff7d-4f71-8599-b1deebe3facf\") " pod="openstack/ceilometer-0" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.795101 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53390b3b-ff7d-4f71-8599-b1deebe3facf-run-httpd\") pod \"ceilometer-0\" (UID: \"53390b3b-ff7d-4f71-8599-b1deebe3facf\") " pod="openstack/ceilometer-0" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.795524 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53390b3b-ff7d-4f71-8599-b1deebe3facf-run-httpd\") pod \"ceilometer-0\" (UID: \"53390b3b-ff7d-4f71-8599-b1deebe3facf\") " pod="openstack/ceilometer-0" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.799943 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53390b3b-ff7d-4f71-8599-b1deebe3facf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"53390b3b-ff7d-4f71-8599-b1deebe3facf\") " pod="openstack/ceilometer-0" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.809427 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53390b3b-ff7d-4f71-8599-b1deebe3facf-log-httpd\") pod \"ceilometer-0\" (UID: \"53390b3b-ff7d-4f71-8599-b1deebe3facf\") " pod="openstack/ceilometer-0" Jan 30 13:24:10 crc 
kubenswrapper[5039]: I0130 13:24:10.809734 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53390b3b-ff7d-4f71-8599-b1deebe3facf-scripts\") pod \"ceilometer-0\" (UID: \"53390b3b-ff7d-4f71-8599-b1deebe3facf\") " pod="openstack/ceilometer-0" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.809988 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/53390b3b-ff7d-4f71-8599-b1deebe3facf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"53390b3b-ff7d-4f71-8599-b1deebe3facf\") " pod="openstack/ceilometer-0" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.812344 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53390b3b-ff7d-4f71-8599-b1deebe3facf-config-data\") pod \"ceilometer-0\" (UID: \"53390b3b-ff7d-4f71-8599-b1deebe3facf\") " pod="openstack/ceilometer-0" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.861054 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzwcc\" (UniqueName: \"kubernetes.io/projected/53390b3b-ff7d-4f71-8599-b1deebe3facf-kube-api-access-tzwcc\") pod \"ceilometer-0\" (UID: \"53390b3b-ff7d-4f71-8599-b1deebe3facf\") " pod="openstack/ceilometer-0" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.879092 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-9z97g"] Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.887097 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-q8gx7" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.896129 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/326188c4-7523-49b7-9790-063f3f18988d-config\") pod \"neutron-db-sync-9z97g\" (UID: \"326188c4-7523-49b7-9790-063f3f18988d\") " pod="openstack/neutron-db-sync-9z97g" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.920228 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8h45b\" (UniqueName: \"kubernetes.io/projected/326188c4-7523-49b7-9790-063f3f18988d-kube-api-access-8h45b\") pod \"neutron-db-sync-9z97g\" (UID: \"326188c4-7523-49b7-9790-063f3f18988d\") " pod="openstack/neutron-db-sync-9z97g" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.920380 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/326188c4-7523-49b7-9790-063f3f18988d-combined-ca-bundle\") pod \"neutron-db-sync-9z97g\" (UID: \"326188c4-7523-49b7-9790-063f3f18988d\") " pod="openstack/neutron-db-sync-9z97g" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.930259 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/326188c4-7523-49b7-9790-063f3f18988d-combined-ca-bundle\") pod \"neutron-db-sync-9z97g\" (UID: \"326188c4-7523-49b7-9790-063f3f18988d\") " pod="openstack/neutron-db-sync-9z97g" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.951577 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/326188c4-7523-49b7-9790-063f3f18988d-config\") pod \"neutron-db-sync-9z97g\" (UID: \"326188c4-7523-49b7-9790-063f3f18988d\") " pod="openstack/neutron-db-sync-9z97g" Jan 30 
13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.957583 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-c2z79"] Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.963525 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-c2z79" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.971177 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.971533 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-9npv4" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.976734 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-c2z79"] Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.978578 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8h45b\" (UniqueName: \"kubernetes.io/projected/326188c4-7523-49b7-9790-063f3f18988d-kube-api-access-8h45b\") pod \"neutron-db-sync-9z97g\" (UID: \"326188c4-7523-49b7-9790-063f3f18988d\") " pod="openstack/neutron-db-sync-9z97g" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.997022 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-lcmds"] Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.018205 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-hk5zc"] Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.020104 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-hk5zc" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.032363 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1c26816b-0634-4cb2-9356-3affc33c0698-db-sync-config-data\") pod \"barbican-db-sync-c2z79\" (UID: \"1c26816b-0634-4cb2-9356-3affc33c0698\") " pod="openstack/barbican-db-sync-c2z79" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.032456 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c26816b-0634-4cb2-9356-3affc33c0698-combined-ca-bundle\") pod \"barbican-db-sync-c2z79\" (UID: \"1c26816b-0634-4cb2-9356-3affc33c0698\") " pod="openstack/barbican-db-sync-c2z79" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.032591 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mrkt\" (UniqueName: \"kubernetes.io/projected/1c26816b-0634-4cb2-9356-3affc33c0698-kube-api-access-6mrkt\") pod \"barbican-db-sync-c2z79\" (UID: \"1c26816b-0634-4cb2-9356-3affc33c0698\") " pod="openstack/barbican-db-sync-c2z79" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.034643 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-w2l48"] Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.048817 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-w2l48" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.056701 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-hk5zc"] Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.065154 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.065401 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-swggc" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.065598 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.071270 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.084226 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-9z97g" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.092773 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-w2l48"] Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.139179 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82817f40-cc0c-40f3-b620-0db4e6db8bd6-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-hk5zc\" (UID: \"82817f40-cc0c-40f3-b620-0db4e6db8bd6\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hk5zc" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.139232 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c26816b-0634-4cb2-9356-3affc33c0698-combined-ca-bundle\") pod \"barbican-db-sync-c2z79\" (UID: \"1c26816b-0634-4cb2-9356-3affc33c0698\") " pod="openstack/barbican-db-sync-c2z79" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.139271 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bd23757-95cb-4596-a9ff-f448576ffd8e-logs\") pod \"placement-db-sync-w2l48\" (UID: \"7bd23757-95cb-4596-a9ff-f448576ffd8e\") " pod="openstack/placement-db-sync-w2l48" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.139350 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mrkt\" (UniqueName: \"kubernetes.io/projected/1c26816b-0634-4cb2-9356-3affc33c0698-kube-api-access-6mrkt\") pod \"barbican-db-sync-c2z79\" (UID: \"1c26816b-0634-4cb2-9356-3affc33c0698\") " pod="openstack/barbican-db-sync-c2z79" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.139382 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5787p\" (UniqueName: \"kubernetes.io/projected/7bd23757-95cb-4596-a9ff-f448576ffd8e-kube-api-access-5787p\") pod \"placement-db-sync-w2l48\" (UID: \"7bd23757-95cb-4596-a9ff-f448576ffd8e\") " pod="openstack/placement-db-sync-w2l48" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.139412 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bd23757-95cb-4596-a9ff-f448576ffd8e-combined-ca-bundle\") pod \"placement-db-sync-w2l48\" (UID: 
\"7bd23757-95cb-4596-a9ff-f448576ffd8e\") " pod="openstack/placement-db-sync-w2l48" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.139438 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/82817f40-cc0c-40f3-b620-0db4e6db8bd6-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-hk5zc\" (UID: \"82817f40-cc0c-40f3-b620-0db4e6db8bd6\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hk5zc" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.139467 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brbs9\" (UniqueName: \"kubernetes.io/projected/82817f40-cc0c-40f3-b620-0db4e6db8bd6-kube-api-access-brbs9\") pod \"dnsmasq-dns-56df8fb6b7-hk5zc\" (UID: \"82817f40-cc0c-40f3-b620-0db4e6db8bd6\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hk5zc" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.139492 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82817f40-cc0c-40f3-b620-0db4e6db8bd6-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-hk5zc\" (UID: \"82817f40-cc0c-40f3-b620-0db4e6db8bd6\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hk5zc" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.139516 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bd23757-95cb-4596-a9ff-f448576ffd8e-scripts\") pod \"placement-db-sync-w2l48\" (UID: \"7bd23757-95cb-4596-a9ff-f448576ffd8e\") " pod="openstack/placement-db-sync-w2l48" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.139548 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82817f40-cc0c-40f3-b620-0db4e6db8bd6-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-hk5zc\" (UID: \"82817f40-cc0c-40f3-b620-0db4e6db8bd6\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hk5zc" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.139577 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82817f40-cc0c-40f3-b620-0db4e6db8bd6-config\") pod \"dnsmasq-dns-56df8fb6b7-hk5zc\" (UID: \"82817f40-cc0c-40f3-b620-0db4e6db8bd6\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hk5zc" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.139608 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bd23757-95cb-4596-a9ff-f448576ffd8e-config-data\") pod \"placement-db-sync-w2l48\" (UID: \"7bd23757-95cb-4596-a9ff-f448576ffd8e\") " pod="openstack/placement-db-sync-w2l48" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.139643 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1c26816b-0634-4cb2-9356-3affc33c0698-db-sync-config-data\") pod \"barbican-db-sync-c2z79\" (UID: \"1c26816b-0634-4cb2-9356-3affc33c0698\") " pod="openstack/barbican-db-sync-c2z79" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.140179 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f59b8f679-ppdb4" podUID="7d494262-b4a1-4e79-9443-57d9d91b3171" containerName="dnsmasq-dns" 
containerID="cri-o://19062e589ede21c06cba0dc8a03e90407a0a01bcbe501e067c56b7c859292716" gracePeriod=10 Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.145552 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c26816b-0634-4cb2-9356-3affc33c0698-combined-ca-bundle\") pod \"barbican-db-sync-c2z79\" (UID: \"1c26816b-0634-4cb2-9356-3affc33c0698\") " pod="openstack/barbican-db-sync-c2z79" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.146858 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1c26816b-0634-4cb2-9356-3affc33c0698-db-sync-config-data\") pod \"barbican-db-sync-c2z79\" (UID: \"1c26816b-0634-4cb2-9356-3affc33c0698\") " pod="openstack/barbican-db-sync-c2z79" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.164186 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mrkt\" (UniqueName: \"kubernetes.io/projected/1c26816b-0634-4cb2-9356-3affc33c0698-kube-api-access-6mrkt\") pod \"barbican-db-sync-c2z79\" (UID: \"1c26816b-0634-4cb2-9356-3affc33c0698\") " pod="openstack/barbican-db-sync-c2z79" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.243997 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82817f40-cc0c-40f3-b620-0db4e6db8bd6-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-hk5zc\" (UID: \"82817f40-cc0c-40f3-b620-0db4e6db8bd6\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hk5zc" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.244064 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bd23757-95cb-4596-a9ff-f448576ffd8e-logs\") pod \"placement-db-sync-w2l48\" (UID: \"7bd23757-95cb-4596-a9ff-f448576ffd8e\") " pod="openstack/placement-db-sync-w2l48" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.244265 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5787p\" (UniqueName: \"kubernetes.io/projected/7bd23757-95cb-4596-a9ff-f448576ffd8e-kube-api-access-5787p\") pod \"placement-db-sync-w2l48\" (UID: \"7bd23757-95cb-4596-a9ff-f448576ffd8e\") " pod="openstack/placement-db-sync-w2l48" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.244302 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bd23757-95cb-4596-a9ff-f448576ffd8e-combined-ca-bundle\") pod \"placement-db-sync-w2l48\" (UID: \"7bd23757-95cb-4596-a9ff-f448576ffd8e\") " pod="openstack/placement-db-sync-w2l48" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.244330 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/82817f40-cc0c-40f3-b620-0db4e6db8bd6-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-hk5zc\" (UID: \"82817f40-cc0c-40f3-b620-0db4e6db8bd6\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hk5zc" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.244361 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brbs9\" (UniqueName: \"kubernetes.io/projected/82817f40-cc0c-40f3-b620-0db4e6db8bd6-kube-api-access-brbs9\") pod \"dnsmasq-dns-56df8fb6b7-hk5zc\" (UID: \"82817f40-cc0c-40f3-b620-0db4e6db8bd6\") " 
pod="openstack/dnsmasq-dns-56df8fb6b7-hk5zc" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.244386 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82817f40-cc0c-40f3-b620-0db4e6db8bd6-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-hk5zc\" (UID: \"82817f40-cc0c-40f3-b620-0db4e6db8bd6\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hk5zc" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.244412 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bd23757-95cb-4596-a9ff-f448576ffd8e-scripts\") pod \"placement-db-sync-w2l48\" (UID: \"7bd23757-95cb-4596-a9ff-f448576ffd8e\") " pod="openstack/placement-db-sync-w2l48" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.244442 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82817f40-cc0c-40f3-b620-0db4e6db8bd6-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-hk5zc\" (UID: \"82817f40-cc0c-40f3-b620-0db4e6db8bd6\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hk5zc" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.244469 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82817f40-cc0c-40f3-b620-0db4e6db8bd6-config\") pod \"dnsmasq-dns-56df8fb6b7-hk5zc\" (UID: \"82817f40-cc0c-40f3-b620-0db4e6db8bd6\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hk5zc" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.244491 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bd23757-95cb-4596-a9ff-f448576ffd8e-config-data\") pod \"placement-db-sync-w2l48\" (UID: \"7bd23757-95cb-4596-a9ff-f448576ffd8e\") " pod="openstack/placement-db-sync-w2l48" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.245652 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bd23757-95cb-4596-a9ff-f448576ffd8e-logs\") pod \"placement-db-sync-w2l48\" (UID: \"7bd23757-95cb-4596-a9ff-f448576ffd8e\") " pod="openstack/placement-db-sync-w2l48" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.245958 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/82817f40-cc0c-40f3-b620-0db4e6db8bd6-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-hk5zc\" (UID: \"82817f40-cc0c-40f3-b620-0db4e6db8bd6\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hk5zc" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.246266 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82817f40-cc0c-40f3-b620-0db4e6db8bd6-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-hk5zc\" (UID: \"82817f40-cc0c-40f3-b620-0db4e6db8bd6\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hk5zc" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.248522 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82817f40-cc0c-40f3-b620-0db4e6db8bd6-config\") pod \"dnsmasq-dns-56df8fb6b7-hk5zc\" (UID: \"82817f40-cc0c-40f3-b620-0db4e6db8bd6\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hk5zc" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.249318 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/7bd23757-95cb-4596-a9ff-f448576ffd8e-config-data\") pod \"placement-db-sync-w2l48\" (UID: \"7bd23757-95cb-4596-a9ff-f448576ffd8e\") " pod="openstack/placement-db-sync-w2l48" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.250107 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82817f40-cc0c-40f3-b620-0db4e6db8bd6-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-hk5zc\" (UID: \"82817f40-cc0c-40f3-b620-0db4e6db8bd6\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hk5zc" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.250334 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bd23757-95cb-4596-a9ff-f448576ffd8e-scripts\") pod \"placement-db-sync-w2l48\" (UID: \"7bd23757-95cb-4596-a9ff-f448576ffd8e\") " pod="openstack/placement-db-sync-w2l48" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.250622 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82817f40-cc0c-40f3-b620-0db4e6db8bd6-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-hk5zc\" (UID: \"82817f40-cc0c-40f3-b620-0db4e6db8bd6\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hk5zc" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.266742 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bd23757-95cb-4596-a9ff-f448576ffd8e-combined-ca-bundle\") pod \"placement-db-sync-w2l48\" (UID: \"7bd23757-95cb-4596-a9ff-f448576ffd8e\") " pod="openstack/placement-db-sync-w2l48" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.268293 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-lcmds"] Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.274750 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5787p\" (UniqueName: \"kubernetes.io/projected/7bd23757-95cb-4596-a9ff-f448576ffd8e-kube-api-access-5787p\") pod \"placement-db-sync-w2l48\" (UID: \"7bd23757-95cb-4596-a9ff-f448576ffd8e\") " pod="openstack/placement-db-sync-w2l48" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.276677 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brbs9\" (UniqueName: \"kubernetes.io/projected/82817f40-cc0c-40f3-b620-0db4e6db8bd6-kube-api-access-brbs9\") pod \"dnsmasq-dns-56df8fb6b7-hk5zc\" (UID: \"82817f40-cc0c-40f3-b620-0db4e6db8bd6\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hk5zc" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.302617 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-c2z79" Jan 30 13:24:11 crc kubenswrapper[5039]: W0130 13:24:11.310377 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4cb0a44d_379c_45ab_83bd_5a33b472d52c.slice/crio-3166de9fd9e4e2eb22673059b3b885c18a18fba57886294971eb0c87ef0e401d WatchSource:0}: Error finding container 3166de9fd9e4e2eb22673059b3b885c18a18fba57886294971eb0c87ef0e401d: Status 404 returned error can't find the container with id 3166de9fd9e4e2eb22673059b3b885c18a18fba57886294971eb0c87ef0e401d Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.393929 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-hk5zc" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.427209 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-w2l48" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.511627 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.514309 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.517201 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-zwcjb" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.517306 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.517199 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.517957 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.522400 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.614569 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.624673 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.627464 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.628989 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.648005 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-x8hs4"] Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.663882 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/edf39eff-2de4-43c3-a36a-bc589bd232b6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"edf39eff-2de4-43c3-a36a-bc589bd232b6\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.664054 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edf39eff-2de4-43c3-a36a-bc589bd232b6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"edf39eff-2de4-43c3-a36a-bc589bd232b6\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.664184 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48z26\" (UniqueName: \"kubernetes.io/projected/edf39eff-2de4-43c3-a36a-bc589bd232b6-kube-api-access-48z26\") pod \"glance-default-external-api-0\" (UID: \"edf39eff-2de4-43c3-a36a-bc589bd232b6\") " pod="openstack/glance-default-external-api-0" 
Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.684726 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edf39eff-2de4-43c3-a36a-bc589bd232b6-config-data\") pod \"glance-default-external-api-0\" (UID: \"edf39eff-2de4-43c3-a36a-bc589bd232b6\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.684993 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edf39eff-2de4-43c3-a36a-bc589bd232b6-logs\") pod \"glance-default-external-api-0\" (UID: \"edf39eff-2de4-43c3-a36a-bc589bd232b6\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.685134 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/edf39eff-2de4-43c3-a36a-bc589bd232b6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"edf39eff-2de4-43c3-a36a-bc589bd232b6\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.685210 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"edf39eff-2de4-43c3-a36a-bc589bd232b6\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.685325 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edf39eff-2de4-43c3-a36a-bc589bd232b6-scripts\") pod \"glance-default-external-api-0\" (UID: \"edf39eff-2de4-43c3-a36a-bc589bd232b6\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.686616 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.718335 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-q8gx7"] Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.790357 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/edf39eff-2de4-43c3-a36a-bc589bd232b6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"edf39eff-2de4-43c3-a36a-bc589bd232b6\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.790400 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"edf39eff-2de4-43c3-a36a-bc589bd232b6\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.790439 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v845t\" (UniqueName: \"kubernetes.io/projected/5560786d-b81f-4c0f-af44-7be5778edf14-kube-api-access-v845t\") pod \"glance-default-internal-api-0\" (UID: \"5560786d-b81f-4c0f-af44-7be5778edf14\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.790463 5039 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edf39eff-2de4-43c3-a36a-bc589bd232b6-scripts\") pod \"glance-default-external-api-0\" (UID: \"edf39eff-2de4-43c3-a36a-bc589bd232b6\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.790479 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5560786d-b81f-4c0f-af44-7be5778edf14-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5560786d-b81f-4c0f-af44-7be5778edf14\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.790503 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5560786d-b81f-4c0f-af44-7be5778edf14-logs\") pod \"glance-default-internal-api-0\" (UID: \"5560786d-b81f-4c0f-af44-7be5778edf14\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.790531 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/edf39eff-2de4-43c3-a36a-bc589bd232b6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"edf39eff-2de4-43c3-a36a-bc589bd232b6\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.790563 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"5560786d-b81f-4c0f-af44-7be5778edf14\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.790578 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edf39eff-2de4-43c3-a36a-bc589bd232b6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"edf39eff-2de4-43c3-a36a-bc589bd232b6\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.790609 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5560786d-b81f-4c0f-af44-7be5778edf14-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5560786d-b81f-4c0f-af44-7be5778edf14\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.790627 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48z26\" (UniqueName: \"kubernetes.io/projected/edf39eff-2de4-43c3-a36a-bc589bd232b6-kube-api-access-48z26\") pod \"glance-default-external-api-0\" (UID: \"edf39eff-2de4-43c3-a36a-bc589bd232b6\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.790660 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edf39eff-2de4-43c3-a36a-bc589bd232b6-config-data\") pod \"glance-default-external-api-0\" (UID: \"edf39eff-2de4-43c3-a36a-bc589bd232b6\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.790691 5039 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5560786d-b81f-4c0f-af44-7be5778edf14-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5560786d-b81f-4c0f-af44-7be5778edf14\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.790709 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5560786d-b81f-4c0f-af44-7be5778edf14-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5560786d-b81f-4c0f-af44-7be5778edf14\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.790732 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5560786d-b81f-4c0f-af44-7be5778edf14-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5560786d-b81f-4c0f-af44-7be5778edf14\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.791630 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edf39eff-2de4-43c3-a36a-bc589bd232b6-logs\") pod \"glance-default-external-api-0\" (UID: \"edf39eff-2de4-43c3-a36a-bc589bd232b6\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.793617 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edf39eff-2de4-43c3-a36a-bc589bd232b6-logs\") pod \"glance-default-external-api-0\" (UID: \"edf39eff-2de4-43c3-a36a-bc589bd232b6\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.793951 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/edf39eff-2de4-43c3-a36a-bc589bd232b6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"edf39eff-2de4-43c3-a36a-bc589bd232b6\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.794712 5039 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"edf39eff-2de4-43c3-a36a-bc589bd232b6\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.801325 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edf39eff-2de4-43c3-a36a-bc589bd232b6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"edf39eff-2de4-43c3-a36a-bc589bd232b6\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.802801 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/edf39eff-2de4-43c3-a36a-bc589bd232b6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"edf39eff-2de4-43c3-a36a-bc589bd232b6\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.809405 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/edf39eff-2de4-43c3-a36a-bc589bd232b6-config-data\") pod \"glance-default-external-api-0\" (UID: \"edf39eff-2de4-43c3-a36a-bc589bd232b6\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.809864 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edf39eff-2de4-43c3-a36a-bc589bd232b6-scripts\") pod \"glance-default-external-api-0\" (UID: \"edf39eff-2de4-43c3-a36a-bc589bd232b6\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.869093 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48z26\" (UniqueName: \"kubernetes.io/projected/edf39eff-2de4-43c3-a36a-bc589bd232b6-kube-api-access-48z26\") pod \"glance-default-external-api-0\" (UID: \"edf39eff-2de4-43c3-a36a-bc589bd232b6\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.894929 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5560786d-b81f-4c0f-af44-7be5778edf14-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5560786d-b81f-4c0f-af44-7be5778edf14\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.895274 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5560786d-b81f-4c0f-af44-7be5778edf14-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5560786d-b81f-4c0f-af44-7be5778edf14\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.895296 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5560786d-b81f-4c0f-af44-7be5778edf14-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5560786d-b81f-4c0f-af44-7be5778edf14\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.895380 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v845t\" (UniqueName: \"kubernetes.io/projected/5560786d-b81f-4c0f-af44-7be5778edf14-kube-api-access-v845t\") pod \"glance-default-internal-api-0\" (UID: \"5560786d-b81f-4c0f-af44-7be5778edf14\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.895402 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5560786d-b81f-4c0f-af44-7be5778edf14-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5560786d-b81f-4c0f-af44-7be5778edf14\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.895425 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5560786d-b81f-4c0f-af44-7be5778edf14-logs\") pod \"glance-default-internal-api-0\" (UID: \"5560786d-b81f-4c0f-af44-7be5778edf14\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.895474 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"5560786d-b81f-4c0f-af44-7be5778edf14\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.895514 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5560786d-b81f-4c0f-af44-7be5778edf14-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5560786d-b81f-4c0f-af44-7be5778edf14\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.897665 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5560786d-b81f-4c0f-af44-7be5778edf14-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5560786d-b81f-4c0f-af44-7be5778edf14\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.898118 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"edf39eff-2de4-43c3-a36a-bc589bd232b6\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.898861 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5560786d-b81f-4c0f-af44-7be5778edf14-logs\") pod \"glance-default-internal-api-0\" (UID: \"5560786d-b81f-4c0f-af44-7be5778edf14\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.899242 5039 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"5560786d-b81f-4c0f-af44-7be5778edf14\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.902604 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5560786d-b81f-4c0f-af44-7be5778edf14-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5560786d-b81f-4c0f-af44-7be5778edf14\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.905000 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5560786d-b81f-4c0f-af44-7be5778edf14-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5560786d-b81f-4c0f-af44-7be5778edf14\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.914979 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5560786d-b81f-4c0f-af44-7be5778edf14-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5560786d-b81f-4c0f-af44-7be5778edf14\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.917087 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5560786d-b81f-4c0f-af44-7be5778edf14-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5560786d-b81f-4c0f-af44-7be5778edf14\") " 
pod="openstack/glance-default-internal-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.922990 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v845t\" (UniqueName: \"kubernetes.io/projected/5560786d-b81f-4c0f-af44-7be5778edf14-kube-api-access-v845t\") pod \"glance-default-internal-api-0\" (UID: \"5560786d-b81f-4c0f-af44-7be5778edf14\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.923124 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-9z97g"] Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.942914 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"5560786d-b81f-4c0f-af44-7be5778edf14\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.950570 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.962335 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-ppdb4" Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.089938 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.100159 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7d494262-b4a1-4e79-9443-57d9d91b3171-ovsdbserver-sb\") pod \"7d494262-b4a1-4e79-9443-57d9d91b3171\" (UID: \"7d494262-b4a1-4e79-9443-57d9d91b3171\") " Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.100231 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n95m5\" (UniqueName: \"kubernetes.io/projected/7d494262-b4a1-4e79-9443-57d9d91b3171-kube-api-access-n95m5\") pod \"7d494262-b4a1-4e79-9443-57d9d91b3171\" (UID: \"7d494262-b4a1-4e79-9443-57d9d91b3171\") " Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.100282 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7d494262-b4a1-4e79-9443-57d9d91b3171-ovsdbserver-nb\") pod \"7d494262-b4a1-4e79-9443-57d9d91b3171\" (UID: \"7d494262-b4a1-4e79-9443-57d9d91b3171\") " Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.100704 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d494262-b4a1-4e79-9443-57d9d91b3171-config\") pod \"7d494262-b4a1-4e79-9443-57d9d91b3171\" (UID: \"7d494262-b4a1-4e79-9443-57d9d91b3171\") " Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.100744 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d494262-b4a1-4e79-9443-57d9d91b3171-dns-svc\") pod \"7d494262-b4a1-4e79-9443-57d9d91b3171\" (UID: \"7d494262-b4a1-4e79-9443-57d9d91b3171\") " Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.100794 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7d494262-b4a1-4e79-9443-57d9d91b3171-dns-swift-storage-0\") pod \"7d494262-b4a1-4e79-9443-57d9d91b3171\" 
(UID: \"7d494262-b4a1-4e79-9443-57d9d91b3171\") " Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.116076 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d494262-b4a1-4e79-9443-57d9d91b3171-kube-api-access-n95m5" (OuterVolumeSpecName: "kube-api-access-n95m5") pod "7d494262-b4a1-4e79-9443-57d9d91b3171" (UID: "7d494262-b4a1-4e79-9443-57d9d91b3171"). InnerVolumeSpecName "kube-api-access-n95m5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.121787 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.126248 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-c2z79"] Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.169501 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-q8gx7" event={"ID":"5bba3dea-64f4-479f-b7f1-99c718d7b8af","Type":"ContainerStarted","Data":"ac10d0a92939cbf2112a5e9455510ab7f67e81a544866bcf77db87159b0d7f83"} Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.170938 5039 generic.go:334] "Generic (PLEG): container finished" podID="4cb0a44d-379c-45ab-83bd-5a33b472d52c" containerID="62d370541ede6fe6a0442f8b08438afa70c96b148fa6f02de254a0efce31232e" exitCode=0 Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.170995 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-lcmds" event={"ID":"4cb0a44d-379c-45ab-83bd-5a33b472d52c","Type":"ContainerDied","Data":"62d370541ede6fe6a0442f8b08438afa70c96b148fa6f02de254a0efce31232e"} Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.171098 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-lcmds" event={"ID":"4cb0a44d-379c-45ab-83bd-5a33b472d52c","Type":"ContainerStarted","Data":"3166de9fd9e4e2eb22673059b3b885c18a18fba57886294971eb0c87ef0e401d"} Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.178943 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-c2z79" event={"ID":"1c26816b-0634-4cb2-9356-3affc33c0698","Type":"ContainerStarted","Data":"e89a8eceb4dc62017ca42fad895e0ffde5af5cc2f1cea5fddf9565b078402532"} Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.180747 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53390b3b-ff7d-4f71-8599-b1deebe3facf","Type":"ContainerStarted","Data":"f727d9eb39628ea5d3bfc94a0f16b684d39aab6c4c5b91405196bd7c1c2c942f"} Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.181643 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9z97g" event={"ID":"326188c4-7523-49b7-9790-063f3f18988d","Type":"ContainerStarted","Data":"60e9e87dcbd56ad2a26749df265534c5a637db1cb5f1553c4614e9b195d338b4"} Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.182779 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-x8hs4" event={"ID":"f1d39ae4-14ac-434e-b720-6efdaee26538","Type":"ContainerStarted","Data":"8b126852d3edec7ef0aa53bbaf5f2c922087fa65ad549081b70e0b7b305feab3"} Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.182800 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-x8hs4" 
event={"ID":"f1d39ae4-14ac-434e-b720-6efdaee26538","Type":"ContainerStarted","Data":"fa062da77bfa5f7680fab18eecb537e7e62601826f0afdbe47fc62d2d887e0f7"} Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.196596 5039 generic.go:334] "Generic (PLEG): container finished" podID="7d494262-b4a1-4e79-9443-57d9d91b3171" containerID="19062e589ede21c06cba0dc8a03e90407a0a01bcbe501e067c56b7c859292716" exitCode=0 Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.196649 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-ppdb4" event={"ID":"7d494262-b4a1-4e79-9443-57d9d91b3171","Type":"ContainerDied","Data":"19062e589ede21c06cba0dc8a03e90407a0a01bcbe501e067c56b7c859292716"} Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.196676 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-ppdb4" event={"ID":"7d494262-b4a1-4e79-9443-57d9d91b3171","Type":"ContainerDied","Data":"cf9a8b9818dc972680ad1d508bb1cacb7a7c1b4cfaed0238debb1fc3538e7af2"} Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.196697 5039 scope.go:117] "RemoveContainer" containerID="19062e589ede21c06cba0dc8a03e90407a0a01bcbe501e067c56b7c859292716" Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.196820 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-ppdb4" Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.205208 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-hk5zc"] Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.206640 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n95m5\" (UniqueName: \"kubernetes.io/projected/7d494262-b4a1-4e79-9443-57d9d91b3171-kube-api-access-n95m5\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.233373 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-w2l48"] Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.233906 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-x8hs4" podStartSLOduration=2.2338908379999998 podStartE2EDuration="2.233890838s" podCreationTimestamp="2026-01-30 13:24:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:24:12.210891092 +0000 UTC m=+1216.871572319" watchObservedRunningTime="2026-01-30 13:24:12.233890838 +0000 UTC m=+1216.894572065" Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.270986 5039 scope.go:117] "RemoveContainer" containerID="1f39d2928cf6848744fa9d58653419333d23328b92ddc2d665c53a32b4109d5c" Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.306847 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d494262-b4a1-4e79-9443-57d9d91b3171-config" (OuterVolumeSpecName: "config") pod "7d494262-b4a1-4e79-9443-57d9d91b3171" (UID: "7d494262-b4a1-4e79-9443-57d9d91b3171"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.310360 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d494262-b4a1-4e79-9443-57d9d91b3171-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7d494262-b4a1-4e79-9443-57d9d91b3171" (UID: "7d494262-b4a1-4e79-9443-57d9d91b3171"). 
InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.310731 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d494262-b4a1-4e79-9443-57d9d91b3171-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.318552 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d494262-b4a1-4e79-9443-57d9d91b3171-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7d494262-b4a1-4e79-9443-57d9d91b3171" (UID: "7d494262-b4a1-4e79-9443-57d9d91b3171"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.320973 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d494262-b4a1-4e79-9443-57d9d91b3171-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7d494262-b4a1-4e79-9443-57d9d91b3171" (UID: "7d494262-b4a1-4e79-9443-57d9d91b3171"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.326320 5039 scope.go:117] "RemoveContainer" containerID="19062e589ede21c06cba0dc8a03e90407a0a01bcbe501e067c56b7c859292716" Jan 30 13:24:12 crc kubenswrapper[5039]: E0130 13:24:12.329741 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19062e589ede21c06cba0dc8a03e90407a0a01bcbe501e067c56b7c859292716\": container with ID starting with 19062e589ede21c06cba0dc8a03e90407a0a01bcbe501e067c56b7c859292716 not found: ID does not exist" containerID="19062e589ede21c06cba0dc8a03e90407a0a01bcbe501e067c56b7c859292716" Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.329791 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19062e589ede21c06cba0dc8a03e90407a0a01bcbe501e067c56b7c859292716"} err="failed to get container status \"19062e589ede21c06cba0dc8a03e90407a0a01bcbe501e067c56b7c859292716\": rpc error: code = NotFound desc = could not find container \"19062e589ede21c06cba0dc8a03e90407a0a01bcbe501e067c56b7c859292716\": container with ID starting with 19062e589ede21c06cba0dc8a03e90407a0a01bcbe501e067c56b7c859292716 not found: ID does not exist" Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.329826 5039 scope.go:117] "RemoveContainer" containerID="1f39d2928cf6848744fa9d58653419333d23328b92ddc2d665c53a32b4109d5c" Jan 30 13:24:12 crc kubenswrapper[5039]: E0130 13:24:12.332122 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f39d2928cf6848744fa9d58653419333d23328b92ddc2d665c53a32b4109d5c\": container with ID starting with 1f39d2928cf6848744fa9d58653419333d23328b92ddc2d665c53a32b4109d5c not found: ID does not exist" containerID="1f39d2928cf6848744fa9d58653419333d23328b92ddc2d665c53a32b4109d5c" Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.332425 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f39d2928cf6848744fa9d58653419333d23328b92ddc2d665c53a32b4109d5c"} err="failed to get container status \"1f39d2928cf6848744fa9d58653419333d23328b92ddc2d665c53a32b4109d5c\": rpc error: code = NotFound desc = could not find container 
\"1f39d2928cf6848744fa9d58653419333d23328b92ddc2d665c53a32b4109d5c\": container with ID starting with 1f39d2928cf6848744fa9d58653419333d23328b92ddc2d665c53a32b4109d5c not found: ID does not exist" Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.352503 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d494262-b4a1-4e79-9443-57d9d91b3171-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7d494262-b4a1-4e79-9443-57d9d91b3171" (UID: "7d494262-b4a1-4e79-9443-57d9d91b3171"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.412622 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7d494262-b4a1-4e79-9443-57d9d91b3171-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.412878 5039 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d494262-b4a1-4e79-9443-57d9d91b3171-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.412889 5039 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7d494262-b4a1-4e79-9443-57d9d91b3171-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.412898 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7d494262-b4a1-4e79-9443-57d9d91b3171-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.576569 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-ppdb4"] Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.612217 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-ppdb4"] Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.745037 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-lcmds" Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.857408 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.934761 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cb0a44d-379c-45ab-83bd-5a33b472d52c-config\") pod \"4cb0a44d-379c-45ab-83bd-5a33b472d52c\" (UID: \"4cb0a44d-379c-45ab-83bd-5a33b472d52c\") " Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.934819 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvqcx\" (UniqueName: \"kubernetes.io/projected/4cb0a44d-379c-45ab-83bd-5a33b472d52c-kube-api-access-cvqcx\") pod \"4cb0a44d-379c-45ab-83bd-5a33b472d52c\" (UID: \"4cb0a44d-379c-45ab-83bd-5a33b472d52c\") " Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.934866 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4cb0a44d-379c-45ab-83bd-5a33b472d52c-ovsdbserver-nb\") pod \"4cb0a44d-379c-45ab-83bd-5a33b472d52c\" (UID: \"4cb0a44d-379c-45ab-83bd-5a33b472d52c\") " Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.934922 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4cb0a44d-379c-45ab-83bd-5a33b472d52c-dns-svc\") pod \"4cb0a44d-379c-45ab-83bd-5a33b472d52c\" (UID: \"4cb0a44d-379c-45ab-83bd-5a33b472d52c\") " Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.935047 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4cb0a44d-379c-45ab-83bd-5a33b472d52c-dns-swift-storage-0\") pod \"4cb0a44d-379c-45ab-83bd-5a33b472d52c\" (UID: \"4cb0a44d-379c-45ab-83bd-5a33b472d52c\") " Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.935099 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4cb0a44d-379c-45ab-83bd-5a33b472d52c-ovsdbserver-sb\") pod \"4cb0a44d-379c-45ab-83bd-5a33b472d52c\" (UID: \"4cb0a44d-379c-45ab-83bd-5a33b472d52c\") " Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.960719 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cb0a44d-379c-45ab-83bd-5a33b472d52c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4cb0a44d-379c-45ab-83bd-5a33b472d52c" (UID: "4cb0a44d-379c-45ab-83bd-5a33b472d52c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.963159 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cb0a44d-379c-45ab-83bd-5a33b472d52c-kube-api-access-cvqcx" (OuterVolumeSpecName: "kube-api-access-cvqcx") pod "4cb0a44d-379c-45ab-83bd-5a33b472d52c" (UID: "4cb0a44d-379c-45ab-83bd-5a33b472d52c"). InnerVolumeSpecName "kube-api-access-cvqcx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.969587 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cb0a44d-379c-45ab-83bd-5a33b472d52c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4cb0a44d-379c-45ab-83bd-5a33b472d52c" (UID: "4cb0a44d-379c-45ab-83bd-5a33b472d52c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.975895 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cb0a44d-379c-45ab-83bd-5a33b472d52c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4cb0a44d-379c-45ab-83bd-5a33b472d52c" (UID: "4cb0a44d-379c-45ab-83bd-5a33b472d52c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:24:13 crc kubenswrapper[5039]: I0130 13:24:13.014371 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cb0a44d-379c-45ab-83bd-5a33b472d52c-config" (OuterVolumeSpecName: "config") pod "4cb0a44d-379c-45ab-83bd-5a33b472d52c" (UID: "4cb0a44d-379c-45ab-83bd-5a33b472d52c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:24:13 crc kubenswrapper[5039]: I0130 13:24:13.019481 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cb0a44d-379c-45ab-83bd-5a33b472d52c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4cb0a44d-379c-45ab-83bd-5a33b472d52c" (UID: "4cb0a44d-379c-45ab-83bd-5a33b472d52c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:24:13 crc kubenswrapper[5039]: I0130 13:24:13.045617 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4cb0a44d-379c-45ab-83bd-5a33b472d52c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:13 crc kubenswrapper[5039]: I0130 13:24:13.045647 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cb0a44d-379c-45ab-83bd-5a33b472d52c-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:13 crc kubenswrapper[5039]: I0130 13:24:13.045659 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvqcx\" (UniqueName: \"kubernetes.io/projected/4cb0a44d-379c-45ab-83bd-5a33b472d52c-kube-api-access-cvqcx\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:13 crc kubenswrapper[5039]: I0130 13:24:13.045668 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4cb0a44d-379c-45ab-83bd-5a33b472d52c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:13 crc kubenswrapper[5039]: I0130 13:24:13.045703 5039 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4cb0a44d-379c-45ab-83bd-5a33b472d52c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:13 crc kubenswrapper[5039]: I0130 13:24:13.045712 5039 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4cb0a44d-379c-45ab-83bd-5a33b472d52c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:13 crc kubenswrapper[5039]: I0130 13:24:13.084938 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/glance-default-internal-api-0"] Jan 30 13:24:13 crc kubenswrapper[5039]: I0130 13:24:13.213357 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 13:24:13 crc kubenswrapper[5039]: I0130 13:24:13.255450 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-w2l48" event={"ID":"7bd23757-95cb-4596-a9ff-f448576ffd8e","Type":"ContainerStarted","Data":"047ce54bfc54ea72d71b46054b984913c7926154cde97507bf183e20b0015269"} Jan 30 13:24:13 crc kubenswrapper[5039]: I0130 13:24:13.266117 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9z97g" event={"ID":"326188c4-7523-49b7-9790-063f3f18988d","Type":"ContainerStarted","Data":"199c8cec8c222bfcceace6b75632fb6697662b7f6c6301058c03c2e78d81eeb4"} Jan 30 13:24:13 crc kubenswrapper[5039]: I0130 13:24:13.298343 5039 generic.go:334] "Generic (PLEG): container finished" podID="82817f40-cc0c-40f3-b620-0db4e6db8bd6" containerID="533fafe6060d09ba006c9182d3c9f5153a3c906bca0a32f7b82bb784658a9255" exitCode=0 Jan 30 13:24:13 crc kubenswrapper[5039]: I0130 13:24:13.298432 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-hk5zc" event={"ID":"82817f40-cc0c-40f3-b620-0db4e6db8bd6","Type":"ContainerDied","Data":"533fafe6060d09ba006c9182d3c9f5153a3c906bca0a32f7b82bb784658a9255"} Jan 30 13:24:13 crc kubenswrapper[5039]: I0130 13:24:13.298457 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-hk5zc" event={"ID":"82817f40-cc0c-40f3-b620-0db4e6db8bd6","Type":"ContainerStarted","Data":"1cf9a181eb2c18263402fb13ac1d2e76af7c9fd421e9e961fce515cde88b22df"} Jan 30 13:24:13 crc kubenswrapper[5039]: I0130 13:24:13.309198 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:24:13 crc kubenswrapper[5039]: I0130 13:24:13.333827 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 13:24:13 crc kubenswrapper[5039]: I0130 13:24:13.335076 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-9z97g" podStartSLOduration=3.335059109 podStartE2EDuration="3.335059109s" podCreationTimestamp="2026-01-30 13:24:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:24:13.293608219 +0000 UTC m=+1217.954289446" watchObservedRunningTime="2026-01-30 13:24:13.335059109 +0000 UTC m=+1217.995740356" Jan 30 13:24:13 crc kubenswrapper[5039]: I0130 13:24:13.335136 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-lcmds" event={"ID":"4cb0a44d-379c-45ab-83bd-5a33b472d52c","Type":"ContainerDied","Data":"3166de9fd9e4e2eb22673059b3b885c18a18fba57886294971eb0c87ef0e401d"} Jan 30 13:24:13 crc kubenswrapper[5039]: I0130 13:24:13.335193 5039 scope.go:117] "RemoveContainer" containerID="62d370541ede6fe6a0442f8b08438afa70c96b148fa6f02de254a0efce31232e" Jan 30 13:24:13 crc kubenswrapper[5039]: I0130 13:24:13.335330 5039 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 13:24:13 crc kubenswrapper[5039]: I0130 13:24:13.351289 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"edf39eff-2de4-43c3-a36a-bc589bd232b6","Type":"ContainerStarted","Data":"f3eabd46935257bf1bd7431973597f292ffc42c9f31ea820c46cd46cd443585a"}
Jan 30 13:24:13 crc kubenswrapper[5039]: I0130 13:24:13.361474 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5560786d-b81f-4c0f-af44-7be5778edf14","Type":"ContainerStarted","Data":"780ed4a7b9d23457a9c4f465014afbb4f41ddb2155c54b3ab23b1e2a436875c3"}
Jan 30 13:24:13 crc kubenswrapper[5039]: I0130 13:24:13.470760 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-lcmds"]
Jan 30 13:24:13 crc kubenswrapper[5039]: I0130 13:24:13.488039 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-lcmds"]
Jan 30 13:24:14 crc kubenswrapper[5039]: I0130 13:24:14.105684 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cb0a44d-379c-45ab-83bd-5a33b472d52c" path="/var/lib/kubelet/pods/4cb0a44d-379c-45ab-83bd-5a33b472d52c/volumes"
Jan 30 13:24:14 crc kubenswrapper[5039]: I0130 13:24:14.106408 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d494262-b4a1-4e79-9443-57d9d91b3171" path="/var/lib/kubelet/pods/7d494262-b4a1-4e79-9443-57d9d91b3171/volumes"
Jan 30 13:24:14 crc kubenswrapper[5039]: I0130 13:24:14.388567 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"edf39eff-2de4-43c3-a36a-bc589bd232b6","Type":"ContainerStarted","Data":"11d9deb937213250950721f13e550cd483ddf82b2344089a49a8aa1417d9856d"}
Jan 30 13:24:14 crc kubenswrapper[5039]: I0130 13:24:14.391544 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5560786d-b81f-4c0f-af44-7be5778edf14","Type":"ContainerStarted","Data":"6614b9d793e023e074b2e8886d928fc21b16d174771f0d294cfcdc7bcbc9e936"}
Jan 30 13:24:14 crc kubenswrapper[5039]: I0130 13:24:14.395530 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-hk5zc" event={"ID":"82817f40-cc0c-40f3-b620-0db4e6db8bd6","Type":"ContainerStarted","Data":"2c0c2c9d314f9104b3729e9a4030c23a380582df4ca44aabf55bf70d7cba6fb2"}
Jan 30 13:24:14 crc kubenswrapper[5039]: I0130 13:24:14.395572 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56df8fb6b7-hk5zc"
Jan 30 13:24:15 crc kubenswrapper[5039]: I0130 13:24:15.408442 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"edf39eff-2de4-43c3-a36a-bc589bd232b6","Type":"ContainerStarted","Data":"bf68a6cf896f31d6a1c35e4c817f77bf3fe97b04b4f764959678aa25f1cd8399"}
Jan 30 13:24:15 crc kubenswrapper[5039]: I0130 13:24:15.409056 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="edf39eff-2de4-43c3-a36a-bc589bd232b6" containerName="glance-log" containerID="cri-o://11d9deb937213250950721f13e550cd483ddf82b2344089a49a8aa1417d9856d" gracePeriod=30
Jan 30 13:24:15 crc kubenswrapper[5039]: I0130 13:24:15.409700 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="edf39eff-2de4-43c3-a36a-bc589bd232b6" containerName="glance-httpd" containerID="cri-o://bf68a6cf896f31d6a1c35e4c817f77bf3fe97b04b4f764959678aa25f1cd8399" gracePeriod=30
Jan 30 13:24:15 crc kubenswrapper[5039]: I0130 13:24:15.412612 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5560786d-b81f-4c0f-af44-7be5778edf14" containerName="glance-log" containerID="cri-o://6614b9d793e023e074b2e8886d928fc21b16d174771f0d294cfcdc7bcbc9e936" gracePeriod=30
Jan 30 13:24:15 crc kubenswrapper[5039]: I0130 13:24:15.413499 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5560786d-b81f-4c0f-af44-7be5778edf14" containerName="glance-httpd" containerID="cri-o://67560907a7fcb0f7e7124a57f69990c6969662ad185892ea8a0d9109c5317a60" gracePeriod=30
Jan 30 13:24:15 crc kubenswrapper[5039]: I0130 13:24:15.413588 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5560786d-b81f-4c0f-af44-7be5778edf14","Type":"ContainerStarted","Data":"67560907a7fcb0f7e7124a57f69990c6969662ad185892ea8a0d9109c5317a60"}
Jan 30 13:24:15 crc kubenswrapper[5039]: I0130 13:24:15.448594 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.448578292 podStartE2EDuration="5.448578292s" podCreationTimestamp="2026-01-30 13:24:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:24:15.447432421 +0000 UTC m=+1220.108113658" watchObservedRunningTime="2026-01-30 13:24:15.448578292 +0000 UTC m=+1220.109259529"
Jan 30 13:24:15 crc kubenswrapper[5039]: I0130 13:24:15.460024 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56df8fb6b7-hk5zc" podStartSLOduration=5.459988888 podStartE2EDuration="5.459988888s" podCreationTimestamp="2026-01-30 13:24:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:24:14.43063261 +0000 UTC m=+1219.091313837" watchObservedRunningTime="2026-01-30 13:24:15.459988888 +0000 UTC m=+1220.120670115"
Jan 30 13:24:15 crc kubenswrapper[5039]: I0130 13:24:15.478026 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.47799541 podStartE2EDuration="5.47799541s" podCreationTimestamp="2026-01-30 13:24:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:24:15.468624839 +0000 UTC m=+1220.129306086" watchObservedRunningTime="2026-01-30 13:24:15.47799541 +0000 UTC m=+1220.138676637"
Jan 30 13:24:16 crc kubenswrapper[5039]: I0130 13:24:16.425659 5039 generic.go:334] "Generic (PLEG): container finished" podID="edf39eff-2de4-43c3-a36a-bc589bd232b6" containerID="bf68a6cf896f31d6a1c35e4c817f77bf3fe97b04b4f764959678aa25f1cd8399" exitCode=0
Jan 30 13:24:16 crc kubenswrapper[5039]: I0130 13:24:16.425999 5039 generic.go:334] "Generic (PLEG): container finished" podID="edf39eff-2de4-43c3-a36a-bc589bd232b6" containerID="11d9deb937213250950721f13e550cd483ddf82b2344089a49a8aa1417d9856d" exitCode=143
Jan 30 13:24:16 crc kubenswrapper[5039]: I0130 13:24:16.425720 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"edf39eff-2de4-43c3-a36a-bc589bd232b6","Type":"ContainerDied","Data":"bf68a6cf896f31d6a1c35e4c817f77bf3fe97b04b4f764959678aa25f1cd8399"}
Jan 30 13:24:16 crc kubenswrapper[5039]: I0130 13:24:16.426145 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"edf39eff-2de4-43c3-a36a-bc589bd232b6","Type":"ContainerDied","Data":"11d9deb937213250950721f13e550cd483ddf82b2344089a49a8aa1417d9856d"}
Jan 30 13:24:16 crc kubenswrapper[5039]: I0130 13:24:16.428897 5039 generic.go:334] "Generic (PLEG): container finished" podID="5560786d-b81f-4c0f-af44-7be5778edf14" containerID="67560907a7fcb0f7e7124a57f69990c6969662ad185892ea8a0d9109c5317a60" exitCode=0
Jan 30 13:24:16 crc kubenswrapper[5039]: I0130 13:24:16.428915 5039 generic.go:334] "Generic (PLEG): container finished" podID="5560786d-b81f-4c0f-af44-7be5778edf14" containerID="6614b9d793e023e074b2e8886d928fc21b16d174771f0d294cfcdc7bcbc9e936" exitCode=143
Jan 30 13:24:16 crc kubenswrapper[5039]: I0130 13:24:16.428917 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5560786d-b81f-4c0f-af44-7be5778edf14","Type":"ContainerDied","Data":"67560907a7fcb0f7e7124a57f69990c6969662ad185892ea8a0d9109c5317a60"}
Jan 30 13:24:16 crc kubenswrapper[5039]: I0130 13:24:16.428937 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5560786d-b81f-4c0f-af44-7be5778edf14","Type":"ContainerDied","Data":"6614b9d793e023e074b2e8886d928fc21b16d174771f0d294cfcdc7bcbc9e936"}
Jan 30 13:24:17 crc kubenswrapper[5039]: I0130 13:24:17.439568 5039 generic.go:334] "Generic (PLEG): container finished" podID="f1d39ae4-14ac-434e-b720-6efdaee26538" containerID="8b126852d3edec7ef0aa53bbaf5f2c922087fa65ad549081b70e0b7b305feab3" exitCode=0
Jan 30 13:24:17 crc kubenswrapper[5039]: I0130 13:24:17.439665 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-x8hs4" event={"ID":"f1d39ae4-14ac-434e-b720-6efdaee26538","Type":"ContainerDied","Data":"8b126852d3edec7ef0aa53bbaf5f2c922087fa65ad549081b70e0b7b305feab3"}
Jan 30 13:24:21 crc kubenswrapper[5039]: I0130 13:24:21.396178 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56df8fb6b7-hk5zc"
Jan 30 13:24:21 crc kubenswrapper[5039]: I0130 13:24:21.455577 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-lcwd2"]
Jan 30 13:24:21 crc kubenswrapper[5039]: I0130 13:24:21.455805 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-lcwd2" podUID="46226e88-9d62-4d6f-a009-ed620de5e723" containerName="dnsmasq-dns" containerID="cri-o://d5379299d8b266e726812239f744884f6b993d70d67fd4b875e7a2bc377927ec" gracePeriod=10
Jan 30 13:24:22 crc kubenswrapper[5039]: I0130 13:24:22.501741 5039 generic.go:334] "Generic (PLEG): container finished" podID="46226e88-9d62-4d6f-a009-ed620de5e723" containerID="d5379299d8b266e726812239f744884f6b993d70d67fd4b875e7a2bc377927ec" exitCode=0
Jan 30 13:24:22 crc kubenswrapper[5039]: I0130 13:24:22.501923 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-lcwd2" event={"ID":"46226e88-9d62-4d6f-a009-ed620de5e723","Type":"ContainerDied","Data":"d5379299d8b266e726812239f744884f6b993d70d67fd4b875e7a2bc377927ec"}
Jan 30 13:24:22 crc kubenswrapper[5039]: I0130 13:24:22.778936 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-x8hs4"
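The exit codes in the generic.go records above follow the shell convention for death by signal: 143 is 128 + 15 (SIGTERM), so the glance-log containers were still running when the 30-second grace-period kill delivered SIGTERM, while the containers reporting exitCode=0 shut down cleanly on their own. A small decoder, assuming Python 3:

    import signal

    def describe_exit(code):
        # Runtimes report death-by-signal as 128 + signal number.
        if code > 128:
            return "killed by " + signal.Signals(code - 128).name
        return "exited normally" if code == 0 else "exited with status %d" % code

    print(describe_exit(143))  # killed by SIGTERM
    print(describe_exit(0))    # exited normally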
Jan 30 13:24:22 crc kubenswrapper[5039]: I0130 13:24:22.858946 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f1d39ae4-14ac-434e-b720-6efdaee26538-credential-keys\") pod \"f1d39ae4-14ac-434e-b720-6efdaee26538\" (UID: \"f1d39ae4-14ac-434e-b720-6efdaee26538\") "
Jan 30 13:24:22 crc kubenswrapper[5039]: I0130 13:24:22.859084 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1d39ae4-14ac-434e-b720-6efdaee26538-combined-ca-bundle\") pod \"f1d39ae4-14ac-434e-b720-6efdaee26538\" (UID: \"f1d39ae4-14ac-434e-b720-6efdaee26538\") "
Jan 30 13:24:22 crc kubenswrapper[5039]: I0130 13:24:22.859173 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqt5t\" (UniqueName: \"kubernetes.io/projected/f1d39ae4-14ac-434e-b720-6efdaee26538-kube-api-access-tqt5t\") pod \"f1d39ae4-14ac-434e-b720-6efdaee26538\" (UID: \"f1d39ae4-14ac-434e-b720-6efdaee26538\") "
Jan 30 13:24:22 crc kubenswrapper[5039]: I0130 13:24:22.859222 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f1d39ae4-14ac-434e-b720-6efdaee26538-fernet-keys\") pod \"f1d39ae4-14ac-434e-b720-6efdaee26538\" (UID: \"f1d39ae4-14ac-434e-b720-6efdaee26538\") "
Jan 30 13:24:22 crc kubenswrapper[5039]: I0130 13:24:22.859251 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1d39ae4-14ac-434e-b720-6efdaee26538-config-data\") pod \"f1d39ae4-14ac-434e-b720-6efdaee26538\" (UID: \"f1d39ae4-14ac-434e-b720-6efdaee26538\") "
Jan 30 13:24:22 crc kubenswrapper[5039]: I0130 13:24:22.859279 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1d39ae4-14ac-434e-b720-6efdaee26538-scripts\") pod \"f1d39ae4-14ac-434e-b720-6efdaee26538\" (UID: \"f1d39ae4-14ac-434e-b720-6efdaee26538\") "
Jan 30 13:24:22 crc kubenswrapper[5039]: I0130 13:24:22.866073 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1d39ae4-14ac-434e-b720-6efdaee26538-scripts" (OuterVolumeSpecName: "scripts") pod "f1d39ae4-14ac-434e-b720-6efdaee26538" (UID: "f1d39ae4-14ac-434e-b720-6efdaee26538"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:24:22 crc kubenswrapper[5039]: I0130 13:24:22.868185 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1d39ae4-14ac-434e-b720-6efdaee26538-kube-api-access-tqt5t" (OuterVolumeSpecName: "kube-api-access-tqt5t") pod "f1d39ae4-14ac-434e-b720-6efdaee26538" (UID: "f1d39ae4-14ac-434e-b720-6efdaee26538"). InnerVolumeSpecName "kube-api-access-tqt5t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:24:22 crc kubenswrapper[5039]: I0130 13:24:22.871296 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1d39ae4-14ac-434e-b720-6efdaee26538-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "f1d39ae4-14ac-434e-b720-6efdaee26538" (UID: "f1d39ae4-14ac-434e-b720-6efdaee26538"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:24:22 crc kubenswrapper[5039]: I0130 13:24:22.879320 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1d39ae4-14ac-434e-b720-6efdaee26538-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "f1d39ae4-14ac-434e-b720-6efdaee26538" (UID: "f1d39ae4-14ac-434e-b720-6efdaee26538"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:24:22 crc kubenswrapper[5039]: I0130 13:24:22.891330 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1d39ae4-14ac-434e-b720-6efdaee26538-config-data" (OuterVolumeSpecName: "config-data") pod "f1d39ae4-14ac-434e-b720-6efdaee26538" (UID: "f1d39ae4-14ac-434e-b720-6efdaee26538"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:24:22 crc kubenswrapper[5039]: I0130 13:24:22.895437 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1d39ae4-14ac-434e-b720-6efdaee26538-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f1d39ae4-14ac-434e-b720-6efdaee26538" (UID: "f1d39ae4-14ac-434e-b720-6efdaee26538"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:24:22 crc kubenswrapper[5039]: I0130 13:24:22.961854 5039 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f1d39ae4-14ac-434e-b720-6efdaee26538-credential-keys\") on node \"crc\" DevicePath \"\""
Jan 30 13:24:22 crc kubenswrapper[5039]: I0130 13:24:22.961885 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1d39ae4-14ac-434e-b720-6efdaee26538-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 13:24:22 crc kubenswrapper[5039]: I0130 13:24:22.961895 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqt5t\" (UniqueName: \"kubernetes.io/projected/f1d39ae4-14ac-434e-b720-6efdaee26538-kube-api-access-tqt5t\") on node \"crc\" DevicePath \"\""
Jan 30 13:24:22 crc kubenswrapper[5039]: I0130 13:24:22.961906 5039 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f1d39ae4-14ac-434e-b720-6efdaee26538-fernet-keys\") on node \"crc\" DevicePath \"\""
Jan 30 13:24:22 crc kubenswrapper[5039]: I0130 13:24:22.961915 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1d39ae4-14ac-434e-b720-6efdaee26538-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 13:24:22 crc kubenswrapper[5039]: I0130 13:24:22.961923 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1d39ae4-14ac-434e-b720-6efdaee26538-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 13:24:23 crc kubenswrapper[5039]: I0130 13:24:23.509860 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-x8hs4" event={"ID":"f1d39ae4-14ac-434e-b720-6efdaee26538","Type":"ContainerDied","Data":"fa062da77bfa5f7680fab18eecb537e7e62601826f0afdbe47fc62d2d887e0f7"}
Jan 30 13:24:23 crc kubenswrapper[5039]: I0130 13:24:23.510597 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa062da77bfa5f7680fab18eecb537e7e62601826f0afdbe47fc62d2d887e0f7"
Jan 30 13:24:23 crc kubenswrapper[5039]: I0130 13:24:23.510002 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-x8hs4"
Jan 30 13:24:23 crc kubenswrapper[5039]: E0130 13:24:23.631398 5039 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1d39ae4_14ac_434e_b720_6efdaee26538.slice/crio-fa062da77bfa5f7680fab18eecb537e7e62601826f0afdbe47fc62d2d887e0f7\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1d39ae4_14ac_434e_b720_6efdaee26538.slice\": RecentStats: unable to find data in memory cache]"
Jan 30 13:24:23 crc kubenswrapper[5039]: I0130 13:24:23.862903 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-x8hs4"]
Jan 30 13:24:23 crc kubenswrapper[5039]: I0130 13:24:23.869499 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-x8hs4"]
Jan 30 13:24:23 crc kubenswrapper[5039]: I0130 13:24:23.978649 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-bf848"]
Jan 30 13:24:23 crc kubenswrapper[5039]: E0130 13:24:23.979091 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1d39ae4-14ac-434e-b720-6efdaee26538" containerName="keystone-bootstrap"
Jan 30 13:24:23 crc kubenswrapper[5039]: I0130 13:24:23.979112 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1d39ae4-14ac-434e-b720-6efdaee26538" containerName="keystone-bootstrap"
Jan 30 13:24:23 crc kubenswrapper[5039]: E0130 13:24:23.979128 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cb0a44d-379c-45ab-83bd-5a33b472d52c" containerName="init"
Jan 30 13:24:23 crc kubenswrapper[5039]: I0130 13:24:23.979137 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cb0a44d-379c-45ab-83bd-5a33b472d52c" containerName="init"
Jan 30 13:24:23 crc kubenswrapper[5039]: E0130 13:24:23.979158 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d494262-b4a1-4e79-9443-57d9d91b3171" containerName="init"
Jan 30 13:24:23 crc kubenswrapper[5039]: I0130 13:24:23.979167 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d494262-b4a1-4e79-9443-57d9d91b3171" containerName="init"
Jan 30 13:24:23 crc kubenswrapper[5039]: E0130 13:24:23.979192 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d494262-b4a1-4e79-9443-57d9d91b3171" containerName="dnsmasq-dns"
Jan 30 13:24:23 crc kubenswrapper[5039]: I0130 13:24:23.979201 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d494262-b4a1-4e79-9443-57d9d91b3171" containerName="dnsmasq-dns"
Jan 30 13:24:23 crc kubenswrapper[5039]: I0130 13:24:23.979406 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1d39ae4-14ac-434e-b720-6efdaee26538" containerName="keystone-bootstrap"
Jan 30 13:24:23 crc kubenswrapper[5039]: I0130 13:24:23.979425 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d494262-b4a1-4e79-9443-57d9d91b3171" containerName="dnsmasq-dns"
Jan 30 13:24:23 crc kubenswrapper[5039]: I0130 13:24:23.979437 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cb0a44d-379c-45ab-83bd-5a33b472d52c" containerName="init"
Jan 30 13:24:23 crc kubenswrapper[5039]: I0130 13:24:23.980146 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-bf848"
Jan 30 13:24:23 crc kubenswrapper[5039]: I0130 13:24:23.984164 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 30 13:24:23 crc kubenswrapper[5039]: I0130 13:24:23.985400 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 30 13:24:23 crc kubenswrapper[5039]: I0130 13:24:23.987823 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-fgjcf"
Jan 30 13:24:23 crc kubenswrapper[5039]: I0130 13:24:23.988111 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 30 13:24:23 crc kubenswrapper[5039]: I0130 13:24:23.988314 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Jan 30 13:24:23 crc kubenswrapper[5039]: I0130 13:24:23.989590 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-bf848"]
Jan 30 13:24:24 crc kubenswrapper[5039]: I0130 13:24:24.080892 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8475d70-6235-43b5-9a15-b4a8bfbab19d-scripts\") pod \"keystone-bootstrap-bf848\" (UID: \"d8475d70-6235-43b5-9a15-b4a8bfbab19d\") " pod="openstack/keystone-bootstrap-bf848"
Jan 30 13:24:24 crc kubenswrapper[5039]: I0130 13:24:24.080952 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8475d70-6235-43b5-9a15-b4a8bfbab19d-combined-ca-bundle\") pod \"keystone-bootstrap-bf848\" (UID: \"d8475d70-6235-43b5-9a15-b4a8bfbab19d\") " pod="openstack/keystone-bootstrap-bf848"
Jan 30 13:24:24 crc kubenswrapper[5039]: I0130 13:24:24.081037 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzkgk\" (UniqueName: \"kubernetes.io/projected/d8475d70-6235-43b5-9a15-b4a8bfbab19d-kube-api-access-hzkgk\") pod \"keystone-bootstrap-bf848\" (UID: \"d8475d70-6235-43b5-9a15-b4a8bfbab19d\") " pod="openstack/keystone-bootstrap-bf848"
Jan 30 13:24:24 crc kubenswrapper[5039]: I0130 13:24:24.081072 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d8475d70-6235-43b5-9a15-b4a8bfbab19d-fernet-keys\") pod \"keystone-bootstrap-bf848\" (UID: \"d8475d70-6235-43b5-9a15-b4a8bfbab19d\") " pod="openstack/keystone-bootstrap-bf848"
Jan 30 13:24:24 crc kubenswrapper[5039]: I0130 13:24:24.081108 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8475d70-6235-43b5-9a15-b4a8bfbab19d-config-data\") pod \"keystone-bootstrap-bf848\" (UID: \"d8475d70-6235-43b5-9a15-b4a8bfbab19d\") " pod="openstack/keystone-bootstrap-bf848"
Jan 30 13:24:24 crc kubenswrapper[5039]: I0130 13:24:24.081261 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d8475d70-6235-43b5-9a15-b4a8bfbab19d-credential-keys\") pod \"keystone-bootstrap-bf848\" (UID: \"d8475d70-6235-43b5-9a15-b4a8bfbab19d\") " pod="openstack/keystone-bootstrap-bf848"
Jan 30 13:24:24 crc kubenswrapper[5039]: I0130 13:24:24.103824 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1d39ae4-14ac-434e-b720-6efdaee26538" path="/var/lib/kubelet/pods/f1d39ae4-14ac-434e-b720-6efdaee26538/volumes"
Jan 30 13:24:24 crc kubenswrapper[5039]: I0130 13:24:24.182700 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d8475d70-6235-43b5-9a15-b4a8bfbab19d-credential-keys\") pod \"keystone-bootstrap-bf848\" (UID: \"d8475d70-6235-43b5-9a15-b4a8bfbab19d\") " pod="openstack/keystone-bootstrap-bf848"
Jan 30 13:24:24 crc kubenswrapper[5039]: I0130 13:24:24.182783 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8475d70-6235-43b5-9a15-b4a8bfbab19d-scripts\") pod \"keystone-bootstrap-bf848\" (UID: \"d8475d70-6235-43b5-9a15-b4a8bfbab19d\") " pod="openstack/keystone-bootstrap-bf848"
Jan 30 13:24:24 crc kubenswrapper[5039]: I0130 13:24:24.182802 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8475d70-6235-43b5-9a15-b4a8bfbab19d-combined-ca-bundle\") pod \"keystone-bootstrap-bf848\" (UID: \"d8475d70-6235-43b5-9a15-b4a8bfbab19d\") " pod="openstack/keystone-bootstrap-bf848"
Jan 30 13:24:24 crc kubenswrapper[5039]: I0130 13:24:24.182862 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzkgk\" (UniqueName: \"kubernetes.io/projected/d8475d70-6235-43b5-9a15-b4a8bfbab19d-kube-api-access-hzkgk\") pod \"keystone-bootstrap-bf848\" (UID: \"d8475d70-6235-43b5-9a15-b4a8bfbab19d\") " pod="openstack/keystone-bootstrap-bf848"
Jan 30 13:24:24 crc kubenswrapper[5039]: I0130 13:24:24.182903 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d8475d70-6235-43b5-9a15-b4a8bfbab19d-fernet-keys\") pod \"keystone-bootstrap-bf848\" (UID: \"d8475d70-6235-43b5-9a15-b4a8bfbab19d\") " pod="openstack/keystone-bootstrap-bf848"
Jan 30 13:24:24 crc kubenswrapper[5039]: I0130 13:24:24.182939 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8475d70-6235-43b5-9a15-b4a8bfbab19d-config-data\") pod \"keystone-bootstrap-bf848\" (UID: \"d8475d70-6235-43b5-9a15-b4a8bfbab19d\") " pod="openstack/keystone-bootstrap-bf848"
Jan 30 13:24:24 crc kubenswrapper[5039]: I0130 13:24:24.189837 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8475d70-6235-43b5-9a15-b4a8bfbab19d-scripts\") pod \"keystone-bootstrap-bf848\" (UID: \"d8475d70-6235-43b5-9a15-b4a8bfbab19d\") " pod="openstack/keystone-bootstrap-bf848"
Jan 30 13:24:24 crc kubenswrapper[5039]: I0130 13:24:24.190176 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d8475d70-6235-43b5-9a15-b4a8bfbab19d-credential-keys\") pod \"keystone-bootstrap-bf848\" (UID: \"d8475d70-6235-43b5-9a15-b4a8bfbab19d\") " pod="openstack/keystone-bootstrap-bf848"
Jan 30 13:24:24 crc kubenswrapper[5039]: I0130 13:24:24.195788 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8475d70-6235-43b5-9a15-b4a8bfbab19d-config-data\") pod \"keystone-bootstrap-bf848\" (UID: \"d8475d70-6235-43b5-9a15-b4a8bfbab19d\") " pod="openstack/keystone-bootstrap-bf848"
Jan 30 13:24:24 crc kubenswrapper[5039]: I0130 13:24:24.199238 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8475d70-6235-43b5-9a15-b4a8bfbab19d-combined-ca-bundle\") pod \"keystone-bootstrap-bf848\" (UID: \"d8475d70-6235-43b5-9a15-b4a8bfbab19d\") " pod="openstack/keystone-bootstrap-bf848"
Jan 30 13:24:24 crc kubenswrapper[5039]: I0130 13:24:24.199821 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d8475d70-6235-43b5-9a15-b4a8bfbab19d-fernet-keys\") pod \"keystone-bootstrap-bf848\" (UID: \"d8475d70-6235-43b5-9a15-b4a8bfbab19d\") " pod="openstack/keystone-bootstrap-bf848"
Jan 30 13:24:24 crc kubenswrapper[5039]: I0130 13:24:24.205540 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzkgk\" (UniqueName: \"kubernetes.io/projected/d8475d70-6235-43b5-9a15-b4a8bfbab19d-kube-api-access-hzkgk\") pod \"keystone-bootstrap-bf848\" (UID: \"d8475d70-6235-43b5-9a15-b4a8bfbab19d\") " pod="openstack/keystone-bootstrap-bf848"
Jan 30 13:24:24 crc kubenswrapper[5039]: I0130 13:24:24.304285 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-bf848"
Jan 30 13:24:25 crc kubenswrapper[5039]: I0130 13:24:25.827914 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-lcwd2" podUID="46226e88-9d62-4d6f-a009-ed620de5e723" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.113:5353: connect: connection refused"
Jan 30 13:24:30 crc kubenswrapper[5039]: I0130 13:24:30.827358 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-lcwd2" podUID="46226e88-9d62-4d6f-a009-ed620de5e723" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.113:5353: connect: connection refused"
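The failing readiness probe above is a plain TCP dial to the pod IP and port reported in the prober output (10.217.0.113:5353); connection refused is expected once the dnsmasq-dns container has been killed. A sketch that reproduces the same check, assuming Python 3 and that the address is reachable from where the script runs:

    import socket

    def tcp_ready(host, port, timeout=1.0):
        # Mirror a TCP readiness probe: succeed iff the connect completes.
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError as err:
            print("dial tcp %s:%d: %s" % (host, port, err))
            return False

    tcp_ready("10.217.0.113", 5353)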
Jan 30 13:24:31 crc kubenswrapper[5039]: E0130 13:24:31.113202 5039 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified"
Jan 30 13:24:31 crc kubenswrapper[5039]: E0130 13:24:31.113326 5039 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6mrkt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-c2z79_openstack(1c26816b-0634-4cb2-9356-3affc33c0698): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 30 13:24:31 crc kubenswrapper[5039]: E0130 13:24:31.114478 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-c2z79" podUID="1c26816b-0634-4cb2-9356-3affc33c0698"
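After an ErrImagePull like the one above, the kubelet does not retry immediately: the pod moves to ImagePullBackOff (visible a few records below) and retries on an exponential schedule, conventionally doubling from about 10s up to a 5-minute cap, though the exact defaults depend on the kubelet version. A sketch of that cadence, assuming Python 3, not kubelet's actual implementation:

    def backoff_delays(initial=10.0, factor=2.0, cap=300.0):
        # Exponential backoff with a ceiling, mirroring image-pull retry pacing.
        delay = initial
        while True:
            yield min(delay, cap)
            delay *= factor

    gen = backoff_delays()
    print([next(gen) for _ in range(6)])  # [10.0, 20.0, 40.0, 80.0, 160.0, 300.0]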
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.141474 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.148846 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.311921 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5560786d-b81f-4c0f-af44-7be5778edf14-logs\") pod \"5560786d-b81f-4c0f-af44-7be5778edf14\" (UID: \"5560786d-b81f-4c0f-af44-7be5778edf14\") "
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.311977 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edf39eff-2de4-43c3-a36a-bc589bd232b6-combined-ca-bundle\") pod \"edf39eff-2de4-43c3-a36a-bc589bd232b6\" (UID: \"edf39eff-2de4-43c3-a36a-bc589bd232b6\") "
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.312096 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5560786d-b81f-4c0f-af44-7be5778edf14-internal-tls-certs\") pod \"5560786d-b81f-4c0f-af44-7be5778edf14\" (UID: \"5560786d-b81f-4c0f-af44-7be5778edf14\") "
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.312123 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5560786d-b81f-4c0f-af44-7be5778edf14-combined-ca-bundle\") pod \"5560786d-b81f-4c0f-af44-7be5778edf14\" (UID: \"5560786d-b81f-4c0f-af44-7be5778edf14\") "
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.312176 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"5560786d-b81f-4c0f-af44-7be5778edf14\" (UID: \"5560786d-b81f-4c0f-af44-7be5778edf14\") "
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.312217 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5560786d-b81f-4c0f-af44-7be5778edf14-scripts\") pod \"5560786d-b81f-4c0f-af44-7be5778edf14\" (UID: \"5560786d-b81f-4c0f-af44-7be5778edf14\") "
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.312252 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5560786d-b81f-4c0f-af44-7be5778edf14-config-data\") pod \"5560786d-b81f-4c0f-af44-7be5778edf14\" (UID: \"5560786d-b81f-4c0f-af44-7be5778edf14\") "
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.312275 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/edf39eff-2de4-43c3-a36a-bc589bd232b6-httpd-run\") pod \"edf39eff-2de4-43c3-a36a-bc589bd232b6\" (UID: \"edf39eff-2de4-43c3-a36a-bc589bd232b6\") "
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.312307 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edf39eff-2de4-43c3-a36a-bc589bd232b6-logs\") pod \"edf39eff-2de4-43c3-a36a-bc589bd232b6\" (UID: \"edf39eff-2de4-43c3-a36a-bc589bd232b6\") "
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.312366 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v845t\" (UniqueName: \"kubernetes.io/projected/5560786d-b81f-4c0f-af44-7be5778edf14-kube-api-access-v845t\") pod \"5560786d-b81f-4c0f-af44-7be5778edf14\" (UID: \"5560786d-b81f-4c0f-af44-7be5778edf14\") "
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.312393 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/edf39eff-2de4-43c3-a36a-bc589bd232b6-public-tls-certs\") pod \"edf39eff-2de4-43c3-a36a-bc589bd232b6\" (UID: \"edf39eff-2de4-43c3-a36a-bc589bd232b6\") "
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.312410 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"edf39eff-2de4-43c3-a36a-bc589bd232b6\" (UID: \"edf39eff-2de4-43c3-a36a-bc589bd232b6\") "
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.312442 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5560786d-b81f-4c0f-af44-7be5778edf14-httpd-run\") pod \"5560786d-b81f-4c0f-af44-7be5778edf14\" (UID: \"5560786d-b81f-4c0f-af44-7be5778edf14\") "
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.312484 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48z26\" (UniqueName: \"kubernetes.io/projected/edf39eff-2de4-43c3-a36a-bc589bd232b6-kube-api-access-48z26\") pod \"edf39eff-2de4-43c3-a36a-bc589bd232b6\" (UID: \"edf39eff-2de4-43c3-a36a-bc589bd232b6\") "
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.312503 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edf39eff-2de4-43c3-a36a-bc589bd232b6-scripts\") pod \"edf39eff-2de4-43c3-a36a-bc589bd232b6\" (UID: \"edf39eff-2de4-43c3-a36a-bc589bd232b6\") "
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.312526 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edf39eff-2de4-43c3-a36a-bc589bd232b6-config-data\") pod \"edf39eff-2de4-43c3-a36a-bc589bd232b6\" (UID: \"edf39eff-2de4-43c3-a36a-bc589bd232b6\") "
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.312655 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5560786d-b81f-4c0f-af44-7be5778edf14-logs" (OuterVolumeSpecName: "logs") pod "5560786d-b81f-4c0f-af44-7be5778edf14" (UID: "5560786d-b81f-4c0f-af44-7be5778edf14"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.313097 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edf39eff-2de4-43c3-a36a-bc589bd232b6-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "edf39eff-2de4-43c3-a36a-bc589bd232b6" (UID: "edf39eff-2de4-43c3-a36a-bc589bd232b6"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.313383 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edf39eff-2de4-43c3-a36a-bc589bd232b6-logs" (OuterVolumeSpecName: "logs") pod "edf39eff-2de4-43c3-a36a-bc589bd232b6" (UID: "edf39eff-2de4-43c3-a36a-bc589bd232b6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.313435 5039 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5560786d-b81f-4c0f-af44-7be5778edf14-logs\") on node \"crc\" DevicePath \"\""
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.313475 5039 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/edf39eff-2de4-43c3-a36a-bc589bd232b6-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.314119 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5560786d-b81f-4c0f-af44-7be5778edf14-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5560786d-b81f-4c0f-af44-7be5778edf14" (UID: "5560786d-b81f-4c0f-af44-7be5778edf14"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.319096 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edf39eff-2de4-43c3-a36a-bc589bd232b6-kube-api-access-48z26" (OuterVolumeSpecName: "kube-api-access-48z26") pod "edf39eff-2de4-43c3-a36a-bc589bd232b6" (UID: "edf39eff-2de4-43c3-a36a-bc589bd232b6"). InnerVolumeSpecName "kube-api-access-48z26". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.319223 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "edf39eff-2de4-43c3-a36a-bc589bd232b6" (UID: "edf39eff-2de4-43c3-a36a-bc589bd232b6"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.321531 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edf39eff-2de4-43c3-a36a-bc589bd232b6-scripts" (OuterVolumeSpecName: "scripts") pod "edf39eff-2de4-43c3-a36a-bc589bd232b6" (UID: "edf39eff-2de4-43c3-a36a-bc589bd232b6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.322735 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5560786d-b81f-4c0f-af44-7be5778edf14-kube-api-access-v845t" (OuterVolumeSpecName: "kube-api-access-v845t") pod "5560786d-b81f-4c0f-af44-7be5778edf14" (UID: "5560786d-b81f-4c0f-af44-7be5778edf14"). InnerVolumeSpecName "kube-api-access-v845t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.343325 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5560786d-b81f-4c0f-af44-7be5778edf14-scripts" (OuterVolumeSpecName: "scripts") pod "5560786d-b81f-4c0f-af44-7be5778edf14" (UID: "5560786d-b81f-4c0f-af44-7be5778edf14"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.359652 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5560786d-b81f-4c0f-af44-7be5778edf14-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5560786d-b81f-4c0f-af44-7be5778edf14" (UID: "5560786d-b81f-4c0f-af44-7be5778edf14"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.370470 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "5560786d-b81f-4c0f-af44-7be5778edf14" (UID: "5560786d-b81f-4c0f-af44-7be5778edf14"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.393435 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edf39eff-2de4-43c3-a36a-bc589bd232b6-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "edf39eff-2de4-43c3-a36a-bc589bd232b6" (UID: "edf39eff-2de4-43c3-a36a-bc589bd232b6"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.395298 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edf39eff-2de4-43c3-a36a-bc589bd232b6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "edf39eff-2de4-43c3-a36a-bc589bd232b6" (UID: "edf39eff-2de4-43c3-a36a-bc589bd232b6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.415212 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5560786d-b81f-4c0f-af44-7be5778edf14-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.415262 5039 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" "
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.415272 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5560786d-b81f-4c0f-af44-7be5778edf14-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.415281 5039 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edf39eff-2de4-43c3-a36a-bc589bd232b6-logs\") on node \"crc\" DevicePath \"\""
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.415290 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v845t\" (UniqueName: \"kubernetes.io/projected/5560786d-b81f-4c0f-af44-7be5778edf14-kube-api-access-v845t\") on node \"crc\" DevicePath \"\""
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.415302 5039 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/edf39eff-2de4-43c3-a36a-bc589bd232b6-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.415314 5039 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" "
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.415323 5039 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5560786d-b81f-4c0f-af44-7be5778edf14-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.415331 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48z26\" (UniqueName: \"kubernetes.io/projected/edf39eff-2de4-43c3-a36a-bc589bd232b6-kube-api-access-48z26\") on node \"crc\" DevicePath \"\""
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.415340 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edf39eff-2de4-43c3-a36a-bc589bd232b6-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.415348 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edf39eff-2de4-43c3-a36a-bc589bd232b6-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.416751 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5560786d-b81f-4c0f-af44-7be5778edf14-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5560786d-b81f-4c0f-af44-7be5778edf14" (UID: "5560786d-b81f-4c0f-af44-7be5778edf14"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.425898 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edf39eff-2de4-43c3-a36a-bc589bd232b6-config-data" (OuterVolumeSpecName: "config-data") pod "edf39eff-2de4-43c3-a36a-bc589bd232b6" (UID: "edf39eff-2de4-43c3-a36a-bc589bd232b6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.429092 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5560786d-b81f-4c0f-af44-7be5778edf14-config-data" (OuterVolumeSpecName: "config-data") pod "5560786d-b81f-4c0f-af44-7be5778edf14" (UID: "5560786d-b81f-4c0f-af44-7be5778edf14"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.433325 5039 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc"
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.443416 5039 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc"
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.517171 5039 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\""
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.517211 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edf39eff-2de4-43c3-a36a-bc589bd232b6-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.517222 5039 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5560786d-b81f-4c0f-af44-7be5778edf14-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.517235 5039 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\""
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.517245 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5560786d-b81f-4c0f-af44-7be5778edf14-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.587134 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5560786d-b81f-4c0f-af44-7be5778edf14","Type":"ContainerDied","Data":"780ed4a7b9d23457a9c4f465014afbb4f41ddb2155c54b3ab23b1e2a436875c3"}
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.587576 5039 scope.go:117] "RemoveContainer" containerID="67560907a7fcb0f7e7124a57f69990c6969662ad185892ea8a0d9109c5317a60"
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.587454 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.593778 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.593779 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"edf39eff-2de4-43c3-a36a-bc589bd232b6","Type":"ContainerDied","Data":"f3eabd46935257bf1bd7431973597f292ffc42c9f31ea820c46cd46cd443585a"}
Jan 30 13:24:31 crc kubenswrapper[5039]: E0130 13:24:31.595598 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-c2z79" podUID="1c26816b-0634-4cb2-9356-3affc33c0698"
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.639002 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.647788 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.669971 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.695754 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.705583 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 30 13:24:31 crc kubenswrapper[5039]: E0130 13:24:31.706473 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5560786d-b81f-4c0f-af44-7be5778edf14" containerName="glance-httpd"
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.706494 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="5560786d-b81f-4c0f-af44-7be5778edf14" containerName="glance-httpd"
Jan 30 13:24:31 crc kubenswrapper[5039]: E0130 13:24:31.706520 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5560786d-b81f-4c0f-af44-7be5778edf14" containerName="glance-log"
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.706529 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="5560786d-b81f-4c0f-af44-7be5778edf14" containerName="glance-log"
Jan 30 13:24:31 crc kubenswrapper[5039]: E0130 13:24:31.706540 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edf39eff-2de4-43c3-a36a-bc589bd232b6" containerName="glance-httpd"
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.706547 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="edf39eff-2de4-43c3-a36a-bc589bd232b6" containerName="glance-httpd"
Jan 30 13:24:31 crc kubenswrapper[5039]: E0130 13:24:31.706556 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edf39eff-2de4-43c3-a36a-bc589bd232b6" containerName="glance-log"
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.706562 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="edf39eff-2de4-43c3-a36a-bc589bd232b6" containerName="glance-log"
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.706776 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="edf39eff-2de4-43c3-a36a-bc589bd232b6" containerName="glance-log"
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.706790 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="5560786d-b81f-4c0f-af44-7be5778edf14" containerName="glance-log"
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.706820 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="edf39eff-2de4-43c3-a36a-bc589bd232b6" containerName="glance-httpd"
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.706832 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="5560786d-b81f-4c0f-af44-7be5778edf14" containerName="glance-httpd"
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.709851 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.720258 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.720371 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-zwcjb"
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.720612 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.720629 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.720881 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.721946 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.723876 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.724122 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.729158 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.737503 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.824844 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\") " pod="openstack/glance-default-external-api-0"
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.824915 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\") " pod="openstack/glance-default-external-api-0"
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.824954 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-config-data\") pod \"glance-default-external-api-0\" (UID: \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\") " pod="openstack/glance-default-external-api-0"
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.824989 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-scripts\") pod \"glance-default-external-api-0\" (UID: \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\") " pod="openstack/glance-default-external-api-0"
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.825025 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\") " pod="openstack/glance-default-external-api-0"
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.825175 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-logs\") pod \"glance-default-external-api-0\" (UID: \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\") " pod="openstack/glance-default-external-api-0"
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.825216 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqwhv\" (UniqueName: \"kubernetes.io/projected/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-kube-api-access-gqwhv\") pod \"glance-default-external-api-0\" (UID: \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\") " pod="openstack/glance-default-external-api-0"
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.825347 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\") " pod="openstack/glance-default-external-api-0"
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.927039 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\") " pod="openstack/glance-default-internal-api-0"
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.927102 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\") " pod="openstack/glance-default-external-api-0"
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.927167 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-config-data\") pod \"glance-default-external-api-0\" (UID: \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\") " pod="openstack/glance-default-external-api-0"
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.927225 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v66ct\" (UniqueName: \"kubernetes.io/projected/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-kube-api-access-v66ct\") pod \"glance-default-internal-api-0\" (UID: \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\") " pod="openstack/glance-default-internal-api-0"
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.927290 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-scripts\") pod \"glance-default-external-api-0\" (UID: \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\") " pod="openstack/glance-default-external-api-0"
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.927316 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\") " pod="openstack/glance-default-internal-api-0"
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.927343 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\") " pod="openstack/glance-default-internal-api-0"
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.927367 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\") " pod="openstack/glance-default-external-api-0"
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.927391 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\") " pod="openstack/glance-default-internal-api-0"
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.927426 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\") " pod="openstack/glance-default-internal-api-0"
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.927452 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-logs\") pod \"glance-default-external-api-0\" (UID: \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\") " pod="openstack/glance-default-external-api-0"
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.927473 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqwhv\" (UniqueName: \"kubernetes.io/projected/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-kube-api-access-gqwhv\") pod \"glance-default-external-api-0\" (UID: \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\") " pod="openstack/glance-default-external-api-0"
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.927542 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\") " pod="openstack/glance-default-external-api-0"
Jan
30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.927565 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-logs\") pod \"glance-default-internal-api-0\" (UID: \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.927618 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.927642 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.927652 5039 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.928659 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.928791 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-logs\") pod \"glance-default-external-api-0\" (UID: \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.930914 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-scripts\") pod \"glance-default-external-api-0\" (UID: \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.934193 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.938905 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.943545 5039 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-config-data\") pod \"glance-default-external-api-0\" (UID: \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.948799 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqwhv\" (UniqueName: \"kubernetes.io/projected/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-kube-api-access-gqwhv\") pod \"glance-default-external-api-0\" (UID: \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.962843 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:32 crc kubenswrapper[5039]: I0130 13:24:32.029047 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:32 crc kubenswrapper[5039]: I0130 13:24:32.029099 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:32 crc kubenswrapper[5039]: I0130 13:24:32.029154 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v66ct\" (UniqueName: \"kubernetes.io/projected/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-kube-api-access-v66ct\") pod \"glance-default-internal-api-0\" (UID: \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:32 crc kubenswrapper[5039]: I0130 13:24:32.029177 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:32 crc kubenswrapper[5039]: I0130 13:24:32.029194 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:32 crc kubenswrapper[5039]: I0130 13:24:32.029211 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:32 crc kubenswrapper[5039]: I0130 13:24:32.029237 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:32 crc kubenswrapper[5039]: I0130 13:24:32.029298 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-logs\") pod \"glance-default-internal-api-0\" (UID: \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:32 crc kubenswrapper[5039]: I0130 13:24:32.029793 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-logs\") pod \"glance-default-internal-api-0\" (UID: \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:32 crc kubenswrapper[5039]: I0130 13:24:32.030702 5039 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Jan 30 13:24:32 crc kubenswrapper[5039]: I0130 13:24:32.030910 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:32 crc kubenswrapper[5039]: I0130 13:24:32.033067 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:32 crc kubenswrapper[5039]: I0130 13:24:32.034175 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:32 crc kubenswrapper[5039]: I0130 13:24:32.045588 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 13:24:32 crc kubenswrapper[5039]: I0130 13:24:32.045986 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:32 crc kubenswrapper[5039]: I0130 13:24:32.047058 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:32 crc kubenswrapper[5039]: I0130 13:24:32.067117 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:32 crc kubenswrapper[5039]: I0130 13:24:32.089934 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v66ct\" (UniqueName: \"kubernetes.io/projected/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-kube-api-access-v66ct\") pod \"glance-default-internal-api-0\" (UID: \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:32 crc kubenswrapper[5039]: I0130 13:24:32.108781 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5560786d-b81f-4c0f-af44-7be5778edf14" path="/var/lib/kubelet/pods/5560786d-b81f-4c0f-af44-7be5778edf14/volumes" Jan 30 13:24:32 crc kubenswrapper[5039]: I0130 13:24:32.109774 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edf39eff-2de4-43c3-a36a-bc589bd232b6" path="/var/lib/kubelet/pods/edf39eff-2de4-43c3-a36a-bc589bd232b6/volumes" Jan 30 13:24:32 crc kubenswrapper[5039]: I0130 13:24:32.357299 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 13:24:33 crc kubenswrapper[5039]: I0130 13:24:33.640742 5039 scope.go:117] "RemoveContainer" containerID="6614b9d793e023e074b2e8886d928fc21b16d174771f0d294cfcdc7bcbc9e936" Jan 30 13:24:33 crc kubenswrapper[5039]: I0130 13:24:33.774256 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-lcwd2" Jan 30 13:24:33 crc kubenswrapper[5039]: I0130 13:24:33.804048 5039 scope.go:117] "RemoveContainer" containerID="bf68a6cf896f31d6a1c35e4c817f77bf3fe97b04b4f764959678aa25f1cd8399" Jan 30 13:24:33 crc kubenswrapper[5039]: I0130 13:24:33.861193 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxjgq\" (UniqueName: \"kubernetes.io/projected/46226e88-9d62-4d6f-a009-ed620de5e723-kube-api-access-hxjgq\") pod \"46226e88-9d62-4d6f-a009-ed620de5e723\" (UID: \"46226e88-9d62-4d6f-a009-ed620de5e723\") " Jan 30 13:24:33 crc kubenswrapper[5039]: I0130 13:24:33.861281 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/46226e88-9d62-4d6f-a009-ed620de5e723-ovsdbserver-nb\") pod \"46226e88-9d62-4d6f-a009-ed620de5e723\" (UID: \"46226e88-9d62-4d6f-a009-ed620de5e723\") " Jan 30 13:24:33 crc kubenswrapper[5039]: I0130 13:24:33.861322 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46226e88-9d62-4d6f-a009-ed620de5e723-config\") pod \"46226e88-9d62-4d6f-a009-ed620de5e723\" (UID: \"46226e88-9d62-4d6f-a009-ed620de5e723\") " Jan 30 13:24:33 crc kubenswrapper[5039]: I0130 13:24:33.861351 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46226e88-9d62-4d6f-a009-ed620de5e723-dns-svc\") pod \"46226e88-9d62-4d6f-a009-ed620de5e723\" (UID: \"46226e88-9d62-4d6f-a009-ed620de5e723\") " Jan 30 13:24:33 crc kubenswrapper[5039]: I0130 13:24:33.861614 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/46226e88-9d62-4d6f-a009-ed620de5e723-ovsdbserver-sb\") pod \"46226e88-9d62-4d6f-a009-ed620de5e723\" (UID: \"46226e88-9d62-4d6f-a009-ed620de5e723\") " Jan 30 13:24:33 crc kubenswrapper[5039]: I0130 13:24:33.867692 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46226e88-9d62-4d6f-a009-ed620de5e723-kube-api-access-hxjgq" (OuterVolumeSpecName: "kube-api-access-hxjgq") pod "46226e88-9d62-4d6f-a009-ed620de5e723" (UID: "46226e88-9d62-4d6f-a009-ed620de5e723"). InnerVolumeSpecName "kube-api-access-hxjgq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:24:33 crc kubenswrapper[5039]: I0130 13:24:33.869895 5039 scope.go:117] "RemoveContainer" containerID="11d9deb937213250950721f13e550cd483ddf82b2344089a49a8aa1417d9856d" Jan 30 13:24:33 crc kubenswrapper[5039]: I0130 13:24:33.922819 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46226e88-9d62-4d6f-a009-ed620de5e723-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "46226e88-9d62-4d6f-a009-ed620de5e723" (UID: "46226e88-9d62-4d6f-a009-ed620de5e723"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:24:33 crc kubenswrapper[5039]: I0130 13:24:33.937582 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46226e88-9d62-4d6f-a009-ed620de5e723-config" (OuterVolumeSpecName: "config") pod "46226e88-9d62-4d6f-a009-ed620de5e723" (UID: "46226e88-9d62-4d6f-a009-ed620de5e723"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:24:33 crc kubenswrapper[5039]: I0130 13:24:33.962656 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46226e88-9d62-4d6f-a009-ed620de5e723-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "46226e88-9d62-4d6f-a009-ed620de5e723" (UID: "46226e88-9d62-4d6f-a009-ed620de5e723"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:24:33 crc kubenswrapper[5039]: I0130 13:24:33.964123 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/46226e88-9d62-4d6f-a009-ed620de5e723-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:33 crc kubenswrapper[5039]: I0130 13:24:33.964143 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46226e88-9d62-4d6f-a009-ed620de5e723-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:33 crc kubenswrapper[5039]: I0130 13:24:33.964154 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/46226e88-9d62-4d6f-a009-ed620de5e723-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:33 crc kubenswrapper[5039]: I0130 13:24:33.964164 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxjgq\" (UniqueName: \"kubernetes.io/projected/46226e88-9d62-4d6f-a009-ed620de5e723-kube-api-access-hxjgq\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:33 crc kubenswrapper[5039]: I0130 13:24:33.966075 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46226e88-9d62-4d6f-a009-ed620de5e723-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "46226e88-9d62-4d6f-a009-ed620de5e723" (UID: "46226e88-9d62-4d6f-a009-ed620de5e723"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:24:34 crc kubenswrapper[5039]: I0130 13:24:34.065667 5039 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46226e88-9d62-4d6f-a009-ed620de5e723-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:34 crc kubenswrapper[5039]: I0130 13:24:34.162295 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-bf848"] Jan 30 13:24:34 crc kubenswrapper[5039]: W0130 13:24:34.175115 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8475d70_6235_43b5_9a15_b4a8bfbab19d.slice/crio-15547db2f41d6ec338122de825d2971a212af0271d47a6a38cd85d909c4557c0 WatchSource:0}: Error finding container 15547db2f41d6ec338122de825d2971a212af0271d47a6a38cd85d909c4557c0: Status 404 returned error can't find the container with id 15547db2f41d6ec338122de825d2971a212af0271d47a6a38cd85d909c4557c0 Jan 30 13:24:34 crc kubenswrapper[5039]: W0130 13:24:34.238259 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b7ef7fc_8e87_46f9_8a77_63ac3e662a50.slice/crio-583774c71713461e6cf3e2b4bba904fb37b8c037c208227ca174a789ab514819 WatchSource:0}: Error finding container 583774c71713461e6cf3e2b4bba904fb37b8c037c208227ca174a789ab514819: Status 404 returned error can't find the container with id 583774c71713461e6cf3e2b4bba904fb37b8c037c208227ca174a789ab514819 Jan 30 13:24:34 crc kubenswrapper[5039]: I0130 13:24:34.241474 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 13:24:34 crc kubenswrapper[5039]: W0130 13:24:34.514942 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba7eaf8d_30d2_4f95_b189_c3e7b70f0df8.slice/crio-38208c2fc0c96154b729594827b2e62250f15f02e90c449291e4ddfaba0859f7 WatchSource:0}: Error finding container 38208c2fc0c96154b729594827b2e62250f15f02e90c449291e4ddfaba0859f7: Status 404 returned error can't find the container with id 38208c2fc0c96154b729594827b2e62250f15f02e90c449291e4ddfaba0859f7 Jan 30 13:24:34 crc kubenswrapper[5039]: I0130 13:24:34.515865 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 13:24:34 crc kubenswrapper[5039]: I0130 13:24:34.625666 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50","Type":"ContainerStarted","Data":"583774c71713461e6cf3e2b4bba904fb37b8c037c208227ca174a789ab514819"} Jan 30 13:24:34 crc kubenswrapper[5039]: I0130 13:24:34.627704 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-lcwd2" Jan 30 13:24:34 crc kubenswrapper[5039]: I0130 13:24:34.627708 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-lcwd2" event={"ID":"46226e88-9d62-4d6f-a009-ed620de5e723","Type":"ContainerDied","Data":"e1528364e7751cb7c328a7866fec171c18aae97021ba92ae46488b104ead34c1"} Jan 30 13:24:34 crc kubenswrapper[5039]: I0130 13:24:34.627824 5039 scope.go:117] "RemoveContainer" containerID="d5379299d8b266e726812239f744884f6b993d70d67fd4b875e7a2bc377927ec" Jan 30 13:24:34 crc kubenswrapper[5039]: I0130 13:24:34.629801 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8","Type":"ContainerStarted","Data":"38208c2fc0c96154b729594827b2e62250f15f02e90c449291e4ddfaba0859f7"} Jan 30 13:24:34 crc kubenswrapper[5039]: I0130 13:24:34.631792 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-bf848" event={"ID":"d8475d70-6235-43b5-9a15-b4a8bfbab19d","Type":"ContainerStarted","Data":"15547db2f41d6ec338122de825d2971a212af0271d47a6a38cd85d909c4557c0"} Jan 30 13:24:34 crc kubenswrapper[5039]: I0130 13:24:34.663236 5039 scope.go:117] "RemoveContainer" containerID="c501539c05b552aabde61fba4428dbac8596a94a697c1ab7952dc176af274b0f" Jan 30 13:24:34 crc kubenswrapper[5039]: I0130 13:24:34.682791 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-lcwd2"] Jan 30 13:24:34 crc kubenswrapper[5039]: I0130 13:24:34.690789 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-lcwd2"] Jan 30 13:24:34 crc kubenswrapper[5039]: E0130 13:24:34.962695 5039 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 30 13:24:34 crc kubenswrapper[5039]: E0130 13:24:34.962834 5039 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zqtmh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-q8gx7_openstack(5bba3dea-64f4-479f-b7f1-99c718d7b8af): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 13:24:34 crc kubenswrapper[5039]: E0130 13:24:34.963940 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-q8gx7" podUID="5bba3dea-64f4-479f-b7f1-99c718d7b8af" Jan 30 13:24:35 crc kubenswrapper[5039]: I0130 13:24:35.644822 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53390b3b-ff7d-4f71-8599-b1deebe3facf","Type":"ContainerStarted","Data":"12a01c6dc6a842b1829ed3854209adde60667039bf9946c69457cc43d120fa6c"} Jan 30 13:24:35 crc kubenswrapper[5039]: I0130 13:24:35.665664 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50","Type":"ContainerStarted","Data":"fa0344468db79f2813d45adb6e49a3b4fc94b41cec546eb7b376634605c9910a"} Jan 30 13:24:35 crc kubenswrapper[5039]: I0130 13:24:35.665719 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50","Type":"ContainerStarted","Data":"1b6ddf71d9e166fbfe5229b7bdb0a93aad6a004b8fc813b69a73db6d0199eeb9"} Jan 30 13:24:35 crc kubenswrapper[5039]: I0130 13:24:35.671595 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8","Type":"ContainerStarted","Data":"245f89603e303def55c225cc5f8038a2e1cdc37a5e59020c015eaa2455df9080"} Jan 30 13:24:35 crc kubenswrapper[5039]: I0130 13:24:35.673505 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-bf848" event={"ID":"d8475d70-6235-43b5-9a15-b4a8bfbab19d","Type":"ContainerStarted","Data":"f4c003e8a7f5ebfabd605d99731134e83d8fca36d572bc03c9d6fbb34aae99e7"} Jan 30 13:24:35 crc kubenswrapper[5039]: I0130 13:24:35.692188 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-w2l48" event={"ID":"7bd23757-95cb-4596-a9ff-f448576ffd8e","Type":"ContainerStarted","Data":"bed25391781705ccade32eda966d6187570341d1379ade310903553ea440defb"} Jan 30 13:24:35 crc kubenswrapper[5039]: E0130 13:24:35.703342 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-q8gx7" podUID="5bba3dea-64f4-479f-b7f1-99c718d7b8af" Jan 30 13:24:35 crc kubenswrapper[5039]: I0130 13:24:35.730431 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.730405865 podStartE2EDuration="4.730405865s" podCreationTimestamp="2026-01-30 13:24:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:24:35.69288906 +0000 UTC m=+1240.353570297" watchObservedRunningTime="2026-01-30 13:24:35.730405865 +0000 UTC m=+1240.391087102" Jan 30 13:24:35 crc kubenswrapper[5039]: I0130 13:24:35.758002 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-bf848" podStartSLOduration=12.757981593 podStartE2EDuration="12.757981593s" podCreationTimestamp="2026-01-30 13:24:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:24:35.753495623 +0000 UTC m=+1240.414176850" watchObservedRunningTime="2026-01-30 13:24:35.757981593 +0000 UTC m=+1240.418662820" Jan 30 13:24:35 crc kubenswrapper[5039]: I0130 13:24:35.807464 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-w2l48" podStartSLOduration=7.038898269 podStartE2EDuration="25.807441467s" podCreationTimestamp="2026-01-30 13:24:10 +0000 UTC" firstStartedPulling="2026-01-30 13:24:12.271748831 +0000 UTC m=+1216.932430058" lastFinishedPulling="2026-01-30 13:24:31.040292029 +0000 UTC m=+1235.700973256" observedRunningTime="2026-01-30 13:24:35.7758048 +0000 UTC m=+1240.436486037" watchObservedRunningTime="2026-01-30 13:24:35.807441467 +0000 UTC m=+1240.468122694" Jan 30 13:24:36 crc kubenswrapper[5039]: I0130 13:24:36.107194 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46226e88-9d62-4d6f-a009-ed620de5e723" path="/var/lib/kubelet/pods/46226e88-9d62-4d6f-a009-ed620de5e723/volumes" Jan 30 13:24:36 crc kubenswrapper[5039]: I0130 13:24:36.703302 5039 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8","Type":"ContainerStarted","Data":"dc20e421b08a04879753b418b4d32131c6f7dca953c89ee7f8523689c6edc089"} Jan 30 13:24:36 crc kubenswrapper[5039]: I0130 13:24:36.751799 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.751777119 podStartE2EDuration="5.751777119s" podCreationTimestamp="2026-01-30 13:24:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:24:36.740110436 +0000 UTC m=+1241.400791693" watchObservedRunningTime="2026-01-30 13:24:36.751777119 +0000 UTC m=+1241.412458346" Jan 30 13:24:37 crc kubenswrapper[5039]: I0130 13:24:37.742118 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:24:37 crc kubenswrapper[5039]: I0130 13:24:37.742164 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:24:42 crc kubenswrapper[5039]: I0130 13:24:42.046849 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 13:24:42 crc kubenswrapper[5039]: I0130 13:24:42.047560 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 13:24:42 crc kubenswrapper[5039]: I0130 13:24:42.087798 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 13:24:42 crc kubenswrapper[5039]: I0130 13:24:42.114248 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 13:24:42 crc kubenswrapper[5039]: I0130 13:24:42.358392 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 13:24:42 crc kubenswrapper[5039]: I0130 13:24:42.358455 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 13:24:42 crc kubenswrapper[5039]: I0130 13:24:42.410452 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 13:24:42 crc kubenswrapper[5039]: I0130 13:24:42.412144 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 13:24:42 crc kubenswrapper[5039]: I0130 13:24:42.763768 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53390b3b-ff7d-4f71-8599-b1deebe3facf","Type":"ContainerStarted","Data":"6d4ad33b26e95108fb45b090ba7cbe025c76f54a84e9e566db7be7d95d4cdba9"} Jan 30 13:24:42 crc kubenswrapper[5039]: I0130 13:24:42.764372 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 13:24:42 crc 
kubenswrapper[5039]: I0130 13:24:42.764414 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 13:24:42 crc kubenswrapper[5039]: I0130 13:24:42.764424 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 13:24:42 crc kubenswrapper[5039]: I0130 13:24:42.764433 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 13:24:43 crc kubenswrapper[5039]: I0130 13:24:43.775853 5039 generic.go:334] "Generic (PLEG): container finished" podID="d8475d70-6235-43b5-9a15-b4a8bfbab19d" containerID="f4c003e8a7f5ebfabd605d99731134e83d8fca36d572bc03c9d6fbb34aae99e7" exitCode=0 Jan 30 13:24:43 crc kubenswrapper[5039]: I0130 13:24:43.775916 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-bf848" event={"ID":"d8475d70-6235-43b5-9a15-b4a8bfbab19d","Type":"ContainerDied","Data":"f4c003e8a7f5ebfabd605d99731134e83d8fca36d572bc03c9d6fbb34aae99e7"} Jan 30 13:24:44 crc kubenswrapper[5039]: I0130 13:24:44.687134 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 13:24:44 crc kubenswrapper[5039]: I0130 13:24:44.689094 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 13:24:45 crc kubenswrapper[5039]: I0130 13:24:45.046750 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 13:24:45 crc kubenswrapper[5039]: I0130 13:24:45.047309 5039 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:24:45 crc kubenswrapper[5039]: I0130 13:24:45.048689 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 13:24:46 crc kubenswrapper[5039]: I0130 13:24:46.769989 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-bf848" Jan 30 13:24:46 crc kubenswrapper[5039]: I0130 13:24:46.817481 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-bf848" event={"ID":"d8475d70-6235-43b5-9a15-b4a8bfbab19d","Type":"ContainerDied","Data":"15547db2f41d6ec338122de825d2971a212af0271d47a6a38cd85d909c4557c0"} Jan 30 13:24:46 crc kubenswrapper[5039]: I0130 13:24:46.817527 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15547db2f41d6ec338122de825d2971a212af0271d47a6a38cd85d909c4557c0" Jan 30 13:24:46 crc kubenswrapper[5039]: I0130 13:24:46.817563 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-bf848" Jan 30 13:24:46 crc kubenswrapper[5039]: I0130 13:24:46.860494 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8475d70-6235-43b5-9a15-b4a8bfbab19d-combined-ca-bundle\") pod \"d8475d70-6235-43b5-9a15-b4a8bfbab19d\" (UID: \"d8475d70-6235-43b5-9a15-b4a8bfbab19d\") " Jan 30 13:24:46 crc kubenswrapper[5039]: I0130 13:24:46.860538 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzkgk\" (UniqueName: \"kubernetes.io/projected/d8475d70-6235-43b5-9a15-b4a8bfbab19d-kube-api-access-hzkgk\") pod \"d8475d70-6235-43b5-9a15-b4a8bfbab19d\" (UID: \"d8475d70-6235-43b5-9a15-b4a8bfbab19d\") " Jan 30 13:24:46 crc kubenswrapper[5039]: I0130 13:24:46.860591 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d8475d70-6235-43b5-9a15-b4a8bfbab19d-credential-keys\") pod \"d8475d70-6235-43b5-9a15-b4a8bfbab19d\" (UID: \"d8475d70-6235-43b5-9a15-b4a8bfbab19d\") " Jan 30 13:24:46 crc kubenswrapper[5039]: I0130 13:24:46.860618 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8475d70-6235-43b5-9a15-b4a8bfbab19d-config-data\") pod \"d8475d70-6235-43b5-9a15-b4a8bfbab19d\" (UID: \"d8475d70-6235-43b5-9a15-b4a8bfbab19d\") " Jan 30 13:24:46 crc kubenswrapper[5039]: I0130 13:24:46.860657 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8475d70-6235-43b5-9a15-b4a8bfbab19d-scripts\") pod \"d8475d70-6235-43b5-9a15-b4a8bfbab19d\" (UID: \"d8475d70-6235-43b5-9a15-b4a8bfbab19d\") " Jan 30 13:24:46 crc kubenswrapper[5039]: I0130 13:24:46.860680 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d8475d70-6235-43b5-9a15-b4a8bfbab19d-fernet-keys\") pod \"d8475d70-6235-43b5-9a15-b4a8bfbab19d\" (UID: \"d8475d70-6235-43b5-9a15-b4a8bfbab19d\") " Jan 30 13:24:46 crc kubenswrapper[5039]: I0130 13:24:46.867477 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8475d70-6235-43b5-9a15-b4a8bfbab19d-scripts" (OuterVolumeSpecName: "scripts") pod "d8475d70-6235-43b5-9a15-b4a8bfbab19d" (UID: "d8475d70-6235-43b5-9a15-b4a8bfbab19d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:24:46 crc kubenswrapper[5039]: I0130 13:24:46.870650 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8475d70-6235-43b5-9a15-b4a8bfbab19d-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "d8475d70-6235-43b5-9a15-b4a8bfbab19d" (UID: "d8475d70-6235-43b5-9a15-b4a8bfbab19d"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:24:46 crc kubenswrapper[5039]: I0130 13:24:46.872233 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8475d70-6235-43b5-9a15-b4a8bfbab19d-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d8475d70-6235-43b5-9a15-b4a8bfbab19d" (UID: "d8475d70-6235-43b5-9a15-b4a8bfbab19d"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:24:46 crc kubenswrapper[5039]: I0130 13:24:46.873231 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8475d70-6235-43b5-9a15-b4a8bfbab19d-kube-api-access-hzkgk" (OuterVolumeSpecName: "kube-api-access-hzkgk") pod "d8475d70-6235-43b5-9a15-b4a8bfbab19d" (UID: "d8475d70-6235-43b5-9a15-b4a8bfbab19d"). InnerVolumeSpecName "kube-api-access-hzkgk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:24:46 crc kubenswrapper[5039]: I0130 13:24:46.897245 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8475d70-6235-43b5-9a15-b4a8bfbab19d-config-data" (OuterVolumeSpecName: "config-data") pod "d8475d70-6235-43b5-9a15-b4a8bfbab19d" (UID: "d8475d70-6235-43b5-9a15-b4a8bfbab19d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:24:46 crc kubenswrapper[5039]: I0130 13:24:46.900162 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8475d70-6235-43b5-9a15-b4a8bfbab19d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d8475d70-6235-43b5-9a15-b4a8bfbab19d" (UID: "d8475d70-6235-43b5-9a15-b4a8bfbab19d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:24:46 crc kubenswrapper[5039]: I0130 13:24:46.963273 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8475d70-6235-43b5-9a15-b4a8bfbab19d-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:46 crc kubenswrapper[5039]: I0130 13:24:46.963313 5039 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d8475d70-6235-43b5-9a15-b4a8bfbab19d-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:46 crc kubenswrapper[5039]: I0130 13:24:46.963326 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8475d70-6235-43b5-9a15-b4a8bfbab19d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:46 crc kubenswrapper[5039]: I0130 13:24:46.963340 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzkgk\" (UniqueName: \"kubernetes.io/projected/d8475d70-6235-43b5-9a15-b4a8bfbab19d-kube-api-access-hzkgk\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:46 crc kubenswrapper[5039]: I0130 13:24:46.963353 5039 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d8475d70-6235-43b5-9a15-b4a8bfbab19d-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:46 crc kubenswrapper[5039]: I0130 13:24:46.963365 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8475d70-6235-43b5-9a15-b4a8bfbab19d-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:47 crc kubenswrapper[5039]: I0130 13:24:47.885432 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7467d89c49-kfwss"] Jan 30 13:24:47 crc kubenswrapper[5039]: E0130 13:24:47.886177 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8475d70-6235-43b5-9a15-b4a8bfbab19d" containerName="keystone-bootstrap" Jan 30 13:24:47 crc kubenswrapper[5039]: I0130 13:24:47.886195 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8475d70-6235-43b5-9a15-b4a8bfbab19d" containerName="keystone-bootstrap" Jan 30 
Jan 30 13:24:47 crc kubenswrapper[5039]: E0130 13:24:47.886221 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46226e88-9d62-4d6f-a009-ed620de5e723" containerName="init"
Jan 30 13:24:47 crc kubenswrapper[5039]: I0130 13:24:47.886231 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="46226e88-9d62-4d6f-a009-ed620de5e723" containerName="init"
Jan 30 13:24:47 crc kubenswrapper[5039]: E0130 13:24:47.886265 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46226e88-9d62-4d6f-a009-ed620de5e723" containerName="dnsmasq-dns"
Jan 30 13:24:47 crc kubenswrapper[5039]: I0130 13:24:47.886274 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="46226e88-9d62-4d6f-a009-ed620de5e723" containerName="dnsmasq-dns"
Jan 30 13:24:47 crc kubenswrapper[5039]: I0130 13:24:47.886487 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8475d70-6235-43b5-9a15-b4a8bfbab19d" containerName="keystone-bootstrap"
Jan 30 13:24:47 crc kubenswrapper[5039]: I0130 13:24:47.886519 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="46226e88-9d62-4d6f-a009-ed620de5e723" containerName="dnsmasq-dns"
Jan 30 13:24:47 crc kubenswrapper[5039]: I0130 13:24:47.887161 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7467d89c49-kfwss"
Jan 30 13:24:47 crc kubenswrapper[5039]: I0130 13:24:47.889452 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 30 13:24:47 crc kubenswrapper[5039]: I0130 13:24:47.889965 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 30 13:24:47 crc kubenswrapper[5039]: I0130 13:24:47.890066 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc"
Jan 30 13:24:47 crc kubenswrapper[5039]: I0130 13:24:47.890240 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc"
Jan 30 13:24:47 crc kubenswrapper[5039]: I0130 13:24:47.892936 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-fgjcf"
Jan 30 13:24:47 crc kubenswrapper[5039]: I0130 13:24:47.900690 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7467d89c49-kfwss"]
Jan 30 13:24:47 crc kubenswrapper[5039]: I0130 13:24:47.902873 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 30 13:24:47 crc kubenswrapper[5039]: I0130 13:24:47.979783 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-fernet-keys\") pod \"keystone-7467d89c49-kfwss\" (UID: \"60ae3d16-d381-4891-901f-e2d07d3a7720\") " pod="openstack/keystone-7467d89c49-kfwss"
Jan 30 13:24:47 crc kubenswrapper[5039]: I0130 13:24:47.979863 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-config-data\") pod \"keystone-7467d89c49-kfwss\" (UID: \"60ae3d16-d381-4891-901f-e2d07d3a7720\") " pod="openstack/keystone-7467d89c49-kfwss"
Jan 30 13:24:47 crc kubenswrapper[5039]: I0130 13:24:47.979938 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trv8j\" (UniqueName: \"kubernetes.io/projected/60ae3d16-d381-4891-901f-e2d07d3a7720-kube-api-access-trv8j\") pod \"keystone-7467d89c49-kfwss\" (UID: \"60ae3d16-d381-4891-901f-e2d07d3a7720\") " pod="openstack/keystone-7467d89c49-kfwss"
Jan 30 13:24:47 crc kubenswrapper[5039]: I0130 13:24:47.979980 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-credential-keys\") pod \"keystone-7467d89c49-kfwss\" (UID: \"60ae3d16-d381-4891-901f-e2d07d3a7720\") " pod="openstack/keystone-7467d89c49-kfwss"
Jan 30 13:24:47 crc kubenswrapper[5039]: I0130 13:24:47.980113 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-public-tls-certs\") pod \"keystone-7467d89c49-kfwss\" (UID: \"60ae3d16-d381-4891-901f-e2d07d3a7720\") " pod="openstack/keystone-7467d89c49-kfwss"
Jan 30 13:24:47 crc kubenswrapper[5039]: I0130 13:24:47.980200 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-internal-tls-certs\") pod \"keystone-7467d89c49-kfwss\" (UID: \"60ae3d16-d381-4891-901f-e2d07d3a7720\") " pod="openstack/keystone-7467d89c49-kfwss"
Jan 30 13:24:47 crc kubenswrapper[5039]: I0130 13:24:47.980269 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-scripts\") pod \"keystone-7467d89c49-kfwss\" (UID: \"60ae3d16-d381-4891-901f-e2d07d3a7720\") " pod="openstack/keystone-7467d89c49-kfwss"
Jan 30 13:24:47 crc kubenswrapper[5039]: I0130 13:24:47.980369 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-combined-ca-bundle\") pod \"keystone-7467d89c49-kfwss\" (UID: \"60ae3d16-d381-4891-901f-e2d07d3a7720\") " pod="openstack/keystone-7467d89c49-kfwss"
Jan 30 13:24:48 crc kubenswrapper[5039]: I0130 13:24:48.081787 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-fernet-keys\") pod \"keystone-7467d89c49-kfwss\" (UID: \"60ae3d16-d381-4891-901f-e2d07d3a7720\") " pod="openstack/keystone-7467d89c49-kfwss"
Jan 30 13:24:48 crc kubenswrapper[5039]: I0130 13:24:48.081840 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-config-data\") pod \"keystone-7467d89c49-kfwss\" (UID: \"60ae3d16-d381-4891-901f-e2d07d3a7720\") " pod="openstack/keystone-7467d89c49-kfwss"
Jan 30 13:24:48 crc kubenswrapper[5039]: I0130 13:24:48.081877 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trv8j\" (UniqueName: \"kubernetes.io/projected/60ae3d16-d381-4891-901f-e2d07d3a7720-kube-api-access-trv8j\") pod \"keystone-7467d89c49-kfwss\" (UID: \"60ae3d16-d381-4891-901f-e2d07d3a7720\") " pod="openstack/keystone-7467d89c49-kfwss"
Jan 30 13:24:48 crc kubenswrapper[5039]: I0130 13:24:48.081905 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-credential-keys\") pod \"keystone-7467d89c49-kfwss\" (UID: \"60ae3d16-d381-4891-901f-e2d07d3a7720\") " pod="openstack/keystone-7467d89c49-kfwss"
Jan 30 13:24:48 crc kubenswrapper[5039]: I0130 13:24:48.081939 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-public-tls-certs\") pod \"keystone-7467d89c49-kfwss\" (UID: \"60ae3d16-d381-4891-901f-e2d07d3a7720\") " pod="openstack/keystone-7467d89c49-kfwss"
Jan 30 13:24:48 crc kubenswrapper[5039]: I0130 13:24:48.081973 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-internal-tls-certs\") pod \"keystone-7467d89c49-kfwss\" (UID: \"60ae3d16-d381-4891-901f-e2d07d3a7720\") " pod="openstack/keystone-7467d89c49-kfwss"
Jan 30 13:24:48 crc kubenswrapper[5039]: I0130 13:24:48.082002 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-scripts\") pod \"keystone-7467d89c49-kfwss\" (UID: \"60ae3d16-d381-4891-901f-e2d07d3a7720\") " pod="openstack/keystone-7467d89c49-kfwss"
Jan 30 13:24:48 crc kubenswrapper[5039]: I0130 13:24:48.082221 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-combined-ca-bundle\") pod \"keystone-7467d89c49-kfwss\" (UID: \"60ae3d16-d381-4891-901f-e2d07d3a7720\") " pod="openstack/keystone-7467d89c49-kfwss"
Jan 30 13:24:48 crc kubenswrapper[5039]: I0130 13:24:48.086882 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-internal-tls-certs\") pod \"keystone-7467d89c49-kfwss\" (UID: \"60ae3d16-d381-4891-901f-e2d07d3a7720\") " pod="openstack/keystone-7467d89c49-kfwss"
Jan 30 13:24:48 crc kubenswrapper[5039]: I0130 13:24:48.087091 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-fernet-keys\") pod \"keystone-7467d89c49-kfwss\" (UID: \"60ae3d16-d381-4891-901f-e2d07d3a7720\") " pod="openstack/keystone-7467d89c49-kfwss"
Jan 30 13:24:48 crc kubenswrapper[5039]: I0130 13:24:48.087122 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-config-data\") pod \"keystone-7467d89c49-kfwss\" (UID: \"60ae3d16-d381-4891-901f-e2d07d3a7720\") " pod="openstack/keystone-7467d89c49-kfwss"
Jan 30 13:24:48 crc kubenswrapper[5039]: I0130 13:24:48.087675 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-public-tls-certs\") pod \"keystone-7467d89c49-kfwss\" (UID: \"60ae3d16-d381-4891-901f-e2d07d3a7720\") " pod="openstack/keystone-7467d89c49-kfwss"
Jan 30 13:24:48 crc kubenswrapper[5039]: I0130 13:24:48.087970 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-credential-keys\") pod \"keystone-7467d89c49-kfwss\" (UID: \"60ae3d16-d381-4891-901f-e2d07d3a7720\") " pod="openstack/keystone-7467d89c49-kfwss"
Jan 30 13:24:48 crc kubenswrapper[5039]: I0130 13:24:48.088045 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-scripts\") pod \"keystone-7467d89c49-kfwss\" (UID: \"60ae3d16-d381-4891-901f-e2d07d3a7720\") " pod="openstack/keystone-7467d89c49-kfwss"
Jan 30 13:24:48 crc kubenswrapper[5039]: I0130 13:24:48.088129 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-combined-ca-bundle\") pod \"keystone-7467d89c49-kfwss\" (UID: \"60ae3d16-d381-4891-901f-e2d07d3a7720\") " pod="openstack/keystone-7467d89c49-kfwss"
Jan 30 13:24:48 crc kubenswrapper[5039]: I0130 13:24:48.101578 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trv8j\" (UniqueName: \"kubernetes.io/projected/60ae3d16-d381-4891-901f-e2d07d3a7720-kube-api-access-trv8j\") pod \"keystone-7467d89c49-kfwss\" (UID: \"60ae3d16-d381-4891-901f-e2d07d3a7720\") " pod="openstack/keystone-7467d89c49-kfwss"
Jan 30 13:24:48 crc kubenswrapper[5039]: I0130 13:24:48.211971 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7467d89c49-kfwss"
Jan 30 13:24:52 crc kubenswrapper[5039]: I0130 13:24:52.272627 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7467d89c49-kfwss"]
Jan 30 13:24:52 crc kubenswrapper[5039]: W0130 13:24:52.281440 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod60ae3d16_d381_4891_901f_e2d07d3a7720.slice/crio-fbb9b4d20d7fedd47219ba82f139766c4800073b7004f8e8dc84cc9fb539e651 WatchSource:0}: Error finding container fbb9b4d20d7fedd47219ba82f139766c4800073b7004f8e8dc84cc9fb539e651: Status 404 returned error can't find the container with id fbb9b4d20d7fedd47219ba82f139766c4800073b7004f8e8dc84cc9fb539e651
Jan 30 13:24:52 crc kubenswrapper[5039]: I0130 13:24:52.884185 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7467d89c49-kfwss" event={"ID":"60ae3d16-d381-4891-901f-e2d07d3a7720","Type":"ContainerStarted","Data":"fee4947e039be1852ec1750b666abb15bd505a2ddedb60f212da5d331a111150"}
Jan 30 13:24:52 crc kubenswrapper[5039]: I0130 13:24:52.884902 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7467d89c49-kfwss"
Jan 30 13:24:52 crc kubenswrapper[5039]: I0130 13:24:52.884928 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7467d89c49-kfwss" event={"ID":"60ae3d16-d381-4891-901f-e2d07d3a7720","Type":"ContainerStarted","Data":"fbb9b4d20d7fedd47219ba82f139766c4800073b7004f8e8dc84cc9fb539e651"}
Jan 30 13:24:52 crc kubenswrapper[5039]: I0130 13:24:52.885916 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-q8gx7" event={"ID":"5bba3dea-64f4-479f-b7f1-99c718d7b8af","Type":"ContainerStarted","Data":"e53bb2617673a6a127068d954f3431e0eac803d59302afc36e75b077f55f4629"}
Jan 30 13:24:52 crc kubenswrapper[5039]: I0130 13:24:52.887746 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-c2z79" event={"ID":"1c26816b-0634-4cb2-9356-3affc33c0698","Type":"ContainerStarted","Data":"50c2ec4e9a81ee2cd56dca014a68592f8d98266039e5400268b512200046f9a3"}
Jan 30 13:24:52 crc kubenswrapper[5039]: I0130 13:24:52.889753 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53390b3b-ff7d-4f71-8599-b1deebe3facf","Type":"ContainerStarted","Data":"ed850552779a01c9a61fd4652e4d461d1eeae6398abc889defbeefacc95f8283"}
Jan 30 13:24:52 crc kubenswrapper[5039]: I0130 13:24:52.891379 5039 generic.go:334] "Generic (PLEG): container finished" podID="7bd23757-95cb-4596-a9ff-f448576ffd8e" containerID="bed25391781705ccade32eda966d6187570341d1379ade310903553ea440defb" exitCode=0
Jan 30 13:24:52 crc kubenswrapper[5039]: I0130 13:24:52.891422 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-w2l48" event={"ID":"7bd23757-95cb-4596-a9ff-f448576ffd8e","Type":"ContainerDied","Data":"bed25391781705ccade32eda966d6187570341d1379ade310903553ea440defb"}
Jan 30 13:24:52 crc kubenswrapper[5039]: I0130 13:24:52.915766 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7467d89c49-kfwss" podStartSLOduration=5.915745355 podStartE2EDuration="5.915745355s" podCreationTimestamp="2026-01-30 13:24:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:24:52.907934326 +0000 UTC m=+1257.568615573" watchObservedRunningTime="2026-01-30 13:24:52.915745355 +0000 UTC m=+1257.576426592"
Jan 30 13:24:52 crc kubenswrapper[5039]: I0130 13:24:52.935657 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-q8gx7" podStartSLOduration=2.677065382 podStartE2EDuration="42.935634428s" podCreationTimestamp="2026-01-30 13:24:10 +0000 UTC" firstStartedPulling="2026-01-30 13:24:11.741737071 +0000 UTC m=+1216.402418298" lastFinishedPulling="2026-01-30 13:24:52.000306097 +0000 UTC m=+1256.660987344" observedRunningTime="2026-01-30 13:24:52.931933859 +0000 UTC m=+1257.592615096" watchObservedRunningTime="2026-01-30 13:24:52.935634428 +0000 UTC m=+1257.596315655"
Jan 30 13:24:52 crc kubenswrapper[5039]: I0130 13:24:52.971051 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-c2z79" podStartSLOduration=3.30859969 podStartE2EDuration="42.971033055s" podCreationTimestamp="2026-01-30 13:24:10 +0000 UTC" firstStartedPulling="2026-01-30 13:24:12.126062701 +0000 UTC m=+1216.786743928" lastFinishedPulling="2026-01-30 13:24:51.788496066 +0000 UTC m=+1256.449177293" observedRunningTime="2026-01-30 13:24:52.968805646 +0000 UTC m=+1257.629486873" watchObservedRunningTime="2026-01-30 13:24:52.971033055 +0000 UTC m=+1257.631714272"
Jan 30 13:24:54 crc kubenswrapper[5039]: I0130 13:24:54.277541 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-w2l48"
Jan 30 13:24:54 crc kubenswrapper[5039]: I0130 13:24:54.398801 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bd23757-95cb-4596-a9ff-f448576ffd8e-logs\") pod \"7bd23757-95cb-4596-a9ff-f448576ffd8e\" (UID: \"7bd23757-95cb-4596-a9ff-f448576ffd8e\") "
Jan 30 13:24:54 crc kubenswrapper[5039]: I0130 13:24:54.398923 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5787p\" (UniqueName: \"kubernetes.io/projected/7bd23757-95cb-4596-a9ff-f448576ffd8e-kube-api-access-5787p\") pod \"7bd23757-95cb-4596-a9ff-f448576ffd8e\" (UID: \"7bd23757-95cb-4596-a9ff-f448576ffd8e\") "
Jan 30 13:24:54 crc kubenswrapper[5039]: I0130 13:24:54.398970 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bd23757-95cb-4596-a9ff-f448576ffd8e-scripts\") pod \"7bd23757-95cb-4596-a9ff-f448576ffd8e\" (UID: \"7bd23757-95cb-4596-a9ff-f448576ffd8e\") "
Jan 30 13:24:54 crc kubenswrapper[5039]: I0130 13:24:54.398994 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bd23757-95cb-4596-a9ff-f448576ffd8e-combined-ca-bundle\") pod \"7bd23757-95cb-4596-a9ff-f448576ffd8e\" (UID: \"7bd23757-95cb-4596-a9ff-f448576ffd8e\") "
Jan 30 13:24:54 crc kubenswrapper[5039]: I0130 13:24:54.399067 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bd23757-95cb-4596-a9ff-f448576ffd8e-config-data\") pod \"7bd23757-95cb-4596-a9ff-f448576ffd8e\" (UID: \"7bd23757-95cb-4596-a9ff-f448576ffd8e\") "
Jan 30 13:24:54 crc kubenswrapper[5039]: I0130 13:24:54.399304 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7bd23757-95cb-4596-a9ff-f448576ffd8e-logs" (OuterVolumeSpecName: "logs") pod "7bd23757-95cb-4596-a9ff-f448576ffd8e" (UID: "7bd23757-95cb-4596-a9ff-f448576ffd8e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 13:24:54 crc kubenswrapper[5039]: I0130 13:24:54.399795 5039 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bd23757-95cb-4596-a9ff-f448576ffd8e-logs\") on node \"crc\" DevicePath \"\""
Jan 30 13:24:54 crc kubenswrapper[5039]: I0130 13:24:54.404313 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bd23757-95cb-4596-a9ff-f448576ffd8e-scripts" (OuterVolumeSpecName: "scripts") pod "7bd23757-95cb-4596-a9ff-f448576ffd8e" (UID: "7bd23757-95cb-4596-a9ff-f448576ffd8e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:24:54 crc kubenswrapper[5039]: I0130 13:24:54.404367 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bd23757-95cb-4596-a9ff-f448576ffd8e-kube-api-access-5787p" (OuterVolumeSpecName: "kube-api-access-5787p") pod "7bd23757-95cb-4596-a9ff-f448576ffd8e" (UID: "7bd23757-95cb-4596-a9ff-f448576ffd8e"). InnerVolumeSpecName "kube-api-access-5787p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:24:54 crc kubenswrapper[5039]: I0130 13:24:54.428424 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bd23757-95cb-4596-a9ff-f448576ffd8e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7bd23757-95cb-4596-a9ff-f448576ffd8e" (UID: "7bd23757-95cb-4596-a9ff-f448576ffd8e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:24:54 crc kubenswrapper[5039]: I0130 13:24:54.429521 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bd23757-95cb-4596-a9ff-f448576ffd8e-config-data" (OuterVolumeSpecName: "config-data") pod "7bd23757-95cb-4596-a9ff-f448576ffd8e" (UID: "7bd23757-95cb-4596-a9ff-f448576ffd8e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:24:54 crc kubenswrapper[5039]: I0130 13:24:54.501042 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bd23757-95cb-4596-a9ff-f448576ffd8e-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 13:24:54 crc kubenswrapper[5039]: I0130 13:24:54.501081 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5787p\" (UniqueName: \"kubernetes.io/projected/7bd23757-95cb-4596-a9ff-f448576ffd8e-kube-api-access-5787p\") on node \"crc\" DevicePath \"\""
Jan 30 13:24:54 crc kubenswrapper[5039]: I0130 13:24:54.501091 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bd23757-95cb-4596-a9ff-f448576ffd8e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 13:24:54 crc kubenswrapper[5039]: I0130 13:24:54.501100 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bd23757-95cb-4596-a9ff-f448576ffd8e-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 13:24:54 crc kubenswrapper[5039]: I0130 13:24:54.917858 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-w2l48" event={"ID":"7bd23757-95cb-4596-a9ff-f448576ffd8e","Type":"ContainerDied","Data":"047ce54bfc54ea72d71b46054b984913c7926154cde97507bf183e20b0015269"}
Jan 30 13:24:54 crc kubenswrapper[5039]: I0130 13:24:54.918348 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="047ce54bfc54ea72d71b46054b984913c7926154cde97507bf183e20b0015269"
Jan 30 13:24:54 crc kubenswrapper[5039]: I0130 13:24:54.918441 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-w2l48"
Jan 30 13:24:55 crc kubenswrapper[5039]: I0130 13:24:55.129171 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-68f47564b6-tbx7d"]
Jan 30 13:24:55 crc kubenswrapper[5039]: E0130 13:24:55.129494 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bd23757-95cb-4596-a9ff-f448576ffd8e" containerName="placement-db-sync"
Jan 30 13:24:55 crc kubenswrapper[5039]: I0130 13:24:55.129510 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bd23757-95cb-4596-a9ff-f448576ffd8e" containerName="placement-db-sync"
Jan 30 13:24:55 crc kubenswrapper[5039]: I0130 13:24:55.129687 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bd23757-95cb-4596-a9ff-f448576ffd8e" containerName="placement-db-sync"
Jan 30 13:24:55 crc kubenswrapper[5039]: I0130 13:24:55.130505 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-68f47564b6-tbx7d"
Jan 30 13:24:55 crc kubenswrapper[5039]: I0130 13:24:55.135960 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Jan 30 13:24:55 crc kubenswrapper[5039]: I0130 13:24:55.136089 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-swggc"
Jan 30 13:24:55 crc kubenswrapper[5039]: I0130 13:24:55.136217 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Jan 30 13:24:55 crc kubenswrapper[5039]: I0130 13:24:55.136230 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Jan 30 13:24:55 crc kubenswrapper[5039]: I0130 13:24:55.136860 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc"
Jan 30 13:24:55 crc kubenswrapper[5039]: I0130 13:24:55.185546 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-68f47564b6-tbx7d"]
Jan 30 13:24:55 crc kubenswrapper[5039]: I0130 13:24:55.220101 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/498ddd50-96b8-491c-92e9-8c98bc7fa123-logs\") pod \"placement-68f47564b6-tbx7d\" (UID: \"498ddd50-96b8-491c-92e9-8c98bc7fa123\") " pod="openstack/placement-68f47564b6-tbx7d"
Jan 30 13:24:55 crc kubenswrapper[5039]: I0130 13:24:55.220929 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/498ddd50-96b8-491c-92e9-8c98bc7fa123-scripts\") pod \"placement-68f47564b6-tbx7d\" (UID: \"498ddd50-96b8-491c-92e9-8c98bc7fa123\") " pod="openstack/placement-68f47564b6-tbx7d"
Jan 30 13:24:55 crc kubenswrapper[5039]: I0130 13:24:55.220998 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/498ddd50-96b8-491c-92e9-8c98bc7fa123-internal-tls-certs\") pod \"placement-68f47564b6-tbx7d\" (UID: \"498ddd50-96b8-491c-92e9-8c98bc7fa123\") " pod="openstack/placement-68f47564b6-tbx7d"
Jan 30 13:24:55 crc kubenswrapper[5039]: I0130 13:24:55.221105 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/498ddd50-96b8-491c-92e9-8c98bc7fa123-public-tls-certs\") pod \"placement-68f47564b6-tbx7d\" (UID: \"498ddd50-96b8-491c-92e9-8c98bc7fa123\") " pod="openstack/placement-68f47564b6-tbx7d"
Jan 30 13:24:55 crc kubenswrapper[5039]: I0130 13:24:55.221150 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrrdv\" (UniqueName: \"kubernetes.io/projected/498ddd50-96b8-491c-92e9-8c98bc7fa123-kube-api-access-qrrdv\") pod \"placement-68f47564b6-tbx7d\" (UID: \"498ddd50-96b8-491c-92e9-8c98bc7fa123\") " pod="openstack/placement-68f47564b6-tbx7d"
Jan 30 13:24:55 crc kubenswrapper[5039]: I0130 13:24:55.221210 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/498ddd50-96b8-491c-92e9-8c98bc7fa123-combined-ca-bundle\") pod \"placement-68f47564b6-tbx7d\" (UID: \"498ddd50-96b8-491c-92e9-8c98bc7fa123\") " pod="openstack/placement-68f47564b6-tbx7d"
Jan 30 13:24:55 crc kubenswrapper[5039]: I0130 13:24:55.221241 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/498ddd50-96b8-491c-92e9-8c98bc7fa123-config-data\") pod \"placement-68f47564b6-tbx7d\" (UID: \"498ddd50-96b8-491c-92e9-8c98bc7fa123\") " pod="openstack/placement-68f47564b6-tbx7d"
Jan 30 13:24:55 crc kubenswrapper[5039]: I0130 13:24:55.323240 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/498ddd50-96b8-491c-92e9-8c98bc7fa123-logs\") pod \"placement-68f47564b6-tbx7d\" (UID: \"498ddd50-96b8-491c-92e9-8c98bc7fa123\") " pod="openstack/placement-68f47564b6-tbx7d"
Jan 30 13:24:55 crc kubenswrapper[5039]: I0130 13:24:55.323322 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/498ddd50-96b8-491c-92e9-8c98bc7fa123-scripts\") pod \"placement-68f47564b6-tbx7d\" (UID: \"498ddd50-96b8-491c-92e9-8c98bc7fa123\") " pod="openstack/placement-68f47564b6-tbx7d"
Jan 30 13:24:55 crc kubenswrapper[5039]: I0130 13:24:55.323349 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/498ddd50-96b8-491c-92e9-8c98bc7fa123-internal-tls-certs\") pod \"placement-68f47564b6-tbx7d\" (UID: \"498ddd50-96b8-491c-92e9-8c98bc7fa123\") " pod="openstack/placement-68f47564b6-tbx7d"
Jan 30 13:24:55 crc kubenswrapper[5039]: I0130 13:24:55.323404 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/498ddd50-96b8-491c-92e9-8c98bc7fa123-public-tls-certs\") pod \"placement-68f47564b6-tbx7d\" (UID: \"498ddd50-96b8-491c-92e9-8c98bc7fa123\") " pod="openstack/placement-68f47564b6-tbx7d"
Jan 30 13:24:55 crc kubenswrapper[5039]: I0130 13:24:55.323429 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrrdv\" (UniqueName: \"kubernetes.io/projected/498ddd50-96b8-491c-92e9-8c98bc7fa123-kube-api-access-qrrdv\") pod \"placement-68f47564b6-tbx7d\" (UID: \"498ddd50-96b8-491c-92e9-8c98bc7fa123\") " pod="openstack/placement-68f47564b6-tbx7d"
Jan 30 13:24:55 crc kubenswrapper[5039]: I0130 13:24:55.323461 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/498ddd50-96b8-491c-92e9-8c98bc7fa123-combined-ca-bundle\") pod \"placement-68f47564b6-tbx7d\" (UID: \"498ddd50-96b8-491c-92e9-8c98bc7fa123\") " pod="openstack/placement-68f47564b6-tbx7d"
Jan 30 13:24:55 crc kubenswrapper[5039]: I0130 13:24:55.323486 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/498ddd50-96b8-491c-92e9-8c98bc7fa123-config-data\") pod \"placement-68f47564b6-tbx7d\" (UID: \"498ddd50-96b8-491c-92e9-8c98bc7fa123\") " pod="openstack/placement-68f47564b6-tbx7d"
Jan 30 13:24:55 crc kubenswrapper[5039]: I0130 13:24:55.323770 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/498ddd50-96b8-491c-92e9-8c98bc7fa123-logs\") pod \"placement-68f47564b6-tbx7d\" (UID: \"498ddd50-96b8-491c-92e9-8c98bc7fa123\") " pod="openstack/placement-68f47564b6-tbx7d"
Jan 30 13:24:55 crc kubenswrapper[5039]: I0130 13:24:55.327757 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/498ddd50-96b8-491c-92e9-8c98bc7fa123-public-tls-certs\") pod \"placement-68f47564b6-tbx7d\" (UID: \"498ddd50-96b8-491c-92e9-8c98bc7fa123\") " pod="openstack/placement-68f47564b6-tbx7d"
Jan 30 13:24:55 crc kubenswrapper[5039]: I0130 13:24:55.328189 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/498ddd50-96b8-491c-92e9-8c98bc7fa123-config-data\") pod \"placement-68f47564b6-tbx7d\" (UID: \"498ddd50-96b8-491c-92e9-8c98bc7fa123\") " pod="openstack/placement-68f47564b6-tbx7d"
Jan 30 13:24:55 crc kubenswrapper[5039]: I0130 13:24:55.328261 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/498ddd50-96b8-491c-92e9-8c98bc7fa123-combined-ca-bundle\") pod \"placement-68f47564b6-tbx7d\" (UID: \"498ddd50-96b8-491c-92e9-8c98bc7fa123\") " pod="openstack/placement-68f47564b6-tbx7d"
Jan 30 13:24:55 crc kubenswrapper[5039]: I0130 13:24:55.329110 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/498ddd50-96b8-491c-92e9-8c98bc7fa123-internal-tls-certs\") pod \"placement-68f47564b6-tbx7d\" (UID: \"498ddd50-96b8-491c-92e9-8c98bc7fa123\") " pod="openstack/placement-68f47564b6-tbx7d"
Jan 30 13:24:55 crc kubenswrapper[5039]: I0130 13:24:55.329615 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/498ddd50-96b8-491c-92e9-8c98bc7fa123-scripts\") pod \"placement-68f47564b6-tbx7d\" (UID: \"498ddd50-96b8-491c-92e9-8c98bc7fa123\") " pod="openstack/placement-68f47564b6-tbx7d"
Jan 30 13:24:55 crc kubenswrapper[5039]: I0130 13:24:55.343795 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrrdv\" (UniqueName: \"kubernetes.io/projected/498ddd50-96b8-491c-92e9-8c98bc7fa123-kube-api-access-qrrdv\") pod \"placement-68f47564b6-tbx7d\" (UID: \"498ddd50-96b8-491c-92e9-8c98bc7fa123\") " pod="openstack/placement-68f47564b6-tbx7d"
Jan 30 13:24:55 crc kubenswrapper[5039]: I0130 13:24:55.490848 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-68f47564b6-tbx7d"
Jan 30 13:24:55 crc kubenswrapper[5039]: I0130 13:24:55.946210 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-68f47564b6-tbx7d"]
Jan 30 13:24:55 crc kubenswrapper[5039]: W0130 13:24:55.960757 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod498ddd50_96b8_491c_92e9_8c98bc7fa123.slice/crio-10a53e3c7d44e9145b49dbc3ca985fb0989041dae48cbf9bcfe1e23dd7c1fd43 WatchSource:0}: Error finding container 10a53e3c7d44e9145b49dbc3ca985fb0989041dae48cbf9bcfe1e23dd7c1fd43: Status 404 returned error can't find the container with id 10a53e3c7d44e9145b49dbc3ca985fb0989041dae48cbf9bcfe1e23dd7c1fd43
Jan 30 13:24:56 crc kubenswrapper[5039]: I0130 13:24:56.943742 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-68f47564b6-tbx7d" event={"ID":"498ddd50-96b8-491c-92e9-8c98bc7fa123","Type":"ContainerStarted","Data":"1da688d2a2bc28ab6de19b1657530aefb8ba12959416725f5817672407aec6f7"}
Jan 30 13:24:56 crc kubenswrapper[5039]: I0130 13:24:56.944728 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-68f47564b6-tbx7d" event={"ID":"498ddd50-96b8-491c-92e9-8c98bc7fa123","Type":"ContainerStarted","Data":"704e147f78336eb631ac3800ed217ffcbe20db123d823ef0e1719ac12126d745"}
Jan 30 13:24:56 crc kubenswrapper[5039]: I0130 13:24:56.944744 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-68f47564b6-tbx7d" event={"ID":"498ddd50-96b8-491c-92e9-8c98bc7fa123","Type":"ContainerStarted","Data":"10a53e3c7d44e9145b49dbc3ca985fb0989041dae48cbf9bcfe1e23dd7c1fd43"}
Jan 30 13:24:56 crc kubenswrapper[5039]: I0130 13:24:56.944766 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-68f47564b6-tbx7d"
Jan 30 13:24:56 crc kubenswrapper[5039]: I0130 13:24:56.971176 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-68f47564b6-tbx7d" podStartSLOduration=1.971157028 podStartE2EDuration="1.971157028s" podCreationTimestamp="2026-01-30 13:24:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:24:56.962532427 +0000 UTC m=+1261.623213664" watchObservedRunningTime="2026-01-30 13:24:56.971157028 +0000 UTC m=+1261.631838255"
Jan 30 13:24:57 crc kubenswrapper[5039]: I0130 13:24:57.961896 5039 generic.go:334] "Generic (PLEG): container finished" podID="1c26816b-0634-4cb2-9356-3affc33c0698" containerID="50c2ec4e9a81ee2cd56dca014a68592f8d98266039e5400268b512200046f9a3" exitCode=0
Jan 30 13:24:57 crc kubenswrapper[5039]: I0130 13:24:57.961976 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-c2z79" event={"ID":"1c26816b-0634-4cb2-9356-3affc33c0698","Type":"ContainerDied","Data":"50c2ec4e9a81ee2cd56dca014a68592f8d98266039e5400268b512200046f9a3"}
Jan 30 13:24:57 crc kubenswrapper[5039]: I0130 13:24:57.962224 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-68f47564b6-tbx7d"
Jan 30 13:24:59 crc kubenswrapper[5039]: I0130 13:24:59.719644 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-c2z79"
Jan 30 13:24:59 crc kubenswrapper[5039]: I0130 13:24:59.801084 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c26816b-0634-4cb2-9356-3affc33c0698-combined-ca-bundle\") pod \"1c26816b-0634-4cb2-9356-3affc33c0698\" (UID: \"1c26816b-0634-4cb2-9356-3affc33c0698\") "
Jan 30 13:24:59 crc kubenswrapper[5039]: I0130 13:24:59.801179 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1c26816b-0634-4cb2-9356-3affc33c0698-db-sync-config-data\") pod \"1c26816b-0634-4cb2-9356-3affc33c0698\" (UID: \"1c26816b-0634-4cb2-9356-3affc33c0698\") "
Jan 30 13:24:59 crc kubenswrapper[5039]: I0130 13:24:59.801286 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mrkt\" (UniqueName: \"kubernetes.io/projected/1c26816b-0634-4cb2-9356-3affc33c0698-kube-api-access-6mrkt\") pod \"1c26816b-0634-4cb2-9356-3affc33c0698\" (UID: \"1c26816b-0634-4cb2-9356-3affc33c0698\") "
Jan 30 13:24:59 crc kubenswrapper[5039]: I0130 13:24:59.809946 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c26816b-0634-4cb2-9356-3affc33c0698-kube-api-access-6mrkt" (OuterVolumeSpecName: "kube-api-access-6mrkt") pod "1c26816b-0634-4cb2-9356-3affc33c0698" (UID: "1c26816b-0634-4cb2-9356-3affc33c0698"). InnerVolumeSpecName "kube-api-access-6mrkt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:24:59 crc kubenswrapper[5039]: I0130 13:24:59.812685 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c26816b-0634-4cb2-9356-3affc33c0698-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "1c26816b-0634-4cb2-9356-3affc33c0698" (UID: "1c26816b-0634-4cb2-9356-3affc33c0698"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:24:59 crc kubenswrapper[5039]: I0130 13:24:59.838576 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c26816b-0634-4cb2-9356-3affc33c0698-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1c26816b-0634-4cb2-9356-3affc33c0698" (UID: "1c26816b-0634-4cb2-9356-3affc33c0698"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:24:59 crc kubenswrapper[5039]: I0130 13:24:59.902815 5039 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1c26816b-0634-4cb2-9356-3affc33c0698-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 13:24:59 crc kubenswrapper[5039]: I0130 13:24:59.902848 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mrkt\" (UniqueName: \"kubernetes.io/projected/1c26816b-0634-4cb2-9356-3affc33c0698-kube-api-access-6mrkt\") on node \"crc\" DevicePath \"\""
Jan 30 13:24:59 crc kubenswrapper[5039]: I0130 13:24:59.902857 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c26816b-0634-4cb2-9356-3affc33c0698-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 13:24:59 crc kubenswrapper[5039]: I0130 13:24:59.995152 5039 generic.go:334] "Generic (PLEG): container finished" podID="5bba3dea-64f4-479f-b7f1-99c718d7b8af" containerID="e53bb2617673a6a127068d954f3431e0eac803d59302afc36e75b077f55f4629" exitCode=0
Jan 30 13:24:59 crc kubenswrapper[5039]: I0130 13:24:59.995230 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-q8gx7" event={"ID":"5bba3dea-64f4-479f-b7f1-99c718d7b8af","Type":"ContainerDied","Data":"e53bb2617673a6a127068d954f3431e0eac803d59302afc36e75b077f55f4629"}
Jan 30 13:24:59 crc kubenswrapper[5039]: I0130 13:24:59.997725 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-c2z79" event={"ID":"1c26816b-0634-4cb2-9356-3affc33c0698","Type":"ContainerDied","Data":"e89a8eceb4dc62017ca42fad895e0ffde5af5cc2f1cea5fddf9565b078402532"}
Jan 30 13:24:59 crc kubenswrapper[5039]: I0130 13:24:59.997754 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e89a8eceb4dc62017ca42fad895e0ffde5af5cc2f1cea5fddf9565b078402532"
Jan 30 13:24:59 crc kubenswrapper[5039]: I0130 13:24:59.998445 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-c2z79"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.263070 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-7df987bf59-vgqrf"]
Jan 30 13:25:00 crc kubenswrapper[5039]: E0130 13:25:00.263507 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c26816b-0634-4cb2-9356-3affc33c0698" containerName="barbican-db-sync"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.263526 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c26816b-0634-4cb2-9356-3affc33c0698" containerName="barbican-db-sync"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.263732 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c26816b-0634-4cb2-9356-3affc33c0698" containerName="barbican-db-sync"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.264741 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7df987bf59-vgqrf"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.267244 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.278407 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7df987bf59-vgqrf"]
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.278679 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.279251 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-9npv4"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.304847 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-58897c98f4-8gk2m"]
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.306198 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-58897c98f4-8gk2m"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.308295 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.308699 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48be0b7f-4cb1-4c00-851a-7078ed9ccab0-config-data\") pod \"barbican-worker-7df987bf59-vgqrf\" (UID: \"48be0b7f-4cb1-4c00-851a-7078ed9ccab0\") " pod="openstack/barbican-worker-7df987bf59-vgqrf"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.308723 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48be0b7f-4cb1-4c00-851a-7078ed9ccab0-logs\") pod \"barbican-worker-7df987bf59-vgqrf\" (UID: \"48be0b7f-4cb1-4c00-851a-7078ed9ccab0\") " pod="openstack/barbican-worker-7df987bf59-vgqrf"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.308759 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/48be0b7f-4cb1-4c00-851a-7078ed9ccab0-config-data-custom\") pod \"barbican-worker-7df987bf59-vgqrf\" (UID: \"48be0b7f-4cb1-4c00-851a-7078ed9ccab0\") " pod="openstack/barbican-worker-7df987bf59-vgqrf"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.308813 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48be0b7f-4cb1-4c00-851a-7078ed9ccab0-combined-ca-bundle\") pod \"barbican-worker-7df987bf59-vgqrf\" (UID: \"48be0b7f-4cb1-4c00-851a-7078ed9ccab0\") " pod="openstack/barbican-worker-7df987bf59-vgqrf"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.308838 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42b5x\" (UniqueName: \"kubernetes.io/projected/48be0b7f-4cb1-4c00-851a-7078ed9ccab0-kube-api-access-42b5x\") pod \"barbican-worker-7df987bf59-vgqrf\" (UID: \"48be0b7f-4cb1-4c00-851a-7078ed9ccab0\") " pod="openstack/barbican-worker-7df987bf59-vgqrf"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.347345 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-58897c98f4-8gk2m"]
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.376004 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-ckw2b"]
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.378202 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-ckw2b"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.389830 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-ckw2b"]
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.409991 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42b5x\" (UniqueName: \"kubernetes.io/projected/48be0b7f-4cb1-4c00-851a-7078ed9ccab0-kube-api-access-42b5x\") pod \"barbican-worker-7df987bf59-vgqrf\" (UID: \"48be0b7f-4cb1-4c00-851a-7078ed9ccab0\") " pod="openstack/barbican-worker-7df987bf59-vgqrf"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.410061 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-ovsdbserver-sb\") pod \"dnsmasq-dns-7c67bffd47-ckw2b\" (UID: \"d1eb67cc-f1f4-4a29-94ce-ec7e196074a6\") " pod="openstack/dnsmasq-dns-7c67bffd47-ckw2b"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.410085 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-dns-swift-storage-0\") pod \"dnsmasq-dns-7c67bffd47-ckw2b\" (UID: \"d1eb67cc-f1f4-4a29-94ce-ec7e196074a6\") " pod="openstack/dnsmasq-dns-7c67bffd47-ckw2b"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.410104 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqrc7\" (UniqueName: \"kubernetes.io/projected/2081f65c-c5b5-4486-bdb3-49acf4f9ae46-kube-api-access-cqrc7\") pod \"barbican-keystone-listener-58897c98f4-8gk2m\" (UID: \"2081f65c-c5b5-4486-bdb3-49acf4f9ae46\") " pod="openstack/barbican-keystone-listener-58897c98f4-8gk2m"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.410125 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2081f65c-c5b5-4486-bdb3-49acf4f9ae46-config-data\") pod \"barbican-keystone-listener-58897c98f4-8gk2m\" (UID: \"2081f65c-c5b5-4486-bdb3-49acf4f9ae46\") " pod="openstack/barbican-keystone-listener-58897c98f4-8gk2m"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.410202 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2081f65c-c5b5-4486-bdb3-49acf4f9ae46-logs\") pod \"barbican-keystone-listener-58897c98f4-8gk2m\" (UID: \"2081f65c-c5b5-4486-bdb3-49acf4f9ae46\") " pod="openstack/barbican-keystone-listener-58897c98f4-8gk2m"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.410288 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48be0b7f-4cb1-4c00-851a-7078ed9ccab0-config-data\") pod \"barbican-worker-7df987bf59-vgqrf\" (UID: \"48be0b7f-4cb1-4c00-851a-7078ed9ccab0\") " pod="openstack/barbican-worker-7df987bf59-vgqrf"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.410306 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48be0b7f-4cb1-4c00-851a-7078ed9ccab0-logs\") pod \"barbican-worker-7df987bf59-vgqrf\" (UID: \"48be0b7f-4cb1-4c00-851a-7078ed9ccab0\") " pod="openstack/barbican-worker-7df987bf59-vgqrf"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.410326 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-ovsdbserver-nb\") pod \"dnsmasq-dns-7c67bffd47-ckw2b\" (UID: \"d1eb67cc-f1f4-4a29-94ce-ec7e196074a6\") " pod="openstack/dnsmasq-dns-7c67bffd47-ckw2b"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.410342 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gwlc\" (UniqueName: \"kubernetes.io/projected/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-kube-api-access-4gwlc\") pod \"dnsmasq-dns-7c67bffd47-ckw2b\" (UID: \"d1eb67cc-f1f4-4a29-94ce-ec7e196074a6\") " pod="openstack/dnsmasq-dns-7c67bffd47-ckw2b"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.410363 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-config\") pod \"dnsmasq-dns-7c67bffd47-ckw2b\" (UID: \"d1eb67cc-f1f4-4a29-94ce-ec7e196074a6\") " pod="openstack/dnsmasq-dns-7c67bffd47-ckw2b"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.410385 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/48be0b7f-4cb1-4c00-851a-7078ed9ccab0-config-data-custom\") pod \"barbican-worker-7df987bf59-vgqrf\" (UID: \"48be0b7f-4cb1-4c00-851a-7078ed9ccab0\") " pod="openstack/barbican-worker-7df987bf59-vgqrf"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.410401 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2081f65c-c5b5-4486-bdb3-49acf4f9ae46-config-data-custom\") pod \"barbican-keystone-listener-58897c98f4-8gk2m\" (UID: \"2081f65c-c5b5-4486-bdb3-49acf4f9ae46\") " pod="openstack/barbican-keystone-listener-58897c98f4-8gk2m"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.410418 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-dns-svc\") pod \"dnsmasq-dns-7c67bffd47-ckw2b\" (UID: \"d1eb67cc-f1f4-4a29-94ce-ec7e196074a6\") " pod="openstack/dnsmasq-dns-7c67bffd47-ckw2b"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.410433 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2081f65c-c5b5-4486-bdb3-49acf4f9ae46-combined-ca-bundle\") pod \"barbican-keystone-listener-58897c98f4-8gk2m\" (UID: \"2081f65c-c5b5-4486-bdb3-49acf4f9ae46\") " pod="openstack/barbican-keystone-listener-58897c98f4-8gk2m"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.410452 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48be0b7f-4cb1-4c00-851a-7078ed9ccab0-combined-ca-bundle\") pod \"barbican-worker-7df987bf59-vgqrf\" (UID: \"48be0b7f-4cb1-4c00-851a-7078ed9ccab0\") " pod="openstack/barbican-worker-7df987bf59-vgqrf"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.417407 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48be0b7f-4cb1-4c00-851a-7078ed9ccab0-logs\") pod \"barbican-worker-7df987bf59-vgqrf\" (UID: \"48be0b7f-4cb1-4c00-851a-7078ed9ccab0\") " pod="openstack/barbican-worker-7df987bf59-vgqrf"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.421048 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48be0b7f-4cb1-4c00-851a-7078ed9ccab0-combined-ca-bundle\") pod \"barbican-worker-7df987bf59-vgqrf\" (UID: \"48be0b7f-4cb1-4c00-851a-7078ed9ccab0\") " pod="openstack/barbican-worker-7df987bf59-vgqrf"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.421588 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/48be0b7f-4cb1-4c00-851a-7078ed9ccab0-config-data-custom\") pod \"barbican-worker-7df987bf59-vgqrf\" (UID: \"48be0b7f-4cb1-4c00-851a-7078ed9ccab0\") " pod="openstack/barbican-worker-7df987bf59-vgqrf"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.423975 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48be0b7f-4cb1-4c00-851a-7078ed9ccab0-config-data\") pod \"barbican-worker-7df987bf59-vgqrf\" (UID: \"48be0b7f-4cb1-4c00-851a-7078ed9ccab0\") " pod="openstack/barbican-worker-7df987bf59-vgqrf"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.450688 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42b5x\" (UniqueName: \"kubernetes.io/projected/48be0b7f-4cb1-4c00-851a-7078ed9ccab0-kube-api-access-42b5x\") pod \"barbican-worker-7df987bf59-vgqrf\" (UID: \"48be0b7f-4cb1-4c00-851a-7078ed9ccab0\") " pod="openstack/barbican-worker-7df987bf59-vgqrf"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.513288 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2081f65c-c5b5-4486-bdb3-49acf4f9ae46-logs\") pod \"barbican-keystone-listener-58897c98f4-8gk2m\" (UID: \"2081f65c-c5b5-4486-bdb3-49acf4f9ae46\") " pod="openstack/barbican-keystone-listener-58897c98f4-8gk2m"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.513385 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-ovsdbserver-nb\") pod \"dnsmasq-dns-7c67bffd47-ckw2b\" (UID: \"d1eb67cc-f1f4-4a29-94ce-ec7e196074a6\") " pod="openstack/dnsmasq-dns-7c67bffd47-ckw2b"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.513416 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gwlc\" (UniqueName: \"kubernetes.io/projected/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-kube-api-access-4gwlc\") pod \"dnsmasq-dns-7c67bffd47-ckw2b\" (UID: \"d1eb67cc-f1f4-4a29-94ce-ec7e196074a6\") " pod="openstack/dnsmasq-dns-7c67bffd47-ckw2b"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.513451 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-config\") pod \"dnsmasq-dns-7c67bffd47-ckw2b\" (UID: \"d1eb67cc-f1f4-4a29-94ce-ec7e196074a6\") " pod="openstack/dnsmasq-dns-7c67bffd47-ckw2b"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.513486 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2081f65c-c5b5-4486-bdb3-49acf4f9ae46-config-data-custom\") pod \"barbican-keystone-listener-58897c98f4-8gk2m\" (UID: \"2081f65c-c5b5-4486-bdb3-49acf4f9ae46\") " pod="openstack/barbican-keystone-listener-58897c98f4-8gk2m"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.513512 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-dns-svc\") pod \"dnsmasq-dns-7c67bffd47-ckw2b\" (UID: \"d1eb67cc-f1f4-4a29-94ce-ec7e196074a6\") " pod="openstack/dnsmasq-dns-7c67bffd47-ckw2b"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.513562 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2081f65c-c5b5-4486-bdb3-49acf4f9ae46-combined-ca-bundle\") pod \"barbican-keystone-listener-58897c98f4-8gk2m\" (UID: \"2081f65c-c5b5-4486-bdb3-49acf4f9ae46\") " pod="openstack/barbican-keystone-listener-58897c98f4-8gk2m"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.513970 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-ovsdbserver-sb\") pod \"dnsmasq-dns-7c67bffd47-ckw2b\" (UID: \"d1eb67cc-f1f4-4a29-94ce-ec7e196074a6\") " pod="openstack/dnsmasq-dns-7c67bffd47-ckw2b"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.514038 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-dns-swift-storage-0\") pod \"dnsmasq-dns-7c67bffd47-ckw2b\" (UID: \"d1eb67cc-f1f4-4a29-94ce-ec7e196074a6\") " pod="openstack/dnsmasq-dns-7c67bffd47-ckw2b"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.514071 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqrc7\" (UniqueName: \"kubernetes.io/projected/2081f65c-c5b5-4486-bdb3-49acf4f9ae46-kube-api-access-cqrc7\") pod \"barbican-keystone-listener-58897c98f4-8gk2m\" (UID: \"2081f65c-c5b5-4486-bdb3-49acf4f9ae46\") " pod="openstack/barbican-keystone-listener-58897c98f4-8gk2m"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.514077 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2081f65c-c5b5-4486-bdb3-49acf4f9ae46-logs\") pod \"barbican-keystone-listener-58897c98f4-8gk2m\" (UID: \"2081f65c-c5b5-4486-bdb3-49acf4f9ae46\") " pod="openstack/barbican-keystone-listener-58897c98f4-8gk2m"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.514100 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2081f65c-c5b5-4486-bdb3-49acf4f9ae46-config-data\") pod \"barbican-keystone-listener-58897c98f4-8gk2m\" (UID: \"2081f65c-c5b5-4486-bdb3-49acf4f9ae46\") " pod="openstack/barbican-keystone-listener-58897c98f4-8gk2m"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.515809 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-config\") pod \"dnsmasq-dns-7c67bffd47-ckw2b\" (UID: \"d1eb67cc-f1f4-4a29-94ce-ec7e196074a6\") " pod="openstack/dnsmasq-dns-7c67bffd47-ckw2b"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.516612 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-ovsdbserver-nb\") pod \"dnsmasq-dns-7c67bffd47-ckw2b\" (UID: \"d1eb67cc-f1f4-4a29-94ce-ec7e196074a6\") " pod="openstack/dnsmasq-dns-7c67bffd47-ckw2b"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.517229 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-ovsdbserver-sb\") pod \"dnsmasq-dns-7c67bffd47-ckw2b\" (UID: \"d1eb67cc-f1f4-4a29-94ce-ec7e196074a6\") " pod="openstack/dnsmasq-dns-7c67bffd47-ckw2b"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.517245 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-dns-swift-storage-0\") pod \"dnsmasq-dns-7c67bffd47-ckw2b\" (UID: \"d1eb67cc-f1f4-4a29-94ce-ec7e196074a6\") " pod="openstack/dnsmasq-dns-7c67bffd47-ckw2b"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.518049 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-dns-svc\") pod \"dnsmasq-dns-7c67bffd47-ckw2b\" (UID: \"d1eb67cc-f1f4-4a29-94ce-ec7e196074a6\") " pod="openstack/dnsmasq-dns-7c67bffd47-ckw2b"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.526602 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2081f65c-c5b5-4486-bdb3-49acf4f9ae46-config-data\") pod \"barbican-keystone-listener-58897c98f4-8gk2m\" (UID: \"2081f65c-c5b5-4486-bdb3-49acf4f9ae46\") " pod="openstack/barbican-keystone-listener-58897c98f4-8gk2m"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.528699 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2081f65c-c5b5-4486-bdb3-49acf4f9ae46-combined-ca-bundle\") pod \"barbican-keystone-listener-58897c98f4-8gk2m\" (UID: \"2081f65c-c5b5-4486-bdb3-49acf4f9ae46\") " pod="openstack/barbican-keystone-listener-58897c98f4-8gk2m"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.535198 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2081f65c-c5b5-4486-bdb3-49acf4f9ae46-config-data-custom\") pod \"barbican-keystone-listener-58897c98f4-8gk2m\" (UID: \"2081f65c-c5b5-4486-bdb3-49acf4f9ae46\") " pod="openstack/barbican-keystone-listener-58897c98f4-8gk2m"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.541124 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqrc7\" (UniqueName: \"kubernetes.io/projected/2081f65c-c5b5-4486-bdb3-49acf4f9ae46-kube-api-access-cqrc7\") pod \"barbican-keystone-listener-58897c98f4-8gk2m\" (UID: \"2081f65c-c5b5-4486-bdb3-49acf4f9ae46\") " pod="openstack/barbican-keystone-listener-58897c98f4-8gk2m"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.551143 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gwlc\" (UniqueName: \"kubernetes.io/projected/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-kube-api-access-4gwlc\") pod \"dnsmasq-dns-7c67bffd47-ckw2b\" (UID: \"d1eb67cc-f1f4-4a29-94ce-ec7e196074a6\") " pod="openstack/dnsmasq-dns-7c67bffd47-ckw2b"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.565072 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-554596898b-g5nlm"]
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.570565 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-554596898b-g5nlm"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.574712 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.582614 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-554596898b-g5nlm"]
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.601485 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7df987bf59-vgqrf"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.616756 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dddd2ab-85b5-4431-a111-dbb5ebff91d9-config-data\") pod \"barbican-api-554596898b-g5nlm\" (UID: \"7dddd2ab-85b5-4431-a111-dbb5ebff91d9\") " pod="openstack/barbican-api-554596898b-g5nlm"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.616820 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dddd2ab-85b5-4431-a111-dbb5ebff91d9-combined-ca-bundle\") pod \"barbican-api-554596898b-g5nlm\" (UID: \"7dddd2ab-85b5-4431-a111-dbb5ebff91d9\") " pod="openstack/barbican-api-554596898b-g5nlm"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.616847 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7dddd2ab-85b5-4431-a111-dbb5ebff91d9-config-data-custom\") pod \"barbican-api-554596898b-g5nlm\" (UID: \"7dddd2ab-85b5-4431-a111-dbb5ebff91d9\") " pod="openstack/barbican-api-554596898b-g5nlm"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.616961 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxf85\" (UniqueName: \"kubernetes.io/projected/7dddd2ab-85b5-4431-a111-dbb5ebff91d9-kube-api-access-lxf85\") pod \"barbican-api-554596898b-g5nlm\" (UID: \"7dddd2ab-85b5-4431-a111-dbb5ebff91d9\") " pod="openstack/barbican-api-554596898b-g5nlm"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.616981 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7dddd2ab-85b5-4431-a111-dbb5ebff91d9-logs\") pod \"barbican-api-554596898b-g5nlm\" (UID: \"7dddd2ab-85b5-4431-a111-dbb5ebff91d9\") " pod="openstack/barbican-api-554596898b-g5nlm"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.624377 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-58897c98f4-8gk2m"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.694953 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-ckw2b"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.718456 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxf85\" (UniqueName: \"kubernetes.io/projected/7dddd2ab-85b5-4431-a111-dbb5ebff91d9-kube-api-access-lxf85\") pod \"barbican-api-554596898b-g5nlm\" (UID: \"7dddd2ab-85b5-4431-a111-dbb5ebff91d9\") " pod="openstack/barbican-api-554596898b-g5nlm"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.718498 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7dddd2ab-85b5-4431-a111-dbb5ebff91d9-logs\") pod \"barbican-api-554596898b-g5nlm\" (UID: \"7dddd2ab-85b5-4431-a111-dbb5ebff91d9\") " pod="openstack/barbican-api-554596898b-g5nlm"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.718585 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dddd2ab-85b5-4431-a111-dbb5ebff91d9-config-data\") pod \"barbican-api-554596898b-g5nlm\" (UID: \"7dddd2ab-85b5-4431-a111-dbb5ebff91d9\") " pod="openstack/barbican-api-554596898b-g5nlm"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.718611 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dddd2ab-85b5-4431-a111-dbb5ebff91d9-combined-ca-bundle\") pod \"barbican-api-554596898b-g5nlm\" (UID: \"7dddd2ab-85b5-4431-a111-dbb5ebff91d9\") " pod="openstack/barbican-api-554596898b-g5nlm"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.718632 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7dddd2ab-85b5-4431-a111-dbb5ebff91d9-config-data-custom\") pod \"barbican-api-554596898b-g5nlm\" (UID: \"7dddd2ab-85b5-4431-a111-dbb5ebff91d9\") " pod="openstack/barbican-api-554596898b-g5nlm"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.718908 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7dddd2ab-85b5-4431-a111-dbb5ebff91d9-logs\") pod \"barbican-api-554596898b-g5nlm\" (UID: \"7dddd2ab-85b5-4431-a111-dbb5ebff91d9\") " pod="openstack/barbican-api-554596898b-g5nlm"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.721974 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7dddd2ab-85b5-4431-a111-dbb5ebff91d9-config-data-custom\") pod \"barbican-api-554596898b-g5nlm\" (UID: \"7dddd2ab-85b5-4431-a111-dbb5ebff91d9\") " pod="openstack/barbican-api-554596898b-g5nlm"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.722053 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dddd2ab-85b5-4431-a111-dbb5ebff91d9-config-data\") pod \"barbican-api-554596898b-g5nlm\" (UID: \"7dddd2ab-85b5-4431-a111-dbb5ebff91d9\") " pod="openstack/barbican-api-554596898b-g5nlm"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.737606 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dddd2ab-85b5-4431-a111-dbb5ebff91d9-combined-ca-bundle\") pod \"barbican-api-554596898b-g5nlm\" (UID: \"7dddd2ab-85b5-4431-a111-dbb5ebff91d9\") " pod="openstack/barbican-api-554596898b-g5nlm"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.737922 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxf85\" (UniqueName: \"kubernetes.io/projected/7dddd2ab-85b5-4431-a111-dbb5ebff91d9-kube-api-access-lxf85\") pod \"barbican-api-554596898b-g5nlm\" (UID: \"7dddd2ab-85b5-4431-a111-dbb5ebff91d9\") " pod="openstack/barbican-api-554596898b-g5nlm"
Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.908916 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-554596898b-g5nlm"
Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.218710 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-d68bccdc4-krd48"]
Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.221058 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-d68bccdc4-krd48"
Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.223506 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc"
Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.223986 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc"
Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.228696 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-d68bccdc4-krd48"]
Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.280952 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2125aae4-cb1a-4329-ba0a-68cc3661427b-config-data\") pod \"barbican-api-d68bccdc4-krd48\" (UID: \"2125aae4-cb1a-4329-ba0a-68cc3661427b\") " pod="openstack/barbican-api-d68bccdc4-krd48"
Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.280996 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2125aae4-cb1a-4329-ba0a-68cc3661427b-combined-ca-bundle\") pod \"barbican-api-d68bccdc4-krd48\" (UID: \"2125aae4-cb1a-4329-ba0a-68cc3661427b\") " pod="openstack/barbican-api-d68bccdc4-krd48"
Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.281367 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2125aae4-cb1a-4329-ba0a-68cc3661427b-public-tls-certs\") pod \"barbican-api-d68bccdc4-krd48\" (UID: \"2125aae4-cb1a-4329-ba0a-68cc3661427b\") " pod="openstack/barbican-api-d68bccdc4-krd48"
Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.281491 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2125aae4-cb1a-4329-ba0a-68cc3661427b-config-data-custom\") pod \"barbican-api-d68bccdc4-krd48\" (UID: \"2125aae4-cb1a-4329-ba0a-68cc3661427b\") " pod="openstack/barbican-api-d68bccdc4-krd48"
Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.281575 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nznrt\" (UniqueName: \"kubernetes.io/projected/2125aae4-cb1a-4329-ba0a-68cc3661427b-kube-api-access-nznrt\") pod \"barbican-api-d68bccdc4-krd48\" (UID: \"2125aae4-cb1a-4329-ba0a-68cc3661427b\") " pod="openstack/barbican-api-d68bccdc4-krd48"
Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.281615 5039
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2125aae4-cb1a-4329-ba0a-68cc3661427b-logs\") pod \"barbican-api-d68bccdc4-krd48\" (UID: \"2125aae4-cb1a-4329-ba0a-68cc3661427b\") " pod="openstack/barbican-api-d68bccdc4-krd48" Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.281652 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2125aae4-cb1a-4329-ba0a-68cc3661427b-internal-tls-certs\") pod \"barbican-api-d68bccdc4-krd48\" (UID: \"2125aae4-cb1a-4329-ba0a-68cc3661427b\") " pod="openstack/barbican-api-d68bccdc4-krd48" Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.382815 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2125aae4-cb1a-4329-ba0a-68cc3661427b-public-tls-certs\") pod \"barbican-api-d68bccdc4-krd48\" (UID: \"2125aae4-cb1a-4329-ba0a-68cc3661427b\") " pod="openstack/barbican-api-d68bccdc4-krd48" Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.382888 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2125aae4-cb1a-4329-ba0a-68cc3661427b-config-data-custom\") pod \"barbican-api-d68bccdc4-krd48\" (UID: \"2125aae4-cb1a-4329-ba0a-68cc3661427b\") " pod="openstack/barbican-api-d68bccdc4-krd48" Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.382936 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nznrt\" (UniqueName: \"kubernetes.io/projected/2125aae4-cb1a-4329-ba0a-68cc3661427b-kube-api-access-nznrt\") pod \"barbican-api-d68bccdc4-krd48\" (UID: \"2125aae4-cb1a-4329-ba0a-68cc3661427b\") " pod="openstack/barbican-api-d68bccdc4-krd48" Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.382962 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2125aae4-cb1a-4329-ba0a-68cc3661427b-logs\") pod \"barbican-api-d68bccdc4-krd48\" (UID: \"2125aae4-cb1a-4329-ba0a-68cc3661427b\") " pod="openstack/barbican-api-d68bccdc4-krd48" Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.382996 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2125aae4-cb1a-4329-ba0a-68cc3661427b-internal-tls-certs\") pod \"barbican-api-d68bccdc4-krd48\" (UID: \"2125aae4-cb1a-4329-ba0a-68cc3661427b\") " pod="openstack/barbican-api-d68bccdc4-krd48" Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.383061 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2125aae4-cb1a-4329-ba0a-68cc3661427b-config-data\") pod \"barbican-api-d68bccdc4-krd48\" (UID: \"2125aae4-cb1a-4329-ba0a-68cc3661427b\") " pod="openstack/barbican-api-d68bccdc4-krd48" Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.383084 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2125aae4-cb1a-4329-ba0a-68cc3661427b-combined-ca-bundle\") pod \"barbican-api-d68bccdc4-krd48\" (UID: \"2125aae4-cb1a-4329-ba0a-68cc3661427b\") " pod="openstack/barbican-api-d68bccdc4-krd48" Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.383675 5039 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2125aae4-cb1a-4329-ba0a-68cc3661427b-logs\") pod \"barbican-api-d68bccdc4-krd48\" (UID: \"2125aae4-cb1a-4329-ba0a-68cc3661427b\") " pod="openstack/barbican-api-d68bccdc4-krd48" Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.389753 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2125aae4-cb1a-4329-ba0a-68cc3661427b-public-tls-certs\") pod \"barbican-api-d68bccdc4-krd48\" (UID: \"2125aae4-cb1a-4329-ba0a-68cc3661427b\") " pod="openstack/barbican-api-d68bccdc4-krd48" Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.390699 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2125aae4-cb1a-4329-ba0a-68cc3661427b-combined-ca-bundle\") pod \"barbican-api-d68bccdc4-krd48\" (UID: \"2125aae4-cb1a-4329-ba0a-68cc3661427b\") " pod="openstack/barbican-api-d68bccdc4-krd48" Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.391898 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2125aae4-cb1a-4329-ba0a-68cc3661427b-internal-tls-certs\") pod \"barbican-api-d68bccdc4-krd48\" (UID: \"2125aae4-cb1a-4329-ba0a-68cc3661427b\") " pod="openstack/barbican-api-d68bccdc4-krd48" Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.391993 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2125aae4-cb1a-4329-ba0a-68cc3661427b-config-data\") pod \"barbican-api-d68bccdc4-krd48\" (UID: \"2125aae4-cb1a-4329-ba0a-68cc3661427b\") " pod="openstack/barbican-api-d68bccdc4-krd48" Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.392273 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2125aae4-cb1a-4329-ba0a-68cc3661427b-config-data-custom\") pod \"barbican-api-d68bccdc4-krd48\" (UID: \"2125aae4-cb1a-4329-ba0a-68cc3661427b\") " pod="openstack/barbican-api-d68bccdc4-krd48" Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.412720 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nznrt\" (UniqueName: \"kubernetes.io/projected/2125aae4-cb1a-4329-ba0a-68cc3661427b-kube-api-access-nznrt\") pod \"barbican-api-d68bccdc4-krd48\" (UID: \"2125aae4-cb1a-4329-ba0a-68cc3661427b\") " pod="openstack/barbican-api-d68bccdc4-krd48" Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.555048 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-d68bccdc4-krd48" Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.648856 5039 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.648856 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-q8gx7"
Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.699245 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5bba3dea-64f4-479f-b7f1-99c718d7b8af-db-sync-config-data\") pod \"5bba3dea-64f4-479f-b7f1-99c718d7b8af\" (UID: \"5bba3dea-64f4-479f-b7f1-99c718d7b8af\") "
Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.699293 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bba3dea-64f4-479f-b7f1-99c718d7b8af-config-data\") pod \"5bba3dea-64f4-479f-b7f1-99c718d7b8af\" (UID: \"5bba3dea-64f4-479f-b7f1-99c718d7b8af\") "
Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.699332 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5bba3dea-64f4-479f-b7f1-99c718d7b8af-etc-machine-id\") pod \"5bba3dea-64f4-479f-b7f1-99c718d7b8af\" (UID: \"5bba3dea-64f4-479f-b7f1-99c718d7b8af\") "
Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.699368 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5bba3dea-64f4-479f-b7f1-99c718d7b8af-scripts\") pod \"5bba3dea-64f4-479f-b7f1-99c718d7b8af\" (UID: \"5bba3dea-64f4-479f-b7f1-99c718d7b8af\") "
Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.699427 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bba3dea-64f4-479f-b7f1-99c718d7b8af-combined-ca-bundle\") pod \"5bba3dea-64f4-479f-b7f1-99c718d7b8af\" (UID: \"5bba3dea-64f4-479f-b7f1-99c718d7b8af\") "
Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.699444 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqtmh\" (UniqueName: \"kubernetes.io/projected/5bba3dea-64f4-479f-b7f1-99c718d7b8af-kube-api-access-zqtmh\") pod \"5bba3dea-64f4-479f-b7f1-99c718d7b8af\" (UID: \"5bba3dea-64f4-479f-b7f1-99c718d7b8af\") "
Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.705526 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5bba3dea-64f4-479f-b7f1-99c718d7b8af-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "5bba3dea-64f4-479f-b7f1-99c718d7b8af" (UID: "5bba3dea-64f4-479f-b7f1-99c718d7b8af"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.708855 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bba3dea-64f4-479f-b7f1-99c718d7b8af-scripts" (OuterVolumeSpecName: "scripts") pod "5bba3dea-64f4-479f-b7f1-99c718d7b8af" (UID: "5bba3dea-64f4-479f-b7f1-99c718d7b8af"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.708912 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bba3dea-64f4-479f-b7f1-99c718d7b8af-kube-api-access-zqtmh" (OuterVolumeSpecName: "kube-api-access-zqtmh") pod "5bba3dea-64f4-479f-b7f1-99c718d7b8af" (UID: "5bba3dea-64f4-479f-b7f1-99c718d7b8af"). InnerVolumeSpecName "kube-api-access-zqtmh". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.711719 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bba3dea-64f4-479f-b7f1-99c718d7b8af-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "5bba3dea-64f4-479f-b7f1-99c718d7b8af" (UID: "5bba3dea-64f4-479f-b7f1-99c718d7b8af"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.736622 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bba3dea-64f4-479f-b7f1-99c718d7b8af-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5bba3dea-64f4-479f-b7f1-99c718d7b8af" (UID: "5bba3dea-64f4-479f-b7f1-99c718d7b8af"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.784174 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bba3dea-64f4-479f-b7f1-99c718d7b8af-config-data" (OuterVolumeSpecName: "config-data") pod "5bba3dea-64f4-479f-b7f1-99c718d7b8af" (UID: "5bba3dea-64f4-479f-b7f1-99c718d7b8af"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.801083 5039 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5bba3dea-64f4-479f-b7f1-99c718d7b8af-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.801120 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bba3dea-64f4-479f-b7f1-99c718d7b8af-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.801128 5039 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5bba3dea-64f4-479f-b7f1-99c718d7b8af-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.801137 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5bba3dea-64f4-479f-b7f1-99c718d7b8af-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.801145 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bba3dea-64f4-479f-b7f1-99c718d7b8af-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.801153 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqtmh\" (UniqueName: \"kubernetes.io/projected/5bba3dea-64f4-479f-b7f1-99c718d7b8af-kube-api-access-zqtmh\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:03 crc kubenswrapper[5039]: W0130 13:25:03.954324 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1eb67cc_f1f4_4a29_94ce_ec7e196074a6.slice/crio-fb387ce16180e58b0615ab1513956b368d0ad2d05fbc8c8708e9cbc7f8c6e124 WatchSource:0}: Error finding container fb387ce16180e58b0615ab1513956b368d0ad2d05fbc8c8708e9cbc7f8c6e124: Status 404 returned error can't find the container with id fb387ce16180e58b0615ab1513956b368d0ad2d05fbc8c8708e9cbc7f8c6e124 Jan 30 13:25:03 crc 
Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.957822 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-ckw2b"]
Jan 30 13:25:04 crc kubenswrapper[5039]: I0130 13:25:04.044497 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-58897c98f4-8gk2m"]
Jan 30 13:25:04 crc kubenswrapper[5039]: I0130 13:25:04.045615 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53390b3b-ff7d-4f71-8599-b1deebe3facf","Type":"ContainerStarted","Data":"de827f873ae9238cd409ff2b82b58617758301702a6a69759d9af5ee00eb8b94"}
Jan 30 13:25:04 crc kubenswrapper[5039]: I0130 13:25:04.045788 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="53390b3b-ff7d-4f71-8599-b1deebe3facf" containerName="ceilometer-central-agent" containerID="cri-o://12a01c6dc6a842b1829ed3854209adde60667039bf9946c69457cc43d120fa6c" gracePeriod=30
Jan 30 13:25:04 crc kubenswrapper[5039]: I0130 13:25:04.046089 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 30 13:25:04 crc kubenswrapper[5039]: I0130 13:25:04.046339 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="53390b3b-ff7d-4f71-8599-b1deebe3facf" containerName="proxy-httpd" containerID="cri-o://de827f873ae9238cd409ff2b82b58617758301702a6a69759d9af5ee00eb8b94" gracePeriod=30
Jan 30 13:25:04 crc kubenswrapper[5039]: I0130 13:25:04.046400 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="53390b3b-ff7d-4f71-8599-b1deebe3facf" containerName="sg-core" containerID="cri-o://ed850552779a01c9a61fd4652e4d461d1eeae6398abc889defbeefacc95f8283" gracePeriod=30
Jan 30 13:25:04 crc kubenswrapper[5039]: I0130 13:25:04.046437 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="53390b3b-ff7d-4f71-8599-b1deebe3facf" containerName="ceilometer-notification-agent" containerID="cri-o://6d4ad33b26e95108fb45b090ba7cbe025c76f54a84e9e566db7be7d95d4cdba9" gracePeriod=30
Jan 30 13:25:04 crc kubenswrapper[5039]: W0130 13:25:04.051167 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2081f65c_c5b5_4486_bdb3_49acf4f9ae46.slice/crio-a29f6ea9bd7977d8b70d64e9d426eab9ebe7d5ef4cfd719a9169adb5452882d1 WatchSource:0}: Error finding container a29f6ea9bd7977d8b70d64e9d426eab9ebe7d5ef4cfd719a9169adb5452882d1: Status 404 returned error can't find the container with id a29f6ea9bd7977d8b70d64e9d426eab9ebe7d5ef4cfd719a9169adb5452882d1
Jan 30 13:25:04 crc kubenswrapper[5039]: I0130 13:25:04.052997 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-q8gx7" event={"ID":"5bba3dea-64f4-479f-b7f1-99c718d7b8af","Type":"ContainerDied","Data":"ac10d0a92939cbf2112a5e9455510ab7f67e81a544866bcf77db87159b0d7f83"}
Jan 30 13:25:04 crc kubenswrapper[5039]: I0130 13:25:04.053054 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac10d0a92939cbf2112a5e9455510ab7f67e81a544866bcf77db87159b0d7f83"
Jan 30 13:25:04 crc kubenswrapper[5039]: I0130 13:25:04.053114 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-q8gx7"
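
The four "Killing container with a grace period" entries above stop ceilometer-0 container by container with gracePeriod=30: the runtime signals each container to terminate and escalates to a forced kill only if it outlives the deadline (the matching ContainerDied events, with exit codes 0 and 2, arrive about a second later). A toy illustration of that semantics, not the CRI client:

    package main

    import (
    	"context"
    	"fmt"
    	"time"
    )

    // killContainer mimics the grace-period semantics: wait for the container
    // to exit on its own until the shared deadline, then force-kill.
    func killContainer(ctx context.Context, name, id string) {
    	fmt.Printf("Killing container %s (%s) with a grace period\n", name, id)
    	select {
    	case <-time.After(10 * time.Millisecond): // stand-in for the process exiting
    		fmt.Printf("%s exited within the grace period\n", name)
    	case <-ctx.Done():
    		fmt.Printf("%s force-killed after the deadline\n", name)
    	}
    }

    func main() {
    	gracePeriod := 30 * time.Second // gracePeriod=30 in the log
    	ctx, cancel := context.WithTimeout(context.Background(), gracePeriod)
    	defer cancel()
    	killContainer(ctx, "ceilometer-central-agent",
    		"cri-o://12a01c6dc6a842b1829ed3854209adde60667039bf9946c69457cc43d120fa6c")
    	killContainer(ctx, "sg-core",
    		"cri-o://ed850552779a01c9a61fd4652e4d461d1eeae6398abc889defbeefacc95f8283")
    }
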
Jan 30 13:25:04 crc kubenswrapper[5039]: I0130 13:25:04.060721 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-ckw2b" event={"ID":"d1eb67cc-f1f4-4a29-94ce-ec7e196074a6","Type":"ContainerStarted","Data":"fb387ce16180e58b0615ab1513956b368d0ad2d05fbc8c8708e9cbc7f8c6e124"}
Jan 30 13:25:04 crc kubenswrapper[5039]: I0130 13:25:04.098037 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.283504756 podStartE2EDuration="54.09800217s" podCreationTimestamp="2026-01-30 13:24:10 +0000 UTC" firstStartedPulling="2026-01-30 13:24:11.932838378 +0000 UTC m=+1216.593519605" lastFinishedPulling="2026-01-30 13:25:03.747335792 +0000 UTC m=+1268.408017019" observedRunningTime="2026-01-30 13:25:04.065718456 +0000 UTC m=+1268.726399703" watchObservedRunningTime="2026-01-30 13:25:04.09800217 +0000 UTC m=+1268.758683397"
Jan 30 13:25:04 crc kubenswrapper[5039]: I0130 13:25:04.164121 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-554596898b-g5nlm"]
Jan 30 13:25:04 crc kubenswrapper[5039]: W0130 13:25:04.164657 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48be0b7f_4cb1_4c00_851a_7078ed9ccab0.slice/crio-9ac08f4c6f7c3c5ee88f8d788b5d888e94f9e00b0aa4576cecd9745edd924e1b WatchSource:0}: Error finding container 9ac08f4c6f7c3c5ee88f8d788b5d888e94f9e00b0aa4576cecd9745edd924e1b: Status 404 returned error can't find the container with id 9ac08f4c6f7c3c5ee88f8d788b5d888e94f9e00b0aa4576cecd9745edd924e1b
Jan 30 13:25:04 crc kubenswrapper[5039]: W0130 13:25:04.165542 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7dddd2ab_85b5_4431_a111_dbb5ebff91d9.slice/crio-74813a49ecb4fa38f422fbb99baf7d3b3305ab3829ed82acf91a86c0d3c6241c WatchSource:0}: Error finding container 74813a49ecb4fa38f422fbb99baf7d3b3305ab3829ed82acf91a86c0d3c6241c: Status 404 returned error can't find the container with id 74813a49ecb4fa38f422fbb99baf7d3b3305ab3829ed82acf91a86c0d3c6241c
Jan 30 13:25:04 crc kubenswrapper[5039]: I0130 13:25:04.174956 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7df987bf59-vgqrf"]
Jan 30 13:25:04 crc kubenswrapper[5039]: W0130 13:25:04.285618 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2125aae4_cb1a_4329_ba0a_68cc3661427b.slice/crio-bc417053edbba7fb63512577ba542f0d20138993da626f44b46b6b4f36d44943 WatchSource:0}: Error finding container bc417053edbba7fb63512577ba542f0d20138993da626f44b46b6b4f36d44943: Status 404 returned error can't find the container with id bc417053edbba7fb63512577ba542f0d20138993da626f44b46b6b4f36d44943
Jan 30 13:25:04 crc kubenswrapper[5039]: I0130 13:25:04.285725 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-d68bccdc4-krd48"]
Jan 30 13:25:04 crc kubenswrapper[5039]: E0130 13:25:04.644323 5039 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod53390b3b_ff7d_4f71_8599_b1deebe3facf.slice/crio-conmon-12a01c6dc6a842b1829ed3854209adde60667039bf9946c69457cc43d120fa6c.scope\": RecentStats: unable to find data in memory cache]"
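
The pod_startup_latency_tracker entry above carries two durations: podStartE2EDuration (~54.1s) spans podCreationTimestamp to watchObservedRunningTime, while podStartSLOduration (~2.28s) is the same span minus the image-pull window (firstStartedPulling to lastFinishedPulling). Checking that arithmetic in Go with the timestamps copied from the log (the parse layout is an assumption):

    package main

    import (
    	"fmt"
    	"time"
    )

    const layout = "2006-01-02 15:04:05.999999999 -0700 MST" // assumed parse layout

    func ts(s string) time.Time {
    	t, err := time.Parse(layout, s)
    	if err != nil {
    		panic(err)
    	}
    	return t
    }

    func main() {
    	created := ts("2026-01-30 13:24:10 +0000 UTC")             // podCreationTimestamp
    	firstPull := ts("2026-01-30 13:24:11.932838378 +0000 UTC") // firstStartedPulling
    	lastPull := ts("2026-01-30 13:25:03.747335792 +0000 UTC")  // lastFinishedPulling
    	observed := ts("2026-01-30 13:25:04.09800217 +0000 UTC")   // watchObservedRunningTime

    	e2e := observed.Sub(created)         // podStartE2EDuration: 54.09800217s
    	slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration: 2.283504756s
    	fmt.Println("e2e:", e2e, "slo:", slo)
    }
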
Jan 30 13:25:04 crc kubenswrapper[5039]: I0130 13:25:04.935071 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 30 13:25:04 crc kubenswrapper[5039]: E0130 13:25:04.935715 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bba3dea-64f4-479f-b7f1-99c718d7b8af" containerName="cinder-db-sync"
Jan 30 13:25:04 crc kubenswrapper[5039]: I0130 13:25:04.935731 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bba3dea-64f4-479f-b7f1-99c718d7b8af" containerName="cinder-db-sync"
Jan 30 13:25:04 crc kubenswrapper[5039]: I0130 13:25:04.935885 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bba3dea-64f4-479f-b7f1-99c718d7b8af" containerName="cinder-db-sync"
Jan 30 13:25:04 crc kubenswrapper[5039]: I0130 13:25:04.942690 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 30 13:25:04 crc kubenswrapper[5039]: I0130 13:25:04.949990 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Jan 30 13:25:04 crc kubenswrapper[5039]: I0130 13:25:04.951473 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Jan 30 13:25:04 crc kubenswrapper[5039]: I0130 13:25:04.958635 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-slqjz"
Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.005357 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.039538 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-ckw2b"]
Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.054241 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.117221 5039 generic.go:334] "Generic (PLEG): container finished" podID="d1eb67cc-f1f4-4a29-94ce-ec7e196074a6" containerID="a0177265e57520638bd93de7eb3c05380e1d1715343a5e344e0eda1c38b5e020" exitCode=0
Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.117285 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-ckw2b" event={"ID":"d1eb67cc-f1f4-4a29-94ce-ec7e196074a6","Type":"ContainerDied","Data":"a0177265e57520638bd93de7eb3c05380e1d1715343a5e344e0eda1c38b5e020"}
Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.136794 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj"]
Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.138543 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.143244 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/77b835a6-4f17-4e1c-a3cc-847f89116483-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"77b835a6-4f17-4e1c-a3cc-847f89116483\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.158853 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/77b835a6-4f17-4e1c-a3cc-847f89116483-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"77b835a6-4f17-4e1c-a3cc-847f89116483\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.157947 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-554596898b-g5nlm" event={"ID":"7dddd2ab-85b5-4431-a111-dbb5ebff91d9","Type":"ContainerStarted","Data":"29be425c5367e4a4448b596ea2961d9dbe1edefed567e7098a16dcd15be0004e"} Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.159272 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-554596898b-g5nlm" event={"ID":"7dddd2ab-85b5-4431-a111-dbb5ebff91d9","Type":"ContainerStarted","Data":"fac484bba92b5b815bc7ba7abe75aa053f3d216781df9548a906cf83ec2532a9"} Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.159304 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-554596898b-g5nlm" event={"ID":"7dddd2ab-85b5-4431-a111-dbb5ebff91d9","Type":"ContainerStarted","Data":"74813a49ecb4fa38f422fbb99baf7d3b3305ab3829ed82acf91a86c0d3c6241c"} Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.159318 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-554596898b-g5nlm" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.159326 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-554596898b-g5nlm" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.159610 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77b835a6-4f17-4e1c-a3cc-847f89116483-scripts\") pod \"cinder-scheduler-0\" (UID: \"77b835a6-4f17-4e1c-a3cc-847f89116483\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.161568 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77b835a6-4f17-4e1c-a3cc-847f89116483-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"77b835a6-4f17-4e1c-a3cc-847f89116483\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.161714 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77b835a6-4f17-4e1c-a3cc-847f89116483-config-data\") pod \"cinder-scheduler-0\" (UID: \"77b835a6-4f17-4e1c-a3cc-847f89116483\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.163212 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb2xg\" (UniqueName: 
\"kubernetes.io/projected/77b835a6-4f17-4e1c-a3cc-847f89116483-kube-api-access-hb2xg\") pod \"cinder-scheduler-0\" (UID: \"77b835a6-4f17-4e1c-a3cc-847f89116483\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.182641 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-58897c98f4-8gk2m" event={"ID":"2081f65c-c5b5-4486-bdb3-49acf4f9ae46","Type":"ContainerStarted","Data":"a29f6ea9bd7977d8b70d64e9d426eab9ebe7d5ef4cfd719a9169adb5452882d1"} Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.195586 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d68bccdc4-krd48" event={"ID":"2125aae4-cb1a-4329-ba0a-68cc3661427b","Type":"ContainerStarted","Data":"e15c323864de83a51ac376f7f5979fb834dbfcc5fa3c9479affae05a54142583"} Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.195632 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d68bccdc4-krd48" event={"ID":"2125aae4-cb1a-4329-ba0a-68cc3661427b","Type":"ContainerStarted","Data":"20774dc7b8e4c0dc174586131c171b6d7af1959fda8becdffd9b6c9f4c9f2acb"} Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.195642 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d68bccdc4-krd48" event={"ID":"2125aae4-cb1a-4329-ba0a-68cc3661427b","Type":"ContainerStarted","Data":"bc417053edbba7fb63512577ba542f0d20138993da626f44b46b6b4f36d44943"} Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.196507 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-d68bccdc4-krd48" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.196535 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-d68bccdc4-krd48" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.225060 5039 generic.go:334] "Generic (PLEG): container finished" podID="53390b3b-ff7d-4f71-8599-b1deebe3facf" containerID="ed850552779a01c9a61fd4652e4d461d1eeae6398abc889defbeefacc95f8283" exitCode=2 Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.225089 5039 generic.go:334] "Generic (PLEG): container finished" podID="53390b3b-ff7d-4f71-8599-b1deebe3facf" containerID="12a01c6dc6a842b1829ed3854209adde60667039bf9946c69457cc43d120fa6c" exitCode=0 Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.225128 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53390b3b-ff7d-4f71-8599-b1deebe3facf","Type":"ContainerDied","Data":"ed850552779a01c9a61fd4652e4d461d1eeae6398abc889defbeefacc95f8283"} Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.225152 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53390b3b-ff7d-4f71-8599-b1deebe3facf","Type":"ContainerDied","Data":"12a01c6dc6a842b1829ed3854209adde60667039bf9946c69457cc43d120fa6c"} Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.226111 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7df987bf59-vgqrf" event={"ID":"48be0b7f-4cb1-4c00-851a-7078ed9ccab0","Type":"ContainerStarted","Data":"9ac08f4c6f7c3c5ee88f8d788b5d888e94f9e00b0aa4576cecd9745edd924e1b"} Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.251758 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj"] Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.267132 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-hb2xg\" (UniqueName: \"kubernetes.io/projected/77b835a6-4f17-4e1c-a3cc-847f89116483-kube-api-access-hb2xg\") pod \"cinder-scheduler-0\" (UID: \"77b835a6-4f17-4e1c-a3cc-847f89116483\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.267245 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/77b835a6-4f17-4e1c-a3cc-847f89116483-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"77b835a6-4f17-4e1c-a3cc-847f89116483\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.267274 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/77b835a6-4f17-4e1c-a3cc-847f89116483-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"77b835a6-4f17-4e1c-a3cc-847f89116483\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.267328 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlfrz\" (UniqueName: \"kubernetes.io/projected/d6f736d4-9056-434a-a2c8-8ffb02d153d8-kube-api-access-rlfrz\") pod \"dnsmasq-dns-5cc8b5d5c5-gs5qj\" (UID: \"d6f736d4-9056-434a-a2c8-8ffb02d153d8\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.267367 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d6f736d4-9056-434a-a2c8-8ffb02d153d8-dns-swift-storage-0\") pod \"dnsmasq-dns-5cc8b5d5c5-gs5qj\" (UID: \"d6f736d4-9056-434a-a2c8-8ffb02d153d8\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.267411 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d6f736d4-9056-434a-a2c8-8ffb02d153d8-ovsdbserver-sb\") pod \"dnsmasq-dns-5cc8b5d5c5-gs5qj\" (UID: \"d6f736d4-9056-434a-a2c8-8ffb02d153d8\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.267467 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6f736d4-9056-434a-a2c8-8ffb02d153d8-dns-svc\") pod \"dnsmasq-dns-5cc8b5d5c5-gs5qj\" (UID: \"d6f736d4-9056-434a-a2c8-8ffb02d153d8\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.267507 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6f736d4-9056-434a-a2c8-8ffb02d153d8-ovsdbserver-nb\") pod \"dnsmasq-dns-5cc8b5d5c5-gs5qj\" (UID: \"d6f736d4-9056-434a-a2c8-8ffb02d153d8\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.267541 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77b835a6-4f17-4e1c-a3cc-847f89116483-scripts\") pod \"cinder-scheduler-0\" (UID: \"77b835a6-4f17-4e1c-a3cc-847f89116483\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.267609 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/d6f736d4-9056-434a-a2c8-8ffb02d153d8-config\") pod \"dnsmasq-dns-5cc8b5d5c5-gs5qj\" (UID: \"d6f736d4-9056-434a-a2c8-8ffb02d153d8\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.267651 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77b835a6-4f17-4e1c-a3cc-847f89116483-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"77b835a6-4f17-4e1c-a3cc-847f89116483\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.267720 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77b835a6-4f17-4e1c-a3cc-847f89116483-config-data\") pod \"cinder-scheduler-0\" (UID: \"77b835a6-4f17-4e1c-a3cc-847f89116483\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.271606 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/77b835a6-4f17-4e1c-a3cc-847f89116483-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"77b835a6-4f17-4e1c-a3cc-847f89116483\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.283834 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77b835a6-4f17-4e1c-a3cc-847f89116483-scripts\") pod \"cinder-scheduler-0\" (UID: \"77b835a6-4f17-4e1c-a3cc-847f89116483\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.284981 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77b835a6-4f17-4e1c-a3cc-847f89116483-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"77b835a6-4f17-4e1c-a3cc-847f89116483\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.286576 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/77b835a6-4f17-4e1c-a3cc-847f89116483-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"77b835a6-4f17-4e1c-a3cc-847f89116483\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.296085 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.297527 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.301767 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77b835a6-4f17-4e1c-a3cc-847f89116483-config-data\") pod \"cinder-scheduler-0\" (UID: \"77b835a6-4f17-4e1c-a3cc-847f89116483\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.319914 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.320118 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hb2xg\" (UniqueName: \"kubernetes.io/projected/77b835a6-4f17-4e1c-a3cc-847f89116483-kube-api-access-hb2xg\") pod \"cinder-scheduler-0\" (UID: \"77b835a6-4f17-4e1c-a3cc-847f89116483\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.374257 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6f736d4-9056-434a-a2c8-8ffb02d153d8-dns-svc\") pod \"dnsmasq-dns-5cc8b5d5c5-gs5qj\" (UID: \"d6f736d4-9056-434a-a2c8-8ffb02d153d8\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.374317 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6f736d4-9056-434a-a2c8-8ffb02d153d8-ovsdbserver-nb\") pod \"dnsmasq-dns-5cc8b5d5c5-gs5qj\" (UID: \"d6f736d4-9056-434a-a2c8-8ffb02d153d8\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.374364 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abcf0e62-e031-45c0-a683-24fe3912193e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"abcf0e62-e031-45c0-a683-24fe3912193e\") " pod="openstack/cinder-api-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.374409 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6f736d4-9056-434a-a2c8-8ffb02d153d8-config\") pod \"dnsmasq-dns-5cc8b5d5c5-gs5qj\" (UID: \"d6f736d4-9056-434a-a2c8-8ffb02d153d8\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.374486 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abcf0e62-e031-45c0-a683-24fe3912193e-config-data\") pod \"cinder-api-0\" (UID: \"abcf0e62-e031-45c0-a683-24fe3912193e\") " pod="openstack/cinder-api-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.374515 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5cgt\" (UniqueName: \"kubernetes.io/projected/abcf0e62-e031-45c0-a683-24fe3912193e-kube-api-access-h5cgt\") pod \"cinder-api-0\" (UID: \"abcf0e62-e031-45c0-a683-24fe3912193e\") " pod="openstack/cinder-api-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.374572 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abcf0e62-e031-45c0-a683-24fe3912193e-logs\") pod \"cinder-api-0\" (UID: \"abcf0e62-e031-45c0-a683-24fe3912193e\") " 
pod="openstack/cinder-api-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.374644 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlfrz\" (UniqueName: \"kubernetes.io/projected/d6f736d4-9056-434a-a2c8-8ffb02d153d8-kube-api-access-rlfrz\") pod \"dnsmasq-dns-5cc8b5d5c5-gs5qj\" (UID: \"d6f736d4-9056-434a-a2c8-8ffb02d153d8\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.374678 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/abcf0e62-e031-45c0-a683-24fe3912193e-config-data-custom\") pod \"cinder-api-0\" (UID: \"abcf0e62-e031-45c0-a683-24fe3912193e\") " pod="openstack/cinder-api-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.374720 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d6f736d4-9056-434a-a2c8-8ffb02d153d8-dns-swift-storage-0\") pod \"dnsmasq-dns-5cc8b5d5c5-gs5qj\" (UID: \"d6f736d4-9056-434a-a2c8-8ffb02d153d8\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.374749 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/abcf0e62-e031-45c0-a683-24fe3912193e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"abcf0e62-e031-45c0-a683-24fe3912193e\") " pod="openstack/cinder-api-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.374776 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/abcf0e62-e031-45c0-a683-24fe3912193e-scripts\") pod \"cinder-api-0\" (UID: \"abcf0e62-e031-45c0-a683-24fe3912193e\") " pod="openstack/cinder-api-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.374813 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d6f736d4-9056-434a-a2c8-8ffb02d153d8-ovsdbserver-sb\") pod \"dnsmasq-dns-5cc8b5d5c5-gs5qj\" (UID: \"d6f736d4-9056-434a-a2c8-8ffb02d153d8\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.395939 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.396749 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d6f736d4-9056-434a-a2c8-8ffb02d153d8-ovsdbserver-sb\") pod \"dnsmasq-dns-5cc8b5d5c5-gs5qj\" (UID: \"d6f736d4-9056-434a-a2c8-8ffb02d153d8\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.410940 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d6f736d4-9056-434a-a2c8-8ffb02d153d8-dns-swift-storage-0\") pod \"dnsmasq-dns-5cc8b5d5c5-gs5qj\" (UID: \"d6f736d4-9056-434a-a2c8-8ffb02d153d8\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.413853 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6f736d4-9056-434a-a2c8-8ffb02d153d8-config\") pod \"dnsmasq-dns-5cc8b5d5c5-gs5qj\" (UID: 
\"d6f736d4-9056-434a-a2c8-8ffb02d153d8\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.420157 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-554596898b-g5nlm" podStartSLOduration=5.420130306 podStartE2EDuration="5.420130306s" podCreationTimestamp="2026-01-30 13:25:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:25:05.202593973 +0000 UTC m=+1269.863275210" watchObservedRunningTime="2026-01-30 13:25:05.420130306 +0000 UTC m=+1270.080811553" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.421811 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlfrz\" (UniqueName: \"kubernetes.io/projected/d6f736d4-9056-434a-a2c8-8ffb02d153d8-kube-api-access-rlfrz\") pod \"dnsmasq-dns-5cc8b5d5c5-gs5qj\" (UID: \"d6f736d4-9056-434a-a2c8-8ffb02d153d8\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.424691 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6f736d4-9056-434a-a2c8-8ffb02d153d8-dns-svc\") pod \"dnsmasq-dns-5cc8b5d5c5-gs5qj\" (UID: \"d6f736d4-9056-434a-a2c8-8ffb02d153d8\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.428831 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6f736d4-9056-434a-a2c8-8ffb02d153d8-ovsdbserver-nb\") pod \"dnsmasq-dns-5cc8b5d5c5-gs5qj\" (UID: \"d6f736d4-9056-434a-a2c8-8ffb02d153d8\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.441555 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-d68bccdc4-krd48" podStartSLOduration=2.4415319589999998 podStartE2EDuration="2.441531959s" podCreationTimestamp="2026-01-30 13:25:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:25:05.240462737 +0000 UTC m=+1269.901143964" watchObservedRunningTime="2026-01-30 13:25:05.441531959 +0000 UTC m=+1270.102213196" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.475970 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abcf0e62-e031-45c0-a683-24fe3912193e-logs\") pod \"cinder-api-0\" (UID: \"abcf0e62-e031-45c0-a683-24fe3912193e\") " pod="openstack/cinder-api-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.476084 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/abcf0e62-e031-45c0-a683-24fe3912193e-config-data-custom\") pod \"cinder-api-0\" (UID: \"abcf0e62-e031-45c0-a683-24fe3912193e\") " pod="openstack/cinder-api-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.476127 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/abcf0e62-e031-45c0-a683-24fe3912193e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"abcf0e62-e031-45c0-a683-24fe3912193e\") " pod="openstack/cinder-api-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.476151 5039 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/abcf0e62-e031-45c0-a683-24fe3912193e-scripts\") pod \"cinder-api-0\" (UID: \"abcf0e62-e031-45c0-a683-24fe3912193e\") " pod="openstack/cinder-api-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.476235 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abcf0e62-e031-45c0-a683-24fe3912193e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"abcf0e62-e031-45c0-a683-24fe3912193e\") " pod="openstack/cinder-api-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.476308 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abcf0e62-e031-45c0-a683-24fe3912193e-config-data\") pod \"cinder-api-0\" (UID: \"abcf0e62-e031-45c0-a683-24fe3912193e\") " pod="openstack/cinder-api-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.476332 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5cgt\" (UniqueName: \"kubernetes.io/projected/abcf0e62-e031-45c0-a683-24fe3912193e-kube-api-access-h5cgt\") pod \"cinder-api-0\" (UID: \"abcf0e62-e031-45c0-a683-24fe3912193e\") " pod="openstack/cinder-api-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.477073 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abcf0e62-e031-45c0-a683-24fe3912193e-logs\") pod \"cinder-api-0\" (UID: \"abcf0e62-e031-45c0-a683-24fe3912193e\") " pod="openstack/cinder-api-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.482108 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/abcf0e62-e031-45c0-a683-24fe3912193e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"abcf0e62-e031-45c0-a683-24fe3912193e\") " pod="openstack/cinder-api-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.484508 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/abcf0e62-e031-45c0-a683-24fe3912193e-scripts\") pod \"cinder-api-0\" (UID: \"abcf0e62-e031-45c0-a683-24fe3912193e\") " pod="openstack/cinder-api-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.488179 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abcf0e62-e031-45c0-a683-24fe3912193e-config-data\") pod \"cinder-api-0\" (UID: \"abcf0e62-e031-45c0-a683-24fe3912193e\") " pod="openstack/cinder-api-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.496478 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/abcf0e62-e031-45c0-a683-24fe3912193e-config-data-custom\") pod \"cinder-api-0\" (UID: \"abcf0e62-e031-45c0-a683-24fe3912193e\") " pod="openstack/cinder-api-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.497727 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abcf0e62-e031-45c0-a683-24fe3912193e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"abcf0e62-e031-45c0-a683-24fe3912193e\") " pod="openstack/cinder-api-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.497769 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5cgt\" 
(UniqueName: \"kubernetes.io/projected/abcf0e62-e031-45c0-a683-24fe3912193e-kube-api-access-h5cgt\") pod \"cinder-api-0\" (UID: \"abcf0e62-e031-45c0-a683-24fe3912193e\") " pod="openstack/cinder-api-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.552693 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.582531 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.651914 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 13:25:05 crc kubenswrapper[5039]: E0130 13:25:05.707183 5039 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Jan 30 13:25:05 crc kubenswrapper[5039]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 30 13:25:05 crc kubenswrapper[5039]: > podSandboxID="fb387ce16180e58b0615ab1513956b368d0ad2d05fbc8c8708e9cbc7f8c6e124" Jan 30 13:25:05 crc kubenswrapper[5039]: E0130 13:25:05.707617 5039 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 30 13:25:05 crc kubenswrapper[5039]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n574hbch97h666hbbh5fch555h5ddh649h699hf4h9ch6h699h55h5b7h5b9h5d5hf6h686h5cfh599h594h559h645h699h55h5f8h54ch555h55bh655q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-swift-storage-0,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-swift-storage-0,SubPath:dns-swift-storage-0,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4gwlc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-7c67bffd47-ckw2b_openstack(d1eb67cc-f1f4-4a29-94ce-ec7e196074a6): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 30 13:25:05 crc kubenswrapper[5039]: > logger="UnhandledError" Jan 30 13:25:05 crc kubenswrapper[5039]: E0130 13:25:05.709131 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-7c67bffd47-ckw2b" podUID="d1eb67cc-f1f4-4a29-94ce-ec7e196074a6" Jan 30 13:25:06 crc kubenswrapper[5039]: I0130 13:25:06.052550 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj"] Jan 30 13:25:06 crc kubenswrapper[5039]: I0130 13:25:06.131507 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 13:25:06 crc kubenswrapper[5039]: I0130 13:25:06.207398 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 13:25:06 crc kubenswrapper[5039]: W0130 13:25:06.262697 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod77b835a6_4f17_4e1c_a3cc_847f89116483.slice/crio-8b4e01f432cd0c7377d67bd22682298770c6198935a20ece2693cb8ca90d535e WatchSource:0}: Error finding container 8b4e01f432cd0c7377d67bd22682298770c6198935a20ece2693cb8ca90d535e: Status 404 returned error can't find the container with id 8b4e01f432cd0c7377d67bd22682298770c6198935a20ece2693cb8ca90d535e Jan 30 13:25:06 crc kubenswrapper[5039]: W0130 13:25:06.263154 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd6f736d4_9056_434a_a2c8_8ffb02d153d8.slice/crio-15e7f2e415fc91af9cab4428ae10359e4333d32fa3eb657c4bbfdc076a99c38f WatchSource:0}: Error finding container 15e7f2e415fc91af9cab4428ae10359e4333d32fa3eb657c4bbfdc076a99c38f: Status 404 returned error can't find the container with id 15e7f2e415fc91af9cab4428ae10359e4333d32fa3eb657c4bbfdc076a99c38f Jan 30 13:25:06 crc kubenswrapper[5039]: W0130 13:25:06.645605 5039 manager.go:1169] 
Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podabcf0e62_e031_45c0_a683_24fe3912193e.slice/crio-b4e9e6421a4e6b2fcfcd571f9ce84ba9c1ebc52a1febaec18760f578a76730b6 WatchSource:0}: Error finding container b4e9e6421a4e6b2fcfcd571f9ce84ba9c1ebc52a1febaec18760f578a76730b6: Status 404 returned error can't find the container with id b4e9e6421a4e6b2fcfcd571f9ce84ba9c1ebc52a1febaec18760f578a76730b6 Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.003737 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-ckw2b" Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.120881 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-config\") pod \"d1eb67cc-f1f4-4a29-94ce-ec7e196074a6\" (UID: \"d1eb67cc-f1f4-4a29-94ce-ec7e196074a6\") " Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.120948 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-dns-swift-storage-0\") pod \"d1eb67cc-f1f4-4a29-94ce-ec7e196074a6\" (UID: \"d1eb67cc-f1f4-4a29-94ce-ec7e196074a6\") " Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.121043 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-ovsdbserver-nb\") pod \"d1eb67cc-f1f4-4a29-94ce-ec7e196074a6\" (UID: \"d1eb67cc-f1f4-4a29-94ce-ec7e196074a6\") " Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.121065 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-dns-svc\") pod \"d1eb67cc-f1f4-4a29-94ce-ec7e196074a6\" (UID: \"d1eb67cc-f1f4-4a29-94ce-ec7e196074a6\") " Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.121113 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-ovsdbserver-sb\") pod \"d1eb67cc-f1f4-4a29-94ce-ec7e196074a6\" (UID: \"d1eb67cc-f1f4-4a29-94ce-ec7e196074a6\") " Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.121146 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gwlc\" (UniqueName: \"kubernetes.io/projected/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-kube-api-access-4gwlc\") pod \"d1eb67cc-f1f4-4a29-94ce-ec7e196074a6\" (UID: \"d1eb67cc-f1f4-4a29-94ce-ec7e196074a6\") " Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.127185 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-kube-api-access-4gwlc" (OuterVolumeSpecName: "kube-api-access-4gwlc") pod "d1eb67cc-f1f4-4a29-94ce-ec7e196074a6" (UID: "d1eb67cc-f1f4-4a29-94ce-ec7e196074a6"). InnerVolumeSpecName "kube-api-access-4gwlc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.199431 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d1eb67cc-f1f4-4a29-94ce-ec7e196074a6" (UID: "d1eb67cc-f1f4-4a29-94ce-ec7e196074a6"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.208419 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d1eb67cc-f1f4-4a29-94ce-ec7e196074a6" (UID: "d1eb67cc-f1f4-4a29-94ce-ec7e196074a6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.223993 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-config" (OuterVolumeSpecName: "config") pod "d1eb67cc-f1f4-4a29-94ce-ec7e196074a6" (UID: "d1eb67cc-f1f4-4a29-94ce-ec7e196074a6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.225847 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d1eb67cc-f1f4-4a29-94ce-ec7e196074a6" (UID: "d1eb67cc-f1f4-4a29-94ce-ec7e196074a6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.228734 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gwlc\" (UniqueName: \"kubernetes.io/projected/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-kube-api-access-4gwlc\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.228763 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.228775 5039 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.228786 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.228795 5039 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.230118 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d1eb67cc-f1f4-4a29-94ce-ec7e196074a6" (UID: "d1eb67cc-f1f4-4a29-94ce-ec7e196074a6"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.260076 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"77b835a6-4f17-4e1c-a3cc-847f89116483","Type":"ContainerStarted","Data":"8b4e01f432cd0c7377d67bd22682298770c6198935a20ece2693cb8ca90d535e"} Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.261965 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-58897c98f4-8gk2m" event={"ID":"2081f65c-c5b5-4486-bdb3-49acf4f9ae46","Type":"ContainerStarted","Data":"bdbe03e58233ea3203b5cdcc7425ccca349ed21cb2718b0262b974919bb7bff9"} Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.273814 5039 generic.go:334] "Generic (PLEG): container finished" podID="53390b3b-ff7d-4f71-8599-b1deebe3facf" containerID="6d4ad33b26e95108fb45b090ba7cbe025c76f54a84e9e566db7be7d95d4cdba9" exitCode=0 Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.273868 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53390b3b-ff7d-4f71-8599-b1deebe3facf","Type":"ContainerDied","Data":"6d4ad33b26e95108fb45b090ba7cbe025c76f54a84e9e566db7be7d95d4cdba9"} Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.277745 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7df987bf59-vgqrf" event={"ID":"48be0b7f-4cb1-4c00-851a-7078ed9ccab0","Type":"ContainerStarted","Data":"999630fe82687672ff916af3c657da39f3cbb4c167e3ae06b0d1c3d7c3e75615"} Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.284994 5039 generic.go:334] "Generic (PLEG): container finished" podID="326188c4-7523-49b7-9790-063f3f18988d" containerID="199c8cec8c222bfcceace6b75632fb6697662b7f6c6301058c03c2e78d81eeb4" exitCode=0 Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.285072 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9z97g" event={"ID":"326188c4-7523-49b7-9790-063f3f18988d","Type":"ContainerDied","Data":"199c8cec8c222bfcceace6b75632fb6697662b7f6c6301058c03c2e78d81eeb4"} Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.286489 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"abcf0e62-e031-45c0-a683-24fe3912193e","Type":"ContainerStarted","Data":"b4e9e6421a4e6b2fcfcd571f9ce84ba9c1ebc52a1febaec18760f578a76730b6"} Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.289887 5039 generic.go:334] "Generic (PLEG): container finished" podID="d6f736d4-9056-434a-a2c8-8ffb02d153d8" containerID="202a215858c1bda40e1d1cf756da90f70ae47dad320eedfdac6841f4efe0a7ee" exitCode=0 Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.289978 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj" event={"ID":"d6f736d4-9056-434a-a2c8-8ffb02d153d8","Type":"ContainerDied","Data":"202a215858c1bda40e1d1cf756da90f70ae47dad320eedfdac6841f4efe0a7ee"} Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.290053 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj" event={"ID":"d6f736d4-9056-434a-a2c8-8ffb02d153d8","Type":"ContainerStarted","Data":"15e7f2e415fc91af9cab4428ae10359e4333d32fa3eb657c4bbfdc076a99c38f"} Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.303029 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-ckw2b" Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.303326 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-ckw2b" event={"ID":"d1eb67cc-f1f4-4a29-94ce-ec7e196074a6","Type":"ContainerDied","Data":"fb387ce16180e58b0615ab1513956b368d0ad2d05fbc8c8708e9cbc7f8c6e124"} Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.303363 5039 scope.go:117] "RemoveContainer" containerID="a0177265e57520638bd93de7eb3c05380e1d1715343a5e344e0eda1c38b5e020" Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.331538 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.387956 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-ckw2b"] Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.396251 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-ckw2b"] Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.742743 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.743185 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.743235 5039 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.744135 5039 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"119b1bd0e0bf998c735e7f9b382fd07971ec4cf601e1a066f9ce6f8c22b79521"} pod="openshift-machine-config-operator/machine-config-daemon-t2btn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.744191 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" containerID="cri-o://119b1bd0e0bf998c735e7f9b382fd07971ec4cf601e1a066f9ce6f8c22b79521" gracePeriod=600 Jan 30 13:25:08 crc kubenswrapper[5039]: I0130 13:25:08.120297 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1eb67cc-f1f4-4a29-94ce-ec7e196074a6" path="/var/lib/kubelet/pods/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6/volumes" Jan 30 13:25:08 crc kubenswrapper[5039]: I0130 13:25:08.319560 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"abcf0e62-e031-45c0-a683-24fe3912193e","Type":"ContainerStarted","Data":"30d64591daa8198ff127dab422dcff50ec6c18c04a24f713d0fcc3e3a2130eed"} Jan 30 13:25:08 crc kubenswrapper[5039]: I0130 13:25:08.321272 5039 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj" event={"ID":"d6f736d4-9056-434a-a2c8-8ffb02d153d8","Type":"ContainerStarted","Data":"28780b27d83859e0202459c655ccdd7cef8829d329ae4bf006dc41c7958f93ab"} Jan 30 13:25:08 crc kubenswrapper[5039]: I0130 13:25:08.321873 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj" Jan 30 13:25:08 crc kubenswrapper[5039]: I0130 13:25:08.327307 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"77b835a6-4f17-4e1c-a3cc-847f89116483","Type":"ContainerStarted","Data":"48c68619a50ada8cc1df54d8cada3034bd1087cc54fad3d832f8743974af62f9"} Jan 30 13:25:08 crc kubenswrapper[5039]: I0130 13:25:08.328685 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-58897c98f4-8gk2m" event={"ID":"2081f65c-c5b5-4486-bdb3-49acf4f9ae46","Type":"ContainerStarted","Data":"b8cc807d266e20c9a223ef3cd6da5c84789370a7b8ae7a8b58a98bf4f2033c9c"} Jan 30 13:25:08 crc kubenswrapper[5039]: I0130 13:25:08.336032 5039 generic.go:334] "Generic (PLEG): container finished" podID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerID="119b1bd0e0bf998c735e7f9b382fd07971ec4cf601e1a066f9ce6f8c22b79521" exitCode=0 Jan 30 13:25:08 crc kubenswrapper[5039]: I0130 13:25:08.336094 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerDied","Data":"119b1bd0e0bf998c735e7f9b382fd07971ec4cf601e1a066f9ce6f8c22b79521"} Jan 30 13:25:08 crc kubenswrapper[5039]: I0130 13:25:08.336119 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerStarted","Data":"794f242d7a377f48231607395088aab9150aeb8ff8f26262235590d766c6a0f4"} Jan 30 13:25:08 crc kubenswrapper[5039]: I0130 13:25:08.336135 5039 scope.go:117] "RemoveContainer" containerID="2ff7f77d739c9482a391687ff7929b8952cb2b486c1569c85a29b6ddbbdffffc" Jan 30 13:25:08 crc kubenswrapper[5039]: I0130 13:25:08.340177 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7df987bf59-vgqrf" event={"ID":"48be0b7f-4cb1-4c00-851a-7078ed9ccab0","Type":"ContainerStarted","Data":"b64200237104355f7f5f1cc6656503847ea902d272ec63a86f5fcc0f5a9a8b06"} Jan 30 13:25:08 crc kubenswrapper[5039]: I0130 13:25:08.372443 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj" podStartSLOduration=4.372411117 podStartE2EDuration="4.372411117s" podCreationTimestamp="2026-01-30 13:25:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:25:08.364468844 +0000 UTC m=+1273.025150071" watchObservedRunningTime="2026-01-30 13:25:08.372411117 +0000 UTC m=+1273.033092344" Jan 30 13:25:08 crc kubenswrapper[5039]: I0130 13:25:08.414879 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-7df987bf59-vgqrf" podStartSLOduration=5.819603072 podStartE2EDuration="8.414861783s" podCreationTimestamp="2026-01-30 13:25:00 +0000 UTC" firstStartedPulling="2026-01-30 13:25:04.167498351 +0000 UTC m=+1268.828179578" lastFinishedPulling="2026-01-30 13:25:06.762757062 +0000 UTC m=+1271.423438289" observedRunningTime="2026-01-30 
13:25:08.411464682 +0000 UTC m=+1273.072145909" watchObservedRunningTime="2026-01-30 13:25:08.414861783 +0000 UTC m=+1273.075543010" Jan 30 13:25:08 crc kubenswrapper[5039]: I0130 13:25:08.453069 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 30 13:25:08 crc kubenswrapper[5039]: I0130 13:25:08.454362 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-58897c98f4-8gk2m" podStartSLOduration=5.746384582 podStartE2EDuration="8.45434477s" podCreationTimestamp="2026-01-30 13:25:00 +0000 UTC" firstStartedPulling="2026-01-30 13:25:04.054795604 +0000 UTC m=+1268.715476831" lastFinishedPulling="2026-01-30 13:25:06.762755792 +0000 UTC m=+1271.423437019" observedRunningTime="2026-01-30 13:25:08.440181111 +0000 UTC m=+1273.100862338" watchObservedRunningTime="2026-01-30 13:25:08.45434477 +0000 UTC m=+1273.115025997" Jan 30 13:25:08 crc kubenswrapper[5039]: I0130 13:25:08.810268 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-9z97g" Jan 30 13:25:08 crc kubenswrapper[5039]: I0130 13:25:08.876144 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8h45b\" (UniqueName: \"kubernetes.io/projected/326188c4-7523-49b7-9790-063f3f18988d-kube-api-access-8h45b\") pod \"326188c4-7523-49b7-9790-063f3f18988d\" (UID: \"326188c4-7523-49b7-9790-063f3f18988d\") " Jan 30 13:25:08 crc kubenswrapper[5039]: I0130 13:25:08.876229 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/326188c4-7523-49b7-9790-063f3f18988d-combined-ca-bundle\") pod \"326188c4-7523-49b7-9790-063f3f18988d\" (UID: \"326188c4-7523-49b7-9790-063f3f18988d\") " Jan 30 13:25:08 crc kubenswrapper[5039]: I0130 13:25:08.876375 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/326188c4-7523-49b7-9790-063f3f18988d-config\") pod \"326188c4-7523-49b7-9790-063f3f18988d\" (UID: \"326188c4-7523-49b7-9790-063f3f18988d\") " Jan 30 13:25:08 crc kubenswrapper[5039]: I0130 13:25:08.893876 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/326188c4-7523-49b7-9790-063f3f18988d-kube-api-access-8h45b" (OuterVolumeSpecName: "kube-api-access-8h45b") pod "326188c4-7523-49b7-9790-063f3f18988d" (UID: "326188c4-7523-49b7-9790-063f3f18988d"). InnerVolumeSpecName "kube-api-access-8h45b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:25:08 crc kubenswrapper[5039]: I0130 13:25:08.941735 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/326188c4-7523-49b7-9790-063f3f18988d-config" (OuterVolumeSpecName: "config") pod "326188c4-7523-49b7-9790-063f3f18988d" (UID: "326188c4-7523-49b7-9790-063f3f18988d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:08 crc kubenswrapper[5039]: I0130 13:25:08.953299 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/326188c4-7523-49b7-9790-063f3f18988d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "326188c4-7523-49b7-9790-063f3f18988d" (UID: "326188c4-7523-49b7-9790-063f3f18988d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:08 crc kubenswrapper[5039]: I0130 13:25:08.979918 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8h45b\" (UniqueName: \"kubernetes.io/projected/326188c4-7523-49b7-9790-063f3f18988d-kube-api-access-8h45b\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:08 crc kubenswrapper[5039]: I0130 13:25:08.979956 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/326188c4-7523-49b7-9790-063f3f18988d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:08 crc kubenswrapper[5039]: I0130 13:25:08.979972 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/326188c4-7523-49b7-9790-063f3f18988d-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.362374 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9z97g" event={"ID":"326188c4-7523-49b7-9790-063f3f18988d","Type":"ContainerDied","Data":"60e9e87dcbd56ad2a26749df265534c5a637db1cb5f1553c4614e9b195d338b4"} Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.362437 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60e9e87dcbd56ad2a26749df265534c5a637db1cb5f1553c4614e9b195d338b4" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.362529 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-9z97g" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.365434 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"abcf0e62-e031-45c0-a683-24fe3912193e","Type":"ContainerStarted","Data":"c4a0248c0741fd321b91cf7584f4ccde3e46e592605ba5ca1d04c79d2e6a0df1"} Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.366566 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.369549 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"77b835a6-4f17-4e1c-a3cc-847f89116483","Type":"ContainerStarted","Data":"d879620bdd58ffdce74d7144f52c7477018b7f2d590ea0375fc4e1924d6fd912"} Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.419460 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.419436607 podStartE2EDuration="4.419436607s" podCreationTimestamp="2026-01-30 13:25:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:25:09.407077976 +0000 UTC m=+1274.067759223" watchObservedRunningTime="2026-01-30 13:25:09.419436607 +0000 UTC m=+1274.080117834" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.482637 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.591272475 podStartE2EDuration="5.482615998s" podCreationTimestamp="2026-01-30 13:25:04 +0000 UTC" firstStartedPulling="2026-01-30 13:25:06.267514113 +0000 UTC m=+1270.928195340" lastFinishedPulling="2026-01-30 13:25:07.158857636 +0000 UTC m=+1271.819538863" observedRunningTime="2026-01-30 13:25:09.468387048 +0000 UTC m=+1274.129068275" watchObservedRunningTime="2026-01-30 13:25:09.482615998 +0000 UTC m=+1274.143297225" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 
13:25:09.546236 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj"] Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.597259 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-9cwmz"] Jan 30 13:25:09 crc kubenswrapper[5039]: E0130 13:25:09.598052 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="326188c4-7523-49b7-9790-063f3f18988d" containerName="neutron-db-sync" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.598070 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="326188c4-7523-49b7-9790-063f3f18988d" containerName="neutron-db-sync" Jan 30 13:25:09 crc kubenswrapper[5039]: E0130 13:25:09.598099 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1eb67cc-f1f4-4a29-94ce-ec7e196074a6" containerName="init" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.598105 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1eb67cc-f1f4-4a29-94ce-ec7e196074a6" containerName="init" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.598266 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="326188c4-7523-49b7-9790-063f3f18988d" containerName="neutron-db-sync" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.598291 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1eb67cc-f1f4-4a29-94ce-ec7e196074a6" containerName="init" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.599294 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-9cwmz" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.627113 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-8654cc59b8-vwcl9"] Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.628669 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8654cc59b8-vwcl9" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.638878 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.639285 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-9cwmz"] Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.639354 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-fjxzp" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.639373 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.639523 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.651934 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8654cc59b8-vwcl9"] Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.697829 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzwc4\" (UniqueName: \"kubernetes.io/projected/3c796c5f-b2e9-4a42-af9c-14b03c99d213-kube-api-access-gzwc4\") pod \"dnsmasq-dns-6578955fd5-9cwmz\" (UID: \"3c796c5f-b2e9-4a42-af9c-14b03c99d213\") " pod="openstack/dnsmasq-dns-6578955fd5-9cwmz" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.697894 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/17a4f926-925d-44d3-855f-9387166c771b-config\") pod \"neutron-8654cc59b8-vwcl9\" (UID: \"17a4f926-925d-44d3-855f-9387166c771b\") " pod="openstack/neutron-8654cc59b8-vwcl9" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.697918 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c796c5f-b2e9-4a42-af9c-14b03c99d213-dns-svc\") pod \"dnsmasq-dns-6578955fd5-9cwmz\" (UID: \"3c796c5f-b2e9-4a42-af9c-14b03c99d213\") " pod="openstack/dnsmasq-dns-6578955fd5-9cwmz" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.697934 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/17a4f926-925d-44d3-855f-9387166c771b-ovndb-tls-certs\") pod \"neutron-8654cc59b8-vwcl9\" (UID: \"17a4f926-925d-44d3-855f-9387166c771b\") " pod="openstack/neutron-8654cc59b8-vwcl9" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.697960 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c796c5f-b2e9-4a42-af9c-14b03c99d213-config\") pod \"dnsmasq-dns-6578955fd5-9cwmz\" (UID: \"3c796c5f-b2e9-4a42-af9c-14b03c99d213\") " pod="openstack/dnsmasq-dns-6578955fd5-9cwmz" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.697980 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c796c5f-b2e9-4a42-af9c-14b03c99d213-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-9cwmz\" (UID: \"3c796c5f-b2e9-4a42-af9c-14b03c99d213\") " pod="openstack/dnsmasq-dns-6578955fd5-9cwmz" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.698124 5039 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3c796c5f-b2e9-4a42-af9c-14b03c99d213-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-9cwmz\" (UID: \"3c796c5f-b2e9-4a42-af9c-14b03c99d213\") " pod="openstack/dnsmasq-dns-6578955fd5-9cwmz" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.698148 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/17a4f926-925d-44d3-855f-9387166c771b-httpd-config\") pod \"neutron-8654cc59b8-vwcl9\" (UID: \"17a4f926-925d-44d3-855f-9387166c771b\") " pod="openstack/neutron-8654cc59b8-vwcl9" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.698205 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17a4f926-925d-44d3-855f-9387166c771b-combined-ca-bundle\") pod \"neutron-8654cc59b8-vwcl9\" (UID: \"17a4f926-925d-44d3-855f-9387166c771b\") " pod="openstack/neutron-8654cc59b8-vwcl9" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.698228 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c796c5f-b2e9-4a42-af9c-14b03c99d213-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-9cwmz\" (UID: \"3c796c5f-b2e9-4a42-af9c-14b03c99d213\") " pod="openstack/dnsmasq-dns-6578955fd5-9cwmz" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.698242 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgq8v\" (UniqueName: \"kubernetes.io/projected/17a4f926-925d-44d3-855f-9387166c771b-kube-api-access-pgq8v\") pod \"neutron-8654cc59b8-vwcl9\" (UID: \"17a4f926-925d-44d3-855f-9387166c771b\") " pod="openstack/neutron-8654cc59b8-vwcl9" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.799429 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17a4f926-925d-44d3-855f-9387166c771b-combined-ca-bundle\") pod \"neutron-8654cc59b8-vwcl9\" (UID: \"17a4f926-925d-44d3-855f-9387166c771b\") " pod="openstack/neutron-8654cc59b8-vwcl9" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.799489 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c796c5f-b2e9-4a42-af9c-14b03c99d213-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-9cwmz\" (UID: \"3c796c5f-b2e9-4a42-af9c-14b03c99d213\") " pod="openstack/dnsmasq-dns-6578955fd5-9cwmz" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.799518 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgq8v\" (UniqueName: \"kubernetes.io/projected/17a4f926-925d-44d3-855f-9387166c771b-kube-api-access-pgq8v\") pod \"neutron-8654cc59b8-vwcl9\" (UID: \"17a4f926-925d-44d3-855f-9387166c771b\") " pod="openstack/neutron-8654cc59b8-vwcl9" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.799579 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzwc4\" (UniqueName: \"kubernetes.io/projected/3c796c5f-b2e9-4a42-af9c-14b03c99d213-kube-api-access-gzwc4\") pod \"dnsmasq-dns-6578955fd5-9cwmz\" (UID: \"3c796c5f-b2e9-4a42-af9c-14b03c99d213\") " pod="openstack/dnsmasq-dns-6578955fd5-9cwmz" Jan 30 13:25:09 crc 
kubenswrapper[5039]: I0130 13:25:09.799640 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/17a4f926-925d-44d3-855f-9387166c771b-config\") pod \"neutron-8654cc59b8-vwcl9\" (UID: \"17a4f926-925d-44d3-855f-9387166c771b\") " pod="openstack/neutron-8654cc59b8-vwcl9" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.799670 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c796c5f-b2e9-4a42-af9c-14b03c99d213-dns-svc\") pod \"dnsmasq-dns-6578955fd5-9cwmz\" (UID: \"3c796c5f-b2e9-4a42-af9c-14b03c99d213\") " pod="openstack/dnsmasq-dns-6578955fd5-9cwmz" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.799699 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/17a4f926-925d-44d3-855f-9387166c771b-ovndb-tls-certs\") pod \"neutron-8654cc59b8-vwcl9\" (UID: \"17a4f926-925d-44d3-855f-9387166c771b\") " pod="openstack/neutron-8654cc59b8-vwcl9" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.799736 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c796c5f-b2e9-4a42-af9c-14b03c99d213-config\") pod \"dnsmasq-dns-6578955fd5-9cwmz\" (UID: \"3c796c5f-b2e9-4a42-af9c-14b03c99d213\") " pod="openstack/dnsmasq-dns-6578955fd5-9cwmz" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.799800 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c796c5f-b2e9-4a42-af9c-14b03c99d213-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-9cwmz\" (UID: \"3c796c5f-b2e9-4a42-af9c-14b03c99d213\") " pod="openstack/dnsmasq-dns-6578955fd5-9cwmz" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.799865 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3c796c5f-b2e9-4a42-af9c-14b03c99d213-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-9cwmz\" (UID: \"3c796c5f-b2e9-4a42-af9c-14b03c99d213\") " pod="openstack/dnsmasq-dns-6578955fd5-9cwmz" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.799902 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/17a4f926-925d-44d3-855f-9387166c771b-httpd-config\") pod \"neutron-8654cc59b8-vwcl9\" (UID: \"17a4f926-925d-44d3-855f-9387166c771b\") " pod="openstack/neutron-8654cc59b8-vwcl9" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.802961 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c796c5f-b2e9-4a42-af9c-14b03c99d213-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-9cwmz\" (UID: \"3c796c5f-b2e9-4a42-af9c-14b03c99d213\") " pod="openstack/dnsmasq-dns-6578955fd5-9cwmz" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.803381 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c796c5f-b2e9-4a42-af9c-14b03c99d213-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-9cwmz\" (UID: \"3c796c5f-b2e9-4a42-af9c-14b03c99d213\") " pod="openstack/dnsmasq-dns-6578955fd5-9cwmz" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.803401 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/3c796c5f-b2e9-4a42-af9c-14b03c99d213-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-9cwmz\" (UID: \"3c796c5f-b2e9-4a42-af9c-14b03c99d213\") " pod="openstack/dnsmasq-dns-6578955fd5-9cwmz" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.803866 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c796c5f-b2e9-4a42-af9c-14b03c99d213-dns-svc\") pod \"dnsmasq-dns-6578955fd5-9cwmz\" (UID: \"3c796c5f-b2e9-4a42-af9c-14b03c99d213\") " pod="openstack/dnsmasq-dns-6578955fd5-9cwmz" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.804638 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c796c5f-b2e9-4a42-af9c-14b03c99d213-config\") pod \"dnsmasq-dns-6578955fd5-9cwmz\" (UID: \"3c796c5f-b2e9-4a42-af9c-14b03c99d213\") " pod="openstack/dnsmasq-dns-6578955fd5-9cwmz" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.805986 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17a4f926-925d-44d3-855f-9387166c771b-combined-ca-bundle\") pod \"neutron-8654cc59b8-vwcl9\" (UID: \"17a4f926-925d-44d3-855f-9387166c771b\") " pod="openstack/neutron-8654cc59b8-vwcl9" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.812126 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/17a4f926-925d-44d3-855f-9387166c771b-ovndb-tls-certs\") pod \"neutron-8654cc59b8-vwcl9\" (UID: \"17a4f926-925d-44d3-855f-9387166c771b\") " pod="openstack/neutron-8654cc59b8-vwcl9" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.815936 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/17a4f926-925d-44d3-855f-9387166c771b-httpd-config\") pod \"neutron-8654cc59b8-vwcl9\" (UID: \"17a4f926-925d-44d3-855f-9387166c771b\") " pod="openstack/neutron-8654cc59b8-vwcl9" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.817705 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/17a4f926-925d-44d3-855f-9387166c771b-config\") pod \"neutron-8654cc59b8-vwcl9\" (UID: \"17a4f926-925d-44d3-855f-9387166c771b\") " pod="openstack/neutron-8654cc59b8-vwcl9" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.817860 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzwc4\" (UniqueName: \"kubernetes.io/projected/3c796c5f-b2e9-4a42-af9c-14b03c99d213-kube-api-access-gzwc4\") pod \"dnsmasq-dns-6578955fd5-9cwmz\" (UID: \"3c796c5f-b2e9-4a42-af9c-14b03c99d213\") " pod="openstack/dnsmasq-dns-6578955fd5-9cwmz" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.821560 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgq8v\" (UniqueName: \"kubernetes.io/projected/17a4f926-925d-44d3-855f-9387166c771b-kube-api-access-pgq8v\") pod \"neutron-8654cc59b8-vwcl9\" (UID: \"17a4f926-925d-44d3-855f-9387166c771b\") " pod="openstack/neutron-8654cc59b8-vwcl9" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.942496 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-9cwmz" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.966454 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8654cc59b8-vwcl9" Jan 30 13:25:10 crc kubenswrapper[5039]: I0130 13:25:10.383538 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj" podUID="d6f736d4-9056-434a-a2c8-8ffb02d153d8" containerName="dnsmasq-dns" containerID="cri-o://28780b27d83859e0202459c655ccdd7cef8829d329ae4bf006dc41c7958f93ab" gracePeriod=10 Jan 30 13:25:10 crc kubenswrapper[5039]: I0130 13:25:10.383613 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="abcf0e62-e031-45c0-a683-24fe3912193e" containerName="cinder-api-log" containerID="cri-o://30d64591daa8198ff127dab422dcff50ec6c18c04a24f713d0fcc3e3a2130eed" gracePeriod=30 Jan 30 13:25:10 crc kubenswrapper[5039]: I0130 13:25:10.383742 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="abcf0e62-e031-45c0-a683-24fe3912193e" containerName="cinder-api" containerID="cri-o://c4a0248c0741fd321b91cf7584f4ccde3e46e592605ba5ca1d04c79d2e6a0df1" gracePeriod=30 Jan 30 13:25:10 crc kubenswrapper[5039]: W0130 13:25:10.575627 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c796c5f_b2e9_4a42_af9c_14b03c99d213.slice/crio-672a2bc9b2cbef8c4f5f9d5d720d9b3706452c9186a4c6982657beea9e0a0cbb WatchSource:0}: Error finding container 672a2bc9b2cbef8c4f5f9d5d720d9b3706452c9186a4c6982657beea9e0a0cbb: Status 404 returned error can't find the container with id 672a2bc9b2cbef8c4f5f9d5d720d9b3706452c9186a4c6982657beea9e0a0cbb Jan 30 13:25:10 crc kubenswrapper[5039]: I0130 13:25:10.580401 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-9cwmz"] Jan 30 13:25:10 crc kubenswrapper[5039]: I0130 13:25:10.583065 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 30 13:25:10 crc kubenswrapper[5039]: I0130 13:25:10.702731 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8654cc59b8-vwcl9"] Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.148855 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj" Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.232345 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6f736d4-9056-434a-a2c8-8ffb02d153d8-ovsdbserver-nb\") pod \"d6f736d4-9056-434a-a2c8-8ffb02d153d8\" (UID: \"d6f736d4-9056-434a-a2c8-8ffb02d153d8\") " Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.232513 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d6f736d4-9056-434a-a2c8-8ffb02d153d8-dns-swift-storage-0\") pod \"d6f736d4-9056-434a-a2c8-8ffb02d153d8\" (UID: \"d6f736d4-9056-434a-a2c8-8ffb02d153d8\") " Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.232667 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlfrz\" (UniqueName: \"kubernetes.io/projected/d6f736d4-9056-434a-a2c8-8ffb02d153d8-kube-api-access-rlfrz\") pod \"d6f736d4-9056-434a-a2c8-8ffb02d153d8\" (UID: \"d6f736d4-9056-434a-a2c8-8ffb02d153d8\") " Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.232712 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d6f736d4-9056-434a-a2c8-8ffb02d153d8-ovsdbserver-sb\") pod \"d6f736d4-9056-434a-a2c8-8ffb02d153d8\" (UID: \"d6f736d4-9056-434a-a2c8-8ffb02d153d8\") " Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.232729 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6f736d4-9056-434a-a2c8-8ffb02d153d8-config\") pod \"d6f736d4-9056-434a-a2c8-8ffb02d153d8\" (UID: \"d6f736d4-9056-434a-a2c8-8ffb02d153d8\") " Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.232767 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6f736d4-9056-434a-a2c8-8ffb02d153d8-dns-svc\") pod \"d6f736d4-9056-434a-a2c8-8ffb02d153d8\" (UID: \"d6f736d4-9056-434a-a2c8-8ffb02d153d8\") " Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.262443 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6f736d4-9056-434a-a2c8-8ffb02d153d8-kube-api-access-rlfrz" (OuterVolumeSpecName: "kube-api-access-rlfrz") pod "d6f736d4-9056-434a-a2c8-8ffb02d153d8" (UID: "d6f736d4-9056-434a-a2c8-8ffb02d153d8"). InnerVolumeSpecName "kube-api-access-rlfrz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.336308 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rlfrz\" (UniqueName: \"kubernetes.io/projected/d6f736d4-9056-434a-a2c8-8ffb02d153d8-kube-api-access-rlfrz\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.406882 5039 generic.go:334] "Generic (PLEG): container finished" podID="abcf0e62-e031-45c0-a683-24fe3912193e" containerID="c4a0248c0741fd321b91cf7584f4ccde3e46e592605ba5ca1d04c79d2e6a0df1" exitCode=0 Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.406916 5039 generic.go:334] "Generic (PLEG): container finished" podID="abcf0e62-e031-45c0-a683-24fe3912193e" containerID="30d64591daa8198ff127dab422dcff50ec6c18c04a24f713d0fcc3e3a2130eed" exitCode=143 Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.406981 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"abcf0e62-e031-45c0-a683-24fe3912193e","Type":"ContainerDied","Data":"c4a0248c0741fd321b91cf7584f4ccde3e46e592605ba5ca1d04c79d2e6a0df1"} Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.407023 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"abcf0e62-e031-45c0-a683-24fe3912193e","Type":"ContainerDied","Data":"30d64591daa8198ff127dab422dcff50ec6c18c04a24f713d0fcc3e3a2130eed"} Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.441467 5039 generic.go:334] "Generic (PLEG): container finished" podID="d6f736d4-9056-434a-a2c8-8ffb02d153d8" containerID="28780b27d83859e0202459c655ccdd7cef8829d329ae4bf006dc41c7958f93ab" exitCode=0 Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.441609 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj" event={"ID":"d6f736d4-9056-434a-a2c8-8ffb02d153d8","Type":"ContainerDied","Data":"28780b27d83859e0202459c655ccdd7cef8829d329ae4bf006dc41c7958f93ab"} Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.441639 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj" Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.441651 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj" event={"ID":"d6f736d4-9056-434a-a2c8-8ffb02d153d8","Type":"ContainerDied","Data":"15e7f2e415fc91af9cab4428ae10359e4333d32fa3eb657c4bbfdc076a99c38f"} Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.441671 5039 scope.go:117] "RemoveContainer" containerID="28780b27d83859e0202459c655ccdd7cef8829d329ae4bf006dc41c7958f93ab" Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.451056 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8654cc59b8-vwcl9" event={"ID":"17a4f926-925d-44d3-855f-9387166c771b","Type":"ContainerStarted","Data":"57c4193e105db2951823832bbd2267125caa477cceaaea4fe9af929c3b05c7a4"} Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.453732 5039 generic.go:334] "Generic (PLEG): container finished" podID="3c796c5f-b2e9-4a42-af9c-14b03c99d213" containerID="7eb66e170ea619f45e1f95db5174583200d625fcd2a905531b8ebbc60d5d441b" exitCode=0 Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.455482 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-9cwmz" event={"ID":"3c796c5f-b2e9-4a42-af9c-14b03c99d213","Type":"ContainerDied","Data":"7eb66e170ea619f45e1f95db5174583200d625fcd2a905531b8ebbc60d5d441b"} Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.455527 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-9cwmz" event={"ID":"3c796c5f-b2e9-4a42-af9c-14b03c99d213","Type":"ContainerStarted","Data":"672a2bc9b2cbef8c4f5f9d5d720d9b3706452c9186a4c6982657beea9e0a0cbb"} Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.659532 5039 scope.go:117] "RemoveContainer" containerID="202a215858c1bda40e1d1cf756da90f70ae47dad320eedfdac6841f4efe0a7ee" Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.628062 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6f736d4-9056-434a-a2c8-8ffb02d153d8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d6f736d4-9056-434a-a2c8-8ffb02d153d8" (UID: "d6f736d4-9056-434a-a2c8-8ffb02d153d8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.708276 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.739869 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6f736d4-9056-434a-a2c8-8ffb02d153d8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d6f736d4-9056-434a-a2c8-8ffb02d153d8" (UID: "d6f736d4-9056-434a-a2c8-8ffb02d153d8"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.742736 5039 scope.go:117] "RemoveContainer" containerID="28780b27d83859e0202459c655ccdd7cef8829d329ae4bf006dc41c7958f93ab" Jan 30 13:25:11 crc kubenswrapper[5039]: E0130 13:25:11.745478 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28780b27d83859e0202459c655ccdd7cef8829d329ae4bf006dc41c7958f93ab\": container with ID starting with 28780b27d83859e0202459c655ccdd7cef8829d329ae4bf006dc41c7958f93ab not found: ID does not exist" containerID="28780b27d83859e0202459c655ccdd7cef8829d329ae4bf006dc41c7958f93ab" Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.745522 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28780b27d83859e0202459c655ccdd7cef8829d329ae4bf006dc41c7958f93ab"} err="failed to get container status \"28780b27d83859e0202459c655ccdd7cef8829d329ae4bf006dc41c7958f93ab\": rpc error: code = NotFound desc = could not find container \"28780b27d83859e0202459c655ccdd7cef8829d329ae4bf006dc41c7958f93ab\": container with ID starting with 28780b27d83859e0202459c655ccdd7cef8829d329ae4bf006dc41c7958f93ab not found: ID does not exist" Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.745550 5039 scope.go:117] "RemoveContainer" containerID="202a215858c1bda40e1d1cf756da90f70ae47dad320eedfdac6841f4efe0a7ee" Jan 30 13:25:11 crc kubenswrapper[5039]: E0130 13:25:11.746380 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"202a215858c1bda40e1d1cf756da90f70ae47dad320eedfdac6841f4efe0a7ee\": container with ID starting with 202a215858c1bda40e1d1cf756da90f70ae47dad320eedfdac6841f4efe0a7ee not found: ID does not exist" containerID="202a215858c1bda40e1d1cf756da90f70ae47dad320eedfdac6841f4efe0a7ee" Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.746402 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"202a215858c1bda40e1d1cf756da90f70ae47dad320eedfdac6841f4efe0a7ee"} err="failed to get container status \"202a215858c1bda40e1d1cf756da90f70ae47dad320eedfdac6841f4efe0a7ee\": rpc error: code = NotFound desc = could not find container \"202a215858c1bda40e1d1cf756da90f70ae47dad320eedfdac6841f4efe0a7ee\": container with ID starting with 202a215858c1bda40e1d1cf756da90f70ae47dad320eedfdac6841f4efe0a7ee not found: ID does not exist" Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.766621 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5cgt\" (UniqueName: \"kubernetes.io/projected/abcf0e62-e031-45c0-a683-24fe3912193e-kube-api-access-h5cgt\") pod \"abcf0e62-e031-45c0-a683-24fe3912193e\" (UID: \"abcf0e62-e031-45c0-a683-24fe3912193e\") " Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.766735 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/abcf0e62-e031-45c0-a683-24fe3912193e-etc-machine-id\") pod \"abcf0e62-e031-45c0-a683-24fe3912193e\" (UID: \"abcf0e62-e031-45c0-a683-24fe3912193e\") " Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.766782 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abcf0e62-e031-45c0-a683-24fe3912193e-combined-ca-bundle\") pod 
\"abcf0e62-e031-45c0-a683-24fe3912193e\" (UID: \"abcf0e62-e031-45c0-a683-24fe3912193e\") " Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.766811 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abcf0e62-e031-45c0-a683-24fe3912193e-config-data\") pod \"abcf0e62-e031-45c0-a683-24fe3912193e\" (UID: \"abcf0e62-e031-45c0-a683-24fe3912193e\") " Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.766889 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abcf0e62-e031-45c0-a683-24fe3912193e-logs\") pod \"abcf0e62-e031-45c0-a683-24fe3912193e\" (UID: \"abcf0e62-e031-45c0-a683-24fe3912193e\") " Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.766954 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/abcf0e62-e031-45c0-a683-24fe3912193e-config-data-custom\") pod \"abcf0e62-e031-45c0-a683-24fe3912193e\" (UID: \"abcf0e62-e031-45c0-a683-24fe3912193e\") " Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.767042 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/abcf0e62-e031-45c0-a683-24fe3912193e-scripts\") pod \"abcf0e62-e031-45c0-a683-24fe3912193e\" (UID: \"abcf0e62-e031-45c0-a683-24fe3912193e\") " Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.767505 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d6f736d4-9056-434a-a2c8-8ffb02d153d8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.767531 5039 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6f736d4-9056-434a-a2c8-8ffb02d153d8-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.772345 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abcf0e62-e031-45c0-a683-24fe3912193e-logs" (OuterVolumeSpecName: "logs") pod "abcf0e62-e031-45c0-a683-24fe3912193e" (UID: "abcf0e62-e031-45c0-a683-24fe3912193e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.775206 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abcf0e62-e031-45c0-a683-24fe3912193e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "abcf0e62-e031-45c0-a683-24fe3912193e" (UID: "abcf0e62-e031-45c0-a683-24fe3912193e"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.781218 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abcf0e62-e031-45c0-a683-24fe3912193e-kube-api-access-h5cgt" (OuterVolumeSpecName: "kube-api-access-h5cgt") pod "abcf0e62-e031-45c0-a683-24fe3912193e" (UID: "abcf0e62-e031-45c0-a683-24fe3912193e"). InnerVolumeSpecName "kube-api-access-h5cgt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.784418 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abcf0e62-e031-45c0-a683-24fe3912193e-scripts" (OuterVolumeSpecName: "scripts") pod "abcf0e62-e031-45c0-a683-24fe3912193e" (UID: "abcf0e62-e031-45c0-a683-24fe3912193e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.788169 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abcf0e62-e031-45c0-a683-24fe3912193e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "abcf0e62-e031-45c0-a683-24fe3912193e" (UID: "abcf0e62-e031-45c0-a683-24fe3912193e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.806890 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6f736d4-9056-434a-a2c8-8ffb02d153d8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d6f736d4-9056-434a-a2c8-8ffb02d153d8" (UID: "d6f736d4-9056-434a-a2c8-8ffb02d153d8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.853130 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abcf0e62-e031-45c0-a683-24fe3912193e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "abcf0e62-e031-45c0-a683-24fe3912193e" (UID: "abcf0e62-e031-45c0-a683-24fe3912193e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.862545 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6f736d4-9056-434a-a2c8-8ffb02d153d8-config" (OuterVolumeSpecName: "config") pod "d6f736d4-9056-434a-a2c8-8ffb02d153d8" (UID: "d6f736d4-9056-434a-a2c8-8ffb02d153d8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.871468 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abcf0e62-e031-45c0-a683-24fe3912193e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.871495 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6f736d4-9056-434a-a2c8-8ffb02d153d8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.871503 5039 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abcf0e62-e031-45c0-a683-24fe3912193e-logs\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.871516 5039 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/abcf0e62-e031-45c0-a683-24fe3912193e-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.871527 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/abcf0e62-e031-45c0-a683-24fe3912193e-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.871538 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6f736d4-9056-434a-a2c8-8ffb02d153d8-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.871548 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5cgt\" (UniqueName: \"kubernetes.io/projected/abcf0e62-e031-45c0-a683-24fe3912193e-kube-api-access-h5cgt\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.871560 5039 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/abcf0e62-e031-45c0-a683-24fe3912193e-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.871760 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6f736d4-9056-434a-a2c8-8ffb02d153d8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d6f736d4-9056-434a-a2c8-8ffb02d153d8" (UID: "d6f736d4-9056-434a-a2c8-8ffb02d153d8"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.929160 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abcf0e62-e031-45c0-a683-24fe3912193e-config-data" (OuterVolumeSpecName: "config-data") pod "abcf0e62-e031-45c0-a683-24fe3912193e" (UID: "abcf0e62-e031-45c0-a683-24fe3912193e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.973313 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abcf0e62-e031-45c0-a683-24fe3912193e-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.973356 5039 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d6f736d4-9056-434a-a2c8-8ffb02d153d8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.078525 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj"] Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.091067 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj"] Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.108375 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6f736d4-9056-434a-a2c8-8ffb02d153d8" path="/var/lib/kubelet/pods/d6f736d4-9056-434a-a2c8-8ffb02d153d8/volumes" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.464548 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"abcf0e62-e031-45c0-a683-24fe3912193e","Type":"ContainerDied","Data":"b4e9e6421a4e6b2fcfcd571f9ce84ba9c1ebc52a1febaec18760f578a76730b6"} Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.464569 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.464863 5039 scope.go:117] "RemoveContainer" containerID="c4a0248c0741fd321b91cf7584f4ccde3e46e592605ba5ca1d04c79d2e6a0df1" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.468095 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8654cc59b8-vwcl9" event={"ID":"17a4f926-925d-44d3-855f-9387166c771b","Type":"ContainerStarted","Data":"a3a0a1f75a6f4dcbb52afd8df7edb65031a1cf257acc4eec70a696fd62ca526e"} Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.468125 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8654cc59b8-vwcl9" event={"ID":"17a4f926-925d-44d3-855f-9387166c771b","Type":"ContainerStarted","Data":"edaefd1a89887279dad28e1db61904595b192742b216d6f7309a9619e0f8dedd"} Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.468376 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-8654cc59b8-vwcl9" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.469732 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-9cwmz" event={"ID":"3c796c5f-b2e9-4a42-af9c-14b03c99d213","Type":"ContainerStarted","Data":"c3b580fe185414431912b163050e32f0ae4fa5e89bf828ec6117465fafa71189"} Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.470493 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-9cwmz" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.488264 5039 scope.go:117] "RemoveContainer" containerID="30d64591daa8198ff127dab422dcff50ec6c18c04a24f713d0fcc3e3a2130eed" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.502721 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.506298 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/cinder-api-0"] Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.528852 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 30 13:25:12 crc kubenswrapper[5039]: E0130 13:25:12.529237 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6f736d4-9056-434a-a2c8-8ffb02d153d8" containerName="dnsmasq-dns" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.529253 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6f736d4-9056-434a-a2c8-8ffb02d153d8" containerName="dnsmasq-dns" Jan 30 13:25:12 crc kubenswrapper[5039]: E0130 13:25:12.529283 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abcf0e62-e031-45c0-a683-24fe3912193e" containerName="cinder-api" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.529289 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="abcf0e62-e031-45c0-a683-24fe3912193e" containerName="cinder-api" Jan 30 13:25:12 crc kubenswrapper[5039]: E0130 13:25:12.529300 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abcf0e62-e031-45c0-a683-24fe3912193e" containerName="cinder-api-log" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.529306 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="abcf0e62-e031-45c0-a683-24fe3912193e" containerName="cinder-api-log" Jan 30 13:25:12 crc kubenswrapper[5039]: E0130 13:25:12.529320 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6f736d4-9056-434a-a2c8-8ffb02d153d8" containerName="init" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.529326 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6f736d4-9056-434a-a2c8-8ffb02d153d8" containerName="init" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.529478 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="abcf0e62-e031-45c0-a683-24fe3912193e" containerName="cinder-api-log" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.529504 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6f736d4-9056-434a-a2c8-8ffb02d153d8" containerName="dnsmasq-dns" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.529515 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="abcf0e62-e031-45c0-a683-24fe3912193e" containerName="cinder-api" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.530394 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.534268 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.534420 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.534483 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.536040 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-8654cc59b8-vwcl9" podStartSLOduration=3.535997255 podStartE2EDuration="3.535997255s" podCreationTimestamp="2026-01-30 13:25:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:25:12.50742451 +0000 UTC m=+1277.168105737" watchObservedRunningTime="2026-01-30 13:25:12.535997255 +0000 UTC m=+1277.196678482" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.548683 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-9cwmz" podStartSLOduration=3.548666924 podStartE2EDuration="3.548666924s" podCreationTimestamp="2026-01-30 13:25:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:25:12.53432309 +0000 UTC m=+1277.195004317" watchObservedRunningTime="2026-01-30 13:25:12.548666924 +0000 UTC m=+1277.209348151" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.550183 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.582962 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-config-data-custom\") pod \"cinder-api-0\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " pod="openstack/cinder-api-0" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.583058 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c29afae4-9445-4472-b93b-5a111a886b9a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " pod="openstack/cinder-api-0" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.583130 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-public-tls-certs\") pod \"cinder-api-0\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " pod="openstack/cinder-api-0" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.583149 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " pod="openstack/cinder-api-0" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.583204 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-scripts\") pod \"cinder-api-0\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " pod="openstack/cinder-api-0" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.583294 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c29afae4-9445-4472-b93b-5a111a886b9a-logs\") pod \"cinder-api-0\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " pod="openstack/cinder-api-0" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.583317 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-config-data\") pod \"cinder-api-0\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " pod="openstack/cinder-api-0" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.583342 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptj88\" (UniqueName: \"kubernetes.io/projected/c29afae4-9445-4472-b93b-5a111a886b9a-kube-api-access-ptj88\") pod \"cinder-api-0\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " pod="openstack/cinder-api-0" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.583357 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " pod="openstack/cinder-api-0" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.684753 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-config-data-custom\") pod \"cinder-api-0\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " pod="openstack/cinder-api-0" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.685065 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c29afae4-9445-4472-b93b-5a111a886b9a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " pod="openstack/cinder-api-0" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.685108 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-public-tls-certs\") pod \"cinder-api-0\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " pod="openstack/cinder-api-0" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.685128 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " pod="openstack/cinder-api-0" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.685149 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-scripts\") pod \"cinder-api-0\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " pod="openstack/cinder-api-0" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.685205 5039 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c29afae4-9445-4472-b93b-5a111a886b9a-logs\") pod \"cinder-api-0\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " pod="openstack/cinder-api-0" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.685196 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c29afae4-9445-4472-b93b-5a111a886b9a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " pod="openstack/cinder-api-0" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.685234 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-config-data\") pod \"cinder-api-0\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " pod="openstack/cinder-api-0" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.685301 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptj88\" (UniqueName: \"kubernetes.io/projected/c29afae4-9445-4472-b93b-5a111a886b9a-kube-api-access-ptj88\") pod \"cinder-api-0\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " pod="openstack/cinder-api-0" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.685328 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " pod="openstack/cinder-api-0" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.685881 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c29afae4-9445-4472-b93b-5a111a886b9a-logs\") pod \"cinder-api-0\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " pod="openstack/cinder-api-0" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.693466 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-public-tls-certs\") pod \"cinder-api-0\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " pod="openstack/cinder-api-0" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.694080 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " pod="openstack/cinder-api-0" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.696313 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " pod="openstack/cinder-api-0" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.697840 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-config-data\") pod \"cinder-api-0\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " pod="openstack/cinder-api-0" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.721935 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-config-data-custom\") pod \"cinder-api-0\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " pod="openstack/cinder-api-0" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.722562 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-scripts\") pod \"cinder-api-0\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " pod="openstack/cinder-api-0" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.723840 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptj88\" (UniqueName: \"kubernetes.io/projected/c29afae4-9445-4472-b93b-5a111a886b9a-kube-api-access-ptj88\") pod \"cinder-api-0\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " pod="openstack/cinder-api-0" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.863514 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 13:25:13 crc kubenswrapper[5039]: I0130 13:25:13.035548 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-554596898b-g5nlm" Jan 30 13:25:13 crc kubenswrapper[5039]: I0130 13:25:13.047777 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-554596898b-g5nlm" Jan 30 13:25:13 crc kubenswrapper[5039]: I0130 13:25:13.365991 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 13:25:13 crc kubenswrapper[5039]: I0130 13:25:13.494777 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c29afae4-9445-4472-b93b-5a111a886b9a","Type":"ContainerStarted","Data":"690883ae8a994ffd96caf77a50054a169cab6a25a2f983c92bfa6a0937104bb5"} Jan 30 13:25:13 crc kubenswrapper[5039]: I0130 13:25:13.739071 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-75df786d6f-7k65j"] Jan 30 13:25:13 crc kubenswrapper[5039]: I0130 13:25:13.741576 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-75df786d6f-7k65j" Jan 30 13:25:13 crc kubenswrapper[5039]: I0130 13:25:13.744443 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 30 13:25:13 crc kubenswrapper[5039]: I0130 13:25:13.750375 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 30 13:25:13 crc kubenswrapper[5039]: I0130 13:25:13.786066 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-75df786d6f-7k65j"] Jan 30 13:25:13 crc kubenswrapper[5039]: I0130 13:25:13.828076 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trxg4\" (UniqueName: \"kubernetes.io/projected/bc1469b7-cba0-47a5-b2cb-02e374f749da-kube-api-access-trxg4\") pod \"neutron-75df786d6f-7k65j\" (UID: \"bc1469b7-cba0-47a5-b2cb-02e374f749da\") " pod="openstack/neutron-75df786d6f-7k65j" Jan 30 13:25:13 crc kubenswrapper[5039]: I0130 13:25:13.828160 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-internal-tls-certs\") pod \"neutron-75df786d6f-7k65j\" (UID: \"bc1469b7-cba0-47a5-b2cb-02e374f749da\") " pod="openstack/neutron-75df786d6f-7k65j" Jan 30 13:25:13 crc kubenswrapper[5039]: I0130 13:25:13.828186 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-public-tls-certs\") pod \"neutron-75df786d6f-7k65j\" (UID: \"bc1469b7-cba0-47a5-b2cb-02e374f749da\") " pod="openstack/neutron-75df786d6f-7k65j" Jan 30 13:25:13 crc kubenswrapper[5039]: I0130 13:25:13.828222 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-ovndb-tls-certs\") pod \"neutron-75df786d6f-7k65j\" (UID: \"bc1469b7-cba0-47a5-b2cb-02e374f749da\") " pod="openstack/neutron-75df786d6f-7k65j" Jan 30 13:25:13 crc kubenswrapper[5039]: I0130 13:25:13.828349 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-httpd-config\") pod \"neutron-75df786d6f-7k65j\" (UID: \"bc1469b7-cba0-47a5-b2cb-02e374f749da\") " pod="openstack/neutron-75df786d6f-7k65j" Jan 30 13:25:13 crc kubenswrapper[5039]: I0130 13:25:13.828518 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-combined-ca-bundle\") pod \"neutron-75df786d6f-7k65j\" (UID: \"bc1469b7-cba0-47a5-b2cb-02e374f749da\") " pod="openstack/neutron-75df786d6f-7k65j" Jan 30 13:25:13 crc kubenswrapper[5039]: I0130 13:25:13.828590 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-config\") pod \"neutron-75df786d6f-7k65j\" (UID: \"bc1469b7-cba0-47a5-b2cb-02e374f749da\") " pod="openstack/neutron-75df786d6f-7k65j" Jan 30 13:25:13 crc kubenswrapper[5039]: I0130 13:25:13.930363 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trxg4\" (UniqueName: 
\"kubernetes.io/projected/bc1469b7-cba0-47a5-b2cb-02e374f749da-kube-api-access-trxg4\") pod \"neutron-75df786d6f-7k65j\" (UID: \"bc1469b7-cba0-47a5-b2cb-02e374f749da\") " pod="openstack/neutron-75df786d6f-7k65j" Jan 30 13:25:13 crc kubenswrapper[5039]: I0130 13:25:13.930447 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-internal-tls-certs\") pod \"neutron-75df786d6f-7k65j\" (UID: \"bc1469b7-cba0-47a5-b2cb-02e374f749da\") " pod="openstack/neutron-75df786d6f-7k65j" Jan 30 13:25:13 crc kubenswrapper[5039]: I0130 13:25:13.930476 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-public-tls-certs\") pod \"neutron-75df786d6f-7k65j\" (UID: \"bc1469b7-cba0-47a5-b2cb-02e374f749da\") " pod="openstack/neutron-75df786d6f-7k65j" Jan 30 13:25:13 crc kubenswrapper[5039]: I0130 13:25:13.930512 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-ovndb-tls-certs\") pod \"neutron-75df786d6f-7k65j\" (UID: \"bc1469b7-cba0-47a5-b2cb-02e374f749da\") " pod="openstack/neutron-75df786d6f-7k65j" Jan 30 13:25:13 crc kubenswrapper[5039]: I0130 13:25:13.930531 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-httpd-config\") pod \"neutron-75df786d6f-7k65j\" (UID: \"bc1469b7-cba0-47a5-b2cb-02e374f749da\") " pod="openstack/neutron-75df786d6f-7k65j" Jan 30 13:25:13 crc kubenswrapper[5039]: I0130 13:25:13.930565 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-combined-ca-bundle\") pod \"neutron-75df786d6f-7k65j\" (UID: \"bc1469b7-cba0-47a5-b2cb-02e374f749da\") " pod="openstack/neutron-75df786d6f-7k65j" Jan 30 13:25:13 crc kubenswrapper[5039]: I0130 13:25:13.930589 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-config\") pod \"neutron-75df786d6f-7k65j\" (UID: \"bc1469b7-cba0-47a5-b2cb-02e374f749da\") " pod="openstack/neutron-75df786d6f-7k65j" Jan 30 13:25:13 crc kubenswrapper[5039]: I0130 13:25:13.964138 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trxg4\" (UniqueName: \"kubernetes.io/projected/bc1469b7-cba0-47a5-b2cb-02e374f749da-kube-api-access-trxg4\") pod \"neutron-75df786d6f-7k65j\" (UID: \"bc1469b7-cba0-47a5-b2cb-02e374f749da\") " pod="openstack/neutron-75df786d6f-7k65j" Jan 30 13:25:13 crc kubenswrapper[5039]: I0130 13:25:13.967996 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-public-tls-certs\") pod \"neutron-75df786d6f-7k65j\" (UID: \"bc1469b7-cba0-47a5-b2cb-02e374f749da\") " pod="openstack/neutron-75df786d6f-7k65j" Jan 30 13:25:13 crc kubenswrapper[5039]: I0130 13:25:13.969681 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-combined-ca-bundle\") pod \"neutron-75df786d6f-7k65j\" (UID: 
\"bc1469b7-cba0-47a5-b2cb-02e374f749da\") " pod="openstack/neutron-75df786d6f-7k65j" Jan 30 13:25:13 crc kubenswrapper[5039]: I0130 13:25:13.969992 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-httpd-config\") pod \"neutron-75df786d6f-7k65j\" (UID: \"bc1469b7-cba0-47a5-b2cb-02e374f749da\") " pod="openstack/neutron-75df786d6f-7k65j" Jan 30 13:25:13 crc kubenswrapper[5039]: I0130 13:25:13.971664 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-config\") pod \"neutron-75df786d6f-7k65j\" (UID: \"bc1469b7-cba0-47a5-b2cb-02e374f749da\") " pod="openstack/neutron-75df786d6f-7k65j" Jan 30 13:25:13 crc kubenswrapper[5039]: I0130 13:25:13.972126 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-ovndb-tls-certs\") pod \"neutron-75df786d6f-7k65j\" (UID: \"bc1469b7-cba0-47a5-b2cb-02e374f749da\") " pod="openstack/neutron-75df786d6f-7k65j" Jan 30 13:25:13 crc kubenswrapper[5039]: I0130 13:25:13.972243 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-internal-tls-certs\") pod \"neutron-75df786d6f-7k65j\" (UID: \"bc1469b7-cba0-47a5-b2cb-02e374f749da\") " pod="openstack/neutron-75df786d6f-7k65j" Jan 30 13:25:14 crc kubenswrapper[5039]: I0130 13:25:14.082901 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-75df786d6f-7k65j" Jan 30 13:25:14 crc kubenswrapper[5039]: I0130 13:25:14.107667 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abcf0e62-e031-45c0-a683-24fe3912193e" path="/var/lib/kubelet/pods/abcf0e62-e031-45c0-a683-24fe3912193e/volumes" Jan 30 13:25:14 crc kubenswrapper[5039]: I0130 13:25:14.519107 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c29afae4-9445-4472-b93b-5a111a886b9a","Type":"ContainerStarted","Data":"cbd478b60e8a62c03000eca9bac6af85c631c4b4d8428ddc09f53baeaa9ca2e9"} Jan 30 13:25:14 crc kubenswrapper[5039]: I0130 13:25:14.776248 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-75df786d6f-7k65j"] Jan 30 13:25:15 crc kubenswrapper[5039]: I0130 13:25:15.471116 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-d68bccdc4-krd48" Jan 30 13:25:15 crc kubenswrapper[5039]: I0130 13:25:15.530793 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75df786d6f-7k65j" event={"ID":"bc1469b7-cba0-47a5-b2cb-02e374f749da","Type":"ContainerStarted","Data":"a89bb4f19be7f7518ba29b131abd27b114102b0ebb9ed30752ce73702acdfcf2"} Jan 30 13:25:15 crc kubenswrapper[5039]: I0130 13:25:15.530833 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75df786d6f-7k65j" event={"ID":"bc1469b7-cba0-47a5-b2cb-02e374f749da","Type":"ContainerStarted","Data":"9d161df965ec21065eefbec6b812cfd89de26b4b92a91f220eaf50e509cc7674"} Jan 30 13:25:15 crc kubenswrapper[5039]: I0130 13:25:15.530844 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75df786d6f-7k65j" 
event={"ID":"bc1469b7-cba0-47a5-b2cb-02e374f749da","Type":"ContainerStarted","Data":"68ca238552f48a2278287e46aa748e56a5416468365b8a491b7c39c3f968cdf3"} Jan 30 13:25:15 crc kubenswrapper[5039]: I0130 13:25:15.530862 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-75df786d6f-7k65j" Jan 30 13:25:15 crc kubenswrapper[5039]: I0130 13:25:15.532763 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c29afae4-9445-4472-b93b-5a111a886b9a","Type":"ContainerStarted","Data":"46c7c1dd8a4c8df99e1dd7edf28c41b4137267eeafa3248a2c0d8c73a663531a"} Jan 30 13:25:15 crc kubenswrapper[5039]: I0130 13:25:15.533645 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 30 13:25:15 crc kubenswrapper[5039]: I0130 13:25:15.574181 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-d68bccdc4-krd48" Jan 30 13:25:15 crc kubenswrapper[5039]: I0130 13:25:15.588412 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.5883950049999997 podStartE2EDuration="3.588395005s" podCreationTimestamp="2026-01-30 13:25:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:25:15.580993457 +0000 UTC m=+1280.241674684" watchObservedRunningTime="2026-01-30 13:25:15.588395005 +0000 UTC m=+1280.249076232" Jan 30 13:25:15 crc kubenswrapper[5039]: I0130 13:25:15.592328 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-75df786d6f-7k65j" podStartSLOduration=2.5923122100000002 podStartE2EDuration="2.59231221s" podCreationTimestamp="2026-01-30 13:25:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:25:15.561628908 +0000 UTC m=+1280.222310155" watchObservedRunningTime="2026-01-30 13:25:15.59231221 +0000 UTC m=+1280.252993437" Jan 30 13:25:15 crc kubenswrapper[5039]: I0130 13:25:15.648253 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-554596898b-g5nlm"] Jan 30 13:25:15 crc kubenswrapper[5039]: I0130 13:25:15.648472 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-554596898b-g5nlm" podUID="7dddd2ab-85b5-4431-a111-dbb5ebff91d9" containerName="barbican-api-log" containerID="cri-o://fac484bba92b5b815bc7ba7abe75aa053f3d216781df9548a906cf83ec2532a9" gracePeriod=30 Jan 30 13:25:15 crc kubenswrapper[5039]: I0130 13:25:15.648584 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-554596898b-g5nlm" podUID="7dddd2ab-85b5-4431-a111-dbb5ebff91d9" containerName="barbican-api" containerID="cri-o://29be425c5367e4a4448b596ea2961d9dbe1edefed567e7098a16dcd15be0004e" gracePeriod=30 Jan 30 13:25:15 crc kubenswrapper[5039]: I0130 13:25:15.828129 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 30 13:25:15 crc kubenswrapper[5039]: I0130 13:25:15.879471 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 13:25:16 crc kubenswrapper[5039]: I0130 13:25:16.545684 5039 generic.go:334] "Generic (PLEG): container finished" podID="7dddd2ab-85b5-4431-a111-dbb5ebff91d9" 
containerID="fac484bba92b5b815bc7ba7abe75aa053f3d216781df9548a906cf83ec2532a9" exitCode=143 Jan 30 13:25:16 crc kubenswrapper[5039]: I0130 13:25:16.546188 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="77b835a6-4f17-4e1c-a3cc-847f89116483" containerName="cinder-scheduler" containerID="cri-o://48c68619a50ada8cc1df54d8cada3034bd1087cc54fad3d832f8743974af62f9" gracePeriod=30 Jan 30 13:25:16 crc kubenswrapper[5039]: I0130 13:25:16.546494 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-554596898b-g5nlm" event={"ID":"7dddd2ab-85b5-4431-a111-dbb5ebff91d9","Type":"ContainerDied","Data":"fac484bba92b5b815bc7ba7abe75aa053f3d216781df9548a906cf83ec2532a9"} Jan 30 13:25:16 crc kubenswrapper[5039]: I0130 13:25:16.546811 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="77b835a6-4f17-4e1c-a3cc-847f89116483" containerName="probe" containerID="cri-o://d879620bdd58ffdce74d7144f52c7477018b7f2d590ea0375fc4e1924d6fd912" gracePeriod=30 Jan 30 13:25:17 crc kubenswrapper[5039]: I0130 13:25:17.561906 5039 generic.go:334] "Generic (PLEG): container finished" podID="77b835a6-4f17-4e1c-a3cc-847f89116483" containerID="d879620bdd58ffdce74d7144f52c7477018b7f2d590ea0375fc4e1924d6fd912" exitCode=0 Jan 30 13:25:17 crc kubenswrapper[5039]: I0130 13:25:17.561991 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"77b835a6-4f17-4e1c-a3cc-847f89116483","Type":"ContainerDied","Data":"d879620bdd58ffdce74d7144f52c7477018b7f2d590ea0375fc4e1924d6fd912"} Jan 30 13:25:18 crc kubenswrapper[5039]: I0130 13:25:18.809123 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-554596898b-g5nlm" podUID="7dddd2ab-85b5-4431-a111-dbb5ebff91d9" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.155:9311/healthcheck\": read tcp 10.217.0.2:33548->10.217.0.155:9311: read: connection reset by peer" Jan 30 13:25:18 crc kubenswrapper[5039]: I0130 13:25:18.809153 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-554596898b-g5nlm" podUID="7dddd2ab-85b5-4431-a111-dbb5ebff91d9" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.155:9311/healthcheck\": read tcp 10.217.0.2:33546->10.217.0.155:9311: read: connection reset by peer" Jan 30 13:25:19 crc kubenswrapper[5039]: I0130 13:25:19.242083 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-554596898b-g5nlm" Jan 30 13:25:19 crc kubenswrapper[5039]: I0130 13:25:19.359061 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7dddd2ab-85b5-4431-a111-dbb5ebff91d9-logs\") pod \"7dddd2ab-85b5-4431-a111-dbb5ebff91d9\" (UID: \"7dddd2ab-85b5-4431-a111-dbb5ebff91d9\") " Jan 30 13:25:19 crc kubenswrapper[5039]: I0130 13:25:19.359185 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxf85\" (UniqueName: \"kubernetes.io/projected/7dddd2ab-85b5-4431-a111-dbb5ebff91d9-kube-api-access-lxf85\") pod \"7dddd2ab-85b5-4431-a111-dbb5ebff91d9\" (UID: \"7dddd2ab-85b5-4431-a111-dbb5ebff91d9\") " Jan 30 13:25:19 crc kubenswrapper[5039]: I0130 13:25:19.359271 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dddd2ab-85b5-4431-a111-dbb5ebff91d9-combined-ca-bundle\") pod \"7dddd2ab-85b5-4431-a111-dbb5ebff91d9\" (UID: \"7dddd2ab-85b5-4431-a111-dbb5ebff91d9\") " Jan 30 13:25:19 crc kubenswrapper[5039]: I0130 13:25:19.359535 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7dddd2ab-85b5-4431-a111-dbb5ebff91d9-logs" (OuterVolumeSpecName: "logs") pod "7dddd2ab-85b5-4431-a111-dbb5ebff91d9" (UID: "7dddd2ab-85b5-4431-a111-dbb5ebff91d9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:25:19 crc kubenswrapper[5039]: I0130 13:25:19.360517 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7dddd2ab-85b5-4431-a111-dbb5ebff91d9-config-data-custom\") pod \"7dddd2ab-85b5-4431-a111-dbb5ebff91d9\" (UID: \"7dddd2ab-85b5-4431-a111-dbb5ebff91d9\") " Jan 30 13:25:19 crc kubenswrapper[5039]: I0130 13:25:19.360577 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dddd2ab-85b5-4431-a111-dbb5ebff91d9-config-data\") pod \"7dddd2ab-85b5-4431-a111-dbb5ebff91d9\" (UID: \"7dddd2ab-85b5-4431-a111-dbb5ebff91d9\") " Jan 30 13:25:19 crc kubenswrapper[5039]: I0130 13:25:19.361107 5039 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7dddd2ab-85b5-4431-a111-dbb5ebff91d9-logs\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:19 crc kubenswrapper[5039]: I0130 13:25:19.366589 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dddd2ab-85b5-4431-a111-dbb5ebff91d9-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7dddd2ab-85b5-4431-a111-dbb5ebff91d9" (UID: "7dddd2ab-85b5-4431-a111-dbb5ebff91d9"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:19 crc kubenswrapper[5039]: I0130 13:25:19.376063 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dddd2ab-85b5-4431-a111-dbb5ebff91d9-kube-api-access-lxf85" (OuterVolumeSpecName: "kube-api-access-lxf85") pod "7dddd2ab-85b5-4431-a111-dbb5ebff91d9" (UID: "7dddd2ab-85b5-4431-a111-dbb5ebff91d9"). InnerVolumeSpecName "kube-api-access-lxf85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:25:19 crc kubenswrapper[5039]: I0130 13:25:19.396158 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dddd2ab-85b5-4431-a111-dbb5ebff91d9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7dddd2ab-85b5-4431-a111-dbb5ebff91d9" (UID: "7dddd2ab-85b5-4431-a111-dbb5ebff91d9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:19 crc kubenswrapper[5039]: I0130 13:25:19.414392 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dddd2ab-85b5-4431-a111-dbb5ebff91d9-config-data" (OuterVolumeSpecName: "config-data") pod "7dddd2ab-85b5-4431-a111-dbb5ebff91d9" (UID: "7dddd2ab-85b5-4431-a111-dbb5ebff91d9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:19 crc kubenswrapper[5039]: I0130 13:25:19.463053 5039 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7dddd2ab-85b5-4431-a111-dbb5ebff91d9-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:19 crc kubenswrapper[5039]: I0130 13:25:19.463103 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dddd2ab-85b5-4431-a111-dbb5ebff91d9-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:19 crc kubenswrapper[5039]: I0130 13:25:19.463115 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxf85\" (UniqueName: \"kubernetes.io/projected/7dddd2ab-85b5-4431-a111-dbb5ebff91d9-kube-api-access-lxf85\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:19 crc kubenswrapper[5039]: I0130 13:25:19.463127 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dddd2ab-85b5-4431-a111-dbb5ebff91d9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:19 crc kubenswrapper[5039]: I0130 13:25:19.583122 5039 generic.go:334] "Generic (PLEG): container finished" podID="7dddd2ab-85b5-4431-a111-dbb5ebff91d9" containerID="29be425c5367e4a4448b596ea2961d9dbe1edefed567e7098a16dcd15be0004e" exitCode=0 Jan 30 13:25:19 crc kubenswrapper[5039]: I0130 13:25:19.583176 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-554596898b-g5nlm" event={"ID":"7dddd2ab-85b5-4431-a111-dbb5ebff91d9","Type":"ContainerDied","Data":"29be425c5367e4a4448b596ea2961d9dbe1edefed567e7098a16dcd15be0004e"} Jan 30 13:25:19 crc kubenswrapper[5039]: I0130 13:25:19.583187 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-554596898b-g5nlm" Jan 30 13:25:19 crc kubenswrapper[5039]: I0130 13:25:19.583208 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-554596898b-g5nlm" event={"ID":"7dddd2ab-85b5-4431-a111-dbb5ebff91d9","Type":"ContainerDied","Data":"74813a49ecb4fa38f422fbb99baf7d3b3305ab3829ed82acf91a86c0d3c6241c"} Jan 30 13:25:19 crc kubenswrapper[5039]: I0130 13:25:19.583230 5039 scope.go:117] "RemoveContainer" containerID="29be425c5367e4a4448b596ea2961d9dbe1edefed567e7098a16dcd15be0004e" Jan 30 13:25:19 crc kubenswrapper[5039]: I0130 13:25:19.637730 5039 scope.go:117] "RemoveContainer" containerID="fac484bba92b5b815bc7ba7abe75aa053f3d216781df9548a906cf83ec2532a9" Jan 30 13:25:19 crc kubenswrapper[5039]: I0130 13:25:19.640922 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-554596898b-g5nlm"] Jan 30 13:25:19 crc kubenswrapper[5039]: I0130 13:25:19.649050 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-554596898b-g5nlm"] Jan 30 13:25:19 crc kubenswrapper[5039]: I0130 13:25:19.657075 5039 scope.go:117] "RemoveContainer" containerID="29be425c5367e4a4448b596ea2961d9dbe1edefed567e7098a16dcd15be0004e" Jan 30 13:25:19 crc kubenswrapper[5039]: E0130 13:25:19.657547 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29be425c5367e4a4448b596ea2961d9dbe1edefed567e7098a16dcd15be0004e\": container with ID starting with 29be425c5367e4a4448b596ea2961d9dbe1edefed567e7098a16dcd15be0004e not found: ID does not exist" containerID="29be425c5367e4a4448b596ea2961d9dbe1edefed567e7098a16dcd15be0004e" Jan 30 13:25:19 crc kubenswrapper[5039]: I0130 13:25:19.657579 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29be425c5367e4a4448b596ea2961d9dbe1edefed567e7098a16dcd15be0004e"} err="failed to get container status \"29be425c5367e4a4448b596ea2961d9dbe1edefed567e7098a16dcd15be0004e\": rpc error: code = NotFound desc = could not find container \"29be425c5367e4a4448b596ea2961d9dbe1edefed567e7098a16dcd15be0004e\": container with ID starting with 29be425c5367e4a4448b596ea2961d9dbe1edefed567e7098a16dcd15be0004e not found: ID does not exist" Jan 30 13:25:19 crc kubenswrapper[5039]: I0130 13:25:19.657608 5039 scope.go:117] "RemoveContainer" containerID="fac484bba92b5b815bc7ba7abe75aa053f3d216781df9548a906cf83ec2532a9" Jan 30 13:25:19 crc kubenswrapper[5039]: E0130 13:25:19.658032 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fac484bba92b5b815bc7ba7abe75aa053f3d216781df9548a906cf83ec2532a9\": container with ID starting with fac484bba92b5b815bc7ba7abe75aa053f3d216781df9548a906cf83ec2532a9 not found: ID does not exist" containerID="fac484bba92b5b815bc7ba7abe75aa053f3d216781df9548a906cf83ec2532a9" Jan 30 13:25:19 crc kubenswrapper[5039]: I0130 13:25:19.658075 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fac484bba92b5b815bc7ba7abe75aa053f3d216781df9548a906cf83ec2532a9"} err="failed to get container status \"fac484bba92b5b815bc7ba7abe75aa053f3d216781df9548a906cf83ec2532a9\": rpc error: code = NotFound desc = could not find container \"fac484bba92b5b815bc7ba7abe75aa053f3d216781df9548a906cf83ec2532a9\": container with ID starting with fac484bba92b5b815bc7ba7abe75aa053f3d216781df9548a906cf83ec2532a9 not found: ID does not exist" Jan 30 
13:25:19 crc kubenswrapper[5039]: I0130 13:25:19.880680 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7467d89c49-kfwss" Jan 30 13:25:19 crc kubenswrapper[5039]: I0130 13:25:19.944277 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6578955fd5-9cwmz" Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.008766 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-hk5zc"] Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.009084 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56df8fb6b7-hk5zc" podUID="82817f40-cc0c-40f3-b620-0db4e6db8bd6" containerName="dnsmasq-dns" containerID="cri-o://2c0c2c9d314f9104b3729e9a4030c23a380582df4ca44aabf55bf70d7cba6fb2" gracePeriod=10 Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.104588 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7dddd2ab-85b5-4431-a111-dbb5ebff91d9" path="/var/lib/kubelet/pods/7dddd2ab-85b5-4431-a111-dbb5ebff91d9/volumes" Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.595805 5039 generic.go:334] "Generic (PLEG): container finished" podID="82817f40-cc0c-40f3-b620-0db4e6db8bd6" containerID="2c0c2c9d314f9104b3729e9a4030c23a380582df4ca44aabf55bf70d7cba6fb2" exitCode=0 Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.596059 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-hk5zc" event={"ID":"82817f40-cc0c-40f3-b620-0db4e6db8bd6","Type":"ContainerDied","Data":"2c0c2c9d314f9104b3729e9a4030c23a380582df4ca44aabf55bf70d7cba6fb2"} Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.596176 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-hk5zc" event={"ID":"82817f40-cc0c-40f3-b620-0db4e6db8bd6","Type":"ContainerDied","Data":"1cf9a181eb2c18263402fb13ac1d2e76af7c9fd421e9e961fce515cde88b22df"} Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.596198 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1cf9a181eb2c18263402fb13ac1d2e76af7c9fd421e9e961fce515cde88b22df" Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.602498 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-hk5zc" Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.606194 5039 generic.go:334] "Generic (PLEG): container finished" podID="77b835a6-4f17-4e1c-a3cc-847f89116483" containerID="48c68619a50ada8cc1df54d8cada3034bd1087cc54fad3d832f8743974af62f9" exitCode=0 Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.606231 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"77b835a6-4f17-4e1c-a3cc-847f89116483","Type":"ContainerDied","Data":"48c68619a50ada8cc1df54d8cada3034bd1087cc54fad3d832f8743974af62f9"} Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.691664 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brbs9\" (UniqueName: \"kubernetes.io/projected/82817f40-cc0c-40f3-b620-0db4e6db8bd6-kube-api-access-brbs9\") pod \"82817f40-cc0c-40f3-b620-0db4e6db8bd6\" (UID: \"82817f40-cc0c-40f3-b620-0db4e6db8bd6\") " Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.691802 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82817f40-cc0c-40f3-b620-0db4e6db8bd6-dns-svc\") pod \"82817f40-cc0c-40f3-b620-0db4e6db8bd6\" (UID: \"82817f40-cc0c-40f3-b620-0db4e6db8bd6\") " Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.691871 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/82817f40-cc0c-40f3-b620-0db4e6db8bd6-dns-swift-storage-0\") pod \"82817f40-cc0c-40f3-b620-0db4e6db8bd6\" (UID: \"82817f40-cc0c-40f3-b620-0db4e6db8bd6\") " Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.691901 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82817f40-cc0c-40f3-b620-0db4e6db8bd6-ovsdbserver-nb\") pod \"82817f40-cc0c-40f3-b620-0db4e6db8bd6\" (UID: \"82817f40-cc0c-40f3-b620-0db4e6db8bd6\") " Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.692071 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82817f40-cc0c-40f3-b620-0db4e6db8bd6-config\") pod \"82817f40-cc0c-40f3-b620-0db4e6db8bd6\" (UID: \"82817f40-cc0c-40f3-b620-0db4e6db8bd6\") " Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.692229 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82817f40-cc0c-40f3-b620-0db4e6db8bd6-ovsdbserver-sb\") pod \"82817f40-cc0c-40f3-b620-0db4e6db8bd6\" (UID: \"82817f40-cc0c-40f3-b620-0db4e6db8bd6\") " Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.727608 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82817f40-cc0c-40f3-b620-0db4e6db8bd6-kube-api-access-brbs9" (OuterVolumeSpecName: "kube-api-access-brbs9") pod "82817f40-cc0c-40f3-b620-0db4e6db8bd6" (UID: "82817f40-cc0c-40f3-b620-0db4e6db8bd6"). InnerVolumeSpecName "kube-api-access-brbs9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.754821 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82817f40-cc0c-40f3-b620-0db4e6db8bd6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "82817f40-cc0c-40f3-b620-0db4e6db8bd6" (UID: "82817f40-cc0c-40f3-b620-0db4e6db8bd6"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.773652 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82817f40-cc0c-40f3-b620-0db4e6db8bd6-config" (OuterVolumeSpecName: "config") pod "82817f40-cc0c-40f3-b620-0db4e6db8bd6" (UID: "82817f40-cc0c-40f3-b620-0db4e6db8bd6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.781151 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82817f40-cc0c-40f3-b620-0db4e6db8bd6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "82817f40-cc0c-40f3-b620-0db4e6db8bd6" (UID: "82817f40-cc0c-40f3-b620-0db4e6db8bd6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.785608 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82817f40-cc0c-40f3-b620-0db4e6db8bd6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "82817f40-cc0c-40f3-b620-0db4e6db8bd6" (UID: "82817f40-cc0c-40f3-b620-0db4e6db8bd6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.791117 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82817f40-cc0c-40f3-b620-0db4e6db8bd6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "82817f40-cc0c-40f3-b620-0db4e6db8bd6" (UID: "82817f40-cc0c-40f3-b620-0db4e6db8bd6"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.795068 5039 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82817f40-cc0c-40f3-b620-0db4e6db8bd6-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.795099 5039 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/82817f40-cc0c-40f3-b620-0db4e6db8bd6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.795113 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82817f40-cc0c-40f3-b620-0db4e6db8bd6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.795127 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82817f40-cc0c-40f3-b620-0db4e6db8bd6-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.795138 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82817f40-cc0c-40f3-b620-0db4e6db8bd6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.795152 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brbs9\" (UniqueName: \"kubernetes.io/projected/82817f40-cc0c-40f3-b620-0db4e6db8bd6-kube-api-access-brbs9\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.803560 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.895795 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/77b835a6-4f17-4e1c-a3cc-847f89116483-config-data-custom\") pod \"77b835a6-4f17-4e1c-a3cc-847f89116483\" (UID: \"77b835a6-4f17-4e1c-a3cc-847f89116483\") " Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.895845 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77b835a6-4f17-4e1c-a3cc-847f89116483-combined-ca-bundle\") pod \"77b835a6-4f17-4e1c-a3cc-847f89116483\" (UID: \"77b835a6-4f17-4e1c-a3cc-847f89116483\") " Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.895885 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77b835a6-4f17-4e1c-a3cc-847f89116483-scripts\") pod \"77b835a6-4f17-4e1c-a3cc-847f89116483\" (UID: \"77b835a6-4f17-4e1c-a3cc-847f89116483\") " Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.896003 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hb2xg\" (UniqueName: \"kubernetes.io/projected/77b835a6-4f17-4e1c-a3cc-847f89116483-kube-api-access-hb2xg\") pod \"77b835a6-4f17-4e1c-a3cc-847f89116483\" (UID: \"77b835a6-4f17-4e1c-a3cc-847f89116483\") " Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.896102 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77b835a6-4f17-4e1c-a3cc-847f89116483-config-data\") pod 
\"77b835a6-4f17-4e1c-a3cc-847f89116483\" (UID: \"77b835a6-4f17-4e1c-a3cc-847f89116483\") " Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.896139 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/77b835a6-4f17-4e1c-a3cc-847f89116483-etc-machine-id\") pod \"77b835a6-4f17-4e1c-a3cc-847f89116483\" (UID: \"77b835a6-4f17-4e1c-a3cc-847f89116483\") " Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.896697 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77b835a6-4f17-4e1c-a3cc-847f89116483-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "77b835a6-4f17-4e1c-a3cc-847f89116483" (UID: "77b835a6-4f17-4e1c-a3cc-847f89116483"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.901537 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77b835a6-4f17-4e1c-a3cc-847f89116483-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "77b835a6-4f17-4e1c-a3cc-847f89116483" (UID: "77b835a6-4f17-4e1c-a3cc-847f89116483"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.902230 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77b835a6-4f17-4e1c-a3cc-847f89116483-scripts" (OuterVolumeSpecName: "scripts") pod "77b835a6-4f17-4e1c-a3cc-847f89116483" (UID: "77b835a6-4f17-4e1c-a3cc-847f89116483"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.908251 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77b835a6-4f17-4e1c-a3cc-847f89116483-kube-api-access-hb2xg" (OuterVolumeSpecName: "kube-api-access-hb2xg") pod "77b835a6-4f17-4e1c-a3cc-847f89116483" (UID: "77b835a6-4f17-4e1c-a3cc-847f89116483"). InnerVolumeSpecName "kube-api-access-hb2xg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.958910 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77b835a6-4f17-4e1c-a3cc-847f89116483-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "77b835a6-4f17-4e1c-a3cc-847f89116483" (UID: "77b835a6-4f17-4e1c-a3cc-847f89116483"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.997105 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77b835a6-4f17-4e1c-a3cc-847f89116483-config-data" (OuterVolumeSpecName: "config-data") pod "77b835a6-4f17-4e1c-a3cc-847f89116483" (UID: "77b835a6-4f17-4e1c-a3cc-847f89116483"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.998088 5039 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/77b835a6-4f17-4e1c-a3cc-847f89116483-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.998117 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77b835a6-4f17-4e1c-a3cc-847f89116483-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.998128 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77b835a6-4f17-4e1c-a3cc-847f89116483-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.998137 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hb2xg\" (UniqueName: \"kubernetes.io/projected/77b835a6-4f17-4e1c-a3cc-847f89116483-kube-api-access-hb2xg\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.998148 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77b835a6-4f17-4e1c-a3cc-847f89116483-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.998156 5039 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/77b835a6-4f17-4e1c-a3cc-847f89116483-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.617178 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"77b835a6-4f17-4e1c-a3cc-847f89116483","Type":"ContainerDied","Data":"8b4e01f432cd0c7377d67bd22682298770c6198935a20ece2693cb8ca90d535e"} Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.617201 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.617219 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-hk5zc" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.617493 5039 scope.go:117] "RemoveContainer" containerID="d879620bdd58ffdce74d7144f52c7477018b7f2d590ea0375fc4e1924d6fd912" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.657975 5039 scope.go:117] "RemoveContainer" containerID="48c68619a50ada8cc1df54d8cada3034bd1087cc54fad3d832f8743974af62f9" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.664684 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.684239 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.695120 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-hk5zc"] Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.718077 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-hk5zc"] Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.725786 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 13:25:21 crc kubenswrapper[5039]: E0130 13:25:21.726255 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dddd2ab-85b5-4431-a111-dbb5ebff91d9" containerName="barbican-api" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.726286 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dddd2ab-85b5-4431-a111-dbb5ebff91d9" containerName="barbican-api" Jan 30 13:25:21 crc kubenswrapper[5039]: E0130 13:25:21.726295 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77b835a6-4f17-4e1c-a3cc-847f89116483" containerName="probe" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.726302 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="77b835a6-4f17-4e1c-a3cc-847f89116483" containerName="probe" Jan 30 13:25:21 crc kubenswrapper[5039]: E0130 13:25:21.726309 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82817f40-cc0c-40f3-b620-0db4e6db8bd6" containerName="dnsmasq-dns" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.726316 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="82817f40-cc0c-40f3-b620-0db4e6db8bd6" containerName="dnsmasq-dns" Jan 30 13:25:21 crc kubenswrapper[5039]: E0130 13:25:21.726328 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82817f40-cc0c-40f3-b620-0db4e6db8bd6" containerName="init" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.726333 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="82817f40-cc0c-40f3-b620-0db4e6db8bd6" containerName="init" Jan 30 13:25:21 crc kubenswrapper[5039]: E0130 13:25:21.726346 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77b835a6-4f17-4e1c-a3cc-847f89116483" containerName="cinder-scheduler" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.726352 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="77b835a6-4f17-4e1c-a3cc-847f89116483" containerName="cinder-scheduler" Jan 30 13:25:21 crc kubenswrapper[5039]: E0130 13:25:21.726365 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dddd2ab-85b5-4431-a111-dbb5ebff91d9" containerName="barbican-api-log" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.726373 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dddd2ab-85b5-4431-a111-dbb5ebff91d9" containerName="barbican-api-log" Jan 30 
13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.726528 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="77b835a6-4f17-4e1c-a3cc-847f89116483" containerName="cinder-scheduler" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.726546 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="82817f40-cc0c-40f3-b620-0db4e6db8bd6" containerName="dnsmasq-dns" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.726553 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="77b835a6-4f17-4e1c-a3cc-847f89116483" containerName="probe" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.726564 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dddd2ab-85b5-4431-a111-dbb5ebff91d9" containerName="barbican-api-log" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.726571 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dddd2ab-85b5-4431-a111-dbb5ebff91d9" containerName="barbican-api" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.727478 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.734289 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.738880 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.814199 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6a7de18-5bf6-4275-b6db-f19701d07001-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f6a7de18-5bf6-4275-b6db-f19701d07001\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.814261 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6a7de18-5bf6-4275-b6db-f19701d07001-config-data\") pod \"cinder-scheduler-0\" (UID: \"f6a7de18-5bf6-4275-b6db-f19701d07001\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.814305 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6a7de18-5bf6-4275-b6db-f19701d07001-scripts\") pod \"cinder-scheduler-0\" (UID: \"f6a7de18-5bf6-4275-b6db-f19701d07001\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.814349 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f6a7de18-5bf6-4275-b6db-f19701d07001-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f6a7de18-5bf6-4275-b6db-f19701d07001\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.814378 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f6a7de18-5bf6-4275-b6db-f19701d07001-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f6a7de18-5bf6-4275-b6db-f19701d07001\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.814413 5039 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5brp\" (UniqueName: \"kubernetes.io/projected/f6a7de18-5bf6-4275-b6db-f19701d07001-kube-api-access-z5brp\") pod \"cinder-scheduler-0\" (UID: \"f6a7de18-5bf6-4275-b6db-f19701d07001\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.916032 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f6a7de18-5bf6-4275-b6db-f19701d07001-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f6a7de18-5bf6-4275-b6db-f19701d07001\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.916082 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f6a7de18-5bf6-4275-b6db-f19701d07001-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f6a7de18-5bf6-4275-b6db-f19701d07001\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.916114 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5brp\" (UniqueName: \"kubernetes.io/projected/f6a7de18-5bf6-4275-b6db-f19701d07001-kube-api-access-z5brp\") pod \"cinder-scheduler-0\" (UID: \"f6a7de18-5bf6-4275-b6db-f19701d07001\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.916190 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f6a7de18-5bf6-4275-b6db-f19701d07001-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f6a7de18-5bf6-4275-b6db-f19701d07001\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.916294 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6a7de18-5bf6-4275-b6db-f19701d07001-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f6a7de18-5bf6-4275-b6db-f19701d07001\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.916325 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6a7de18-5bf6-4275-b6db-f19701d07001-config-data\") pod \"cinder-scheduler-0\" (UID: \"f6a7de18-5bf6-4275-b6db-f19701d07001\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.916402 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6a7de18-5bf6-4275-b6db-f19701d07001-scripts\") pod \"cinder-scheduler-0\" (UID: \"f6a7de18-5bf6-4275-b6db-f19701d07001\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.921836 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6a7de18-5bf6-4275-b6db-f19701d07001-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f6a7de18-5bf6-4275-b6db-f19701d07001\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.922185 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6a7de18-5bf6-4275-b6db-f19701d07001-config-data\") pod \"cinder-scheduler-0\" (UID: 
\"f6a7de18-5bf6-4275-b6db-f19701d07001\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.922680 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f6a7de18-5bf6-4275-b6db-f19701d07001-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f6a7de18-5bf6-4275-b6db-f19701d07001\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.932533 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6a7de18-5bf6-4275-b6db-f19701d07001-scripts\") pod \"cinder-scheduler-0\" (UID: \"f6a7de18-5bf6-4275-b6db-f19701d07001\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.943050 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5brp\" (UniqueName: \"kubernetes.io/projected/f6a7de18-5bf6-4275-b6db-f19701d07001-kube-api-access-z5brp\") pod \"cinder-scheduler-0\" (UID: \"f6a7de18-5bf6-4275-b6db-f19701d07001\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:22 crc kubenswrapper[5039]: I0130 13:25:22.049986 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 13:25:22 crc kubenswrapper[5039]: I0130 13:25:22.108110 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77b835a6-4f17-4e1c-a3cc-847f89116483" path="/var/lib/kubelet/pods/77b835a6-4f17-4e1c-a3cc-847f89116483/volumes" Jan 30 13:25:22 crc kubenswrapper[5039]: I0130 13:25:22.108827 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82817f40-cc0c-40f3-b620-0db4e6db8bd6" path="/var/lib/kubelet/pods/82817f40-cc0c-40f3-b620-0db4e6db8bd6/volumes" Jan 30 13:25:22 crc kubenswrapper[5039]: I0130 13:25:22.508588 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 13:25:22 crc kubenswrapper[5039]: I0130 13:25:22.629001 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f6a7de18-5bf6-4275-b6db-f19701d07001","Type":"ContainerStarted","Data":"8b3af9bb7a9ebad1ffd7ea8f4cc6051b5a4ce1bd449b1f818c855ceb287dbe17"} Jan 30 13:25:23 crc kubenswrapper[5039]: I0130 13:25:23.641382 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f6a7de18-5bf6-4275-b6db-f19701d07001","Type":"ContainerStarted","Data":"257994bea3aa4d461d8ec0930db0b9b8b1ca22fbebd2eeed081b5830cad35d88"} Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.021263 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-757b86cf47-brmgg"] Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.024490 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-757b86cf47-brmgg" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.026404 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.026522 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.028492 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.030001 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-757b86cf47-brmgg"] Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.053993 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/157fc077-2a87-4a57-b9a1-728b9acba2a1-config-data\") pod \"swift-proxy-757b86cf47-brmgg\" (UID: \"157fc077-2a87-4a57-b9a1-728b9acba2a1\") " pod="openstack/swift-proxy-757b86cf47-brmgg" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.054059 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/157fc077-2a87-4a57-b9a1-728b9acba2a1-public-tls-certs\") pod \"swift-proxy-757b86cf47-brmgg\" (UID: \"157fc077-2a87-4a57-b9a1-728b9acba2a1\") " pod="openstack/swift-proxy-757b86cf47-brmgg" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.054123 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/157fc077-2a87-4a57-b9a1-728b9acba2a1-combined-ca-bundle\") pod \"swift-proxy-757b86cf47-brmgg\" (UID: \"157fc077-2a87-4a57-b9a1-728b9acba2a1\") " pod="openstack/swift-proxy-757b86cf47-brmgg" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.054158 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/157fc077-2a87-4a57-b9a1-728b9acba2a1-internal-tls-certs\") pod \"swift-proxy-757b86cf47-brmgg\" (UID: \"157fc077-2a87-4a57-b9a1-728b9acba2a1\") " pod="openstack/swift-proxy-757b86cf47-brmgg" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.054315 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2rvv\" (UniqueName: \"kubernetes.io/projected/157fc077-2a87-4a57-b9a1-728b9acba2a1-kube-api-access-w2rvv\") pod \"swift-proxy-757b86cf47-brmgg\" (UID: \"157fc077-2a87-4a57-b9a1-728b9acba2a1\") " pod="openstack/swift-proxy-757b86cf47-brmgg" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.054521 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/157fc077-2a87-4a57-b9a1-728b9acba2a1-log-httpd\") pod \"swift-proxy-757b86cf47-brmgg\" (UID: \"157fc077-2a87-4a57-b9a1-728b9acba2a1\") " pod="openstack/swift-proxy-757b86cf47-brmgg" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.054576 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/157fc077-2a87-4a57-b9a1-728b9acba2a1-run-httpd\") pod \"swift-proxy-757b86cf47-brmgg\" (UID: \"157fc077-2a87-4a57-b9a1-728b9acba2a1\") " 
pod="openstack/swift-proxy-757b86cf47-brmgg" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.054602 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/157fc077-2a87-4a57-b9a1-728b9acba2a1-etc-swift\") pod \"swift-proxy-757b86cf47-brmgg\" (UID: \"157fc077-2a87-4a57-b9a1-728b9acba2a1\") " pod="openstack/swift-proxy-757b86cf47-brmgg" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.155887 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/157fc077-2a87-4a57-b9a1-728b9acba2a1-combined-ca-bundle\") pod \"swift-proxy-757b86cf47-brmgg\" (UID: \"157fc077-2a87-4a57-b9a1-728b9acba2a1\") " pod="openstack/swift-proxy-757b86cf47-brmgg" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.155957 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/157fc077-2a87-4a57-b9a1-728b9acba2a1-internal-tls-certs\") pod \"swift-proxy-757b86cf47-brmgg\" (UID: \"157fc077-2a87-4a57-b9a1-728b9acba2a1\") " pod="openstack/swift-proxy-757b86cf47-brmgg" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.156068 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2rvv\" (UniqueName: \"kubernetes.io/projected/157fc077-2a87-4a57-b9a1-728b9acba2a1-kube-api-access-w2rvv\") pod \"swift-proxy-757b86cf47-brmgg\" (UID: \"157fc077-2a87-4a57-b9a1-728b9acba2a1\") " pod="openstack/swift-proxy-757b86cf47-brmgg" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.156155 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/157fc077-2a87-4a57-b9a1-728b9acba2a1-log-httpd\") pod \"swift-proxy-757b86cf47-brmgg\" (UID: \"157fc077-2a87-4a57-b9a1-728b9acba2a1\") " pod="openstack/swift-proxy-757b86cf47-brmgg" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.156200 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/157fc077-2a87-4a57-b9a1-728b9acba2a1-run-httpd\") pod \"swift-proxy-757b86cf47-brmgg\" (UID: \"157fc077-2a87-4a57-b9a1-728b9acba2a1\") " pod="openstack/swift-proxy-757b86cf47-brmgg" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.156225 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/157fc077-2a87-4a57-b9a1-728b9acba2a1-etc-swift\") pod \"swift-proxy-757b86cf47-brmgg\" (UID: \"157fc077-2a87-4a57-b9a1-728b9acba2a1\") " pod="openstack/swift-proxy-757b86cf47-brmgg" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.156288 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/157fc077-2a87-4a57-b9a1-728b9acba2a1-config-data\") pod \"swift-proxy-757b86cf47-brmgg\" (UID: \"157fc077-2a87-4a57-b9a1-728b9acba2a1\") " pod="openstack/swift-proxy-757b86cf47-brmgg" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.156322 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/157fc077-2a87-4a57-b9a1-728b9acba2a1-public-tls-certs\") pod \"swift-proxy-757b86cf47-brmgg\" (UID: \"157fc077-2a87-4a57-b9a1-728b9acba2a1\") " pod="openstack/swift-proxy-757b86cf47-brmgg" Jan 30 
13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.158926 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/157fc077-2a87-4a57-b9a1-728b9acba2a1-run-httpd\") pod \"swift-proxy-757b86cf47-brmgg\" (UID: \"157fc077-2a87-4a57-b9a1-728b9acba2a1\") " pod="openstack/swift-proxy-757b86cf47-brmgg" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.159622 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/157fc077-2a87-4a57-b9a1-728b9acba2a1-log-httpd\") pod \"swift-proxy-757b86cf47-brmgg\" (UID: \"157fc077-2a87-4a57-b9a1-728b9acba2a1\") " pod="openstack/swift-proxy-757b86cf47-brmgg" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.166629 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/157fc077-2a87-4a57-b9a1-728b9acba2a1-combined-ca-bundle\") pod \"swift-proxy-757b86cf47-brmgg\" (UID: \"157fc077-2a87-4a57-b9a1-728b9acba2a1\") " pod="openstack/swift-proxy-757b86cf47-brmgg" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.175278 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/157fc077-2a87-4a57-b9a1-728b9acba2a1-public-tls-certs\") pod \"swift-proxy-757b86cf47-brmgg\" (UID: \"157fc077-2a87-4a57-b9a1-728b9acba2a1\") " pod="openstack/swift-proxy-757b86cf47-brmgg" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.176853 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/157fc077-2a87-4a57-b9a1-728b9acba2a1-internal-tls-certs\") pod \"swift-proxy-757b86cf47-brmgg\" (UID: \"157fc077-2a87-4a57-b9a1-728b9acba2a1\") " pod="openstack/swift-proxy-757b86cf47-brmgg" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.180918 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2rvv\" (UniqueName: \"kubernetes.io/projected/157fc077-2a87-4a57-b9a1-728b9acba2a1-kube-api-access-w2rvv\") pod \"swift-proxy-757b86cf47-brmgg\" (UID: \"157fc077-2a87-4a57-b9a1-728b9acba2a1\") " pod="openstack/swift-proxy-757b86cf47-brmgg" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.182131 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/157fc077-2a87-4a57-b9a1-728b9acba2a1-etc-swift\") pod \"swift-proxy-757b86cf47-brmgg\" (UID: \"157fc077-2a87-4a57-b9a1-728b9acba2a1\") " pod="openstack/swift-proxy-757b86cf47-brmgg" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.182681 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/157fc077-2a87-4a57-b9a1-728b9acba2a1-config-data\") pod \"swift-proxy-757b86cf47-brmgg\" (UID: \"157fc077-2a87-4a57-b9a1-728b9acba2a1\") " pod="openstack/swift-proxy-757b86cf47-brmgg" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.374796 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-757b86cf47-brmgg" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.654846 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f6a7de18-5bf6-4275-b6db-f19701d07001","Type":"ContainerStarted","Data":"4ced8998271ec1e934a1c34f39c4cc277022e88ff34907d478325bce8a489b7b"} Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.681278 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.681257761 podStartE2EDuration="3.681257761s" podCreationTimestamp="2026-01-30 13:25:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:25:24.673754601 +0000 UTC m=+1289.334435838" watchObservedRunningTime="2026-01-30 13:25:24.681257761 +0000 UTC m=+1289.341938988" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.849739 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.850816 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.856367 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.856705 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.856899 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-lkl2h" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.863884 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.890090 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.974218 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/268ed38d-d02d-4539-be5c-f461fde5d02b-openstack-config-secret\") pod \"openstackclient\" (UID: \"268ed38d-d02d-4539-be5c-f461fde5d02b\") " pod="openstack/openstackclient" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.974263 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/268ed38d-d02d-4539-be5c-f461fde5d02b-combined-ca-bundle\") pod \"openstackclient\" (UID: \"268ed38d-d02d-4539-be5c-f461fde5d02b\") " pod="openstack/openstackclient" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.974633 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4rnw\" (UniqueName: \"kubernetes.io/projected/268ed38d-d02d-4539-be5c-f461fde5d02b-kube-api-access-h4rnw\") pod \"openstackclient\" (UID: \"268ed38d-d02d-4539-be5c-f461fde5d02b\") " pod="openstack/openstackclient" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.974779 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/268ed38d-d02d-4539-be5c-f461fde5d02b-openstack-config\") 
pod \"openstackclient\" (UID: \"268ed38d-d02d-4539-be5c-f461fde5d02b\") " pod="openstack/openstackclient" Jan 30 13:25:25 crc kubenswrapper[5039]: I0130 13:25:25.040753 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-757b86cf47-brmgg"] Jan 30 13:25:25 crc kubenswrapper[5039]: I0130 13:25:25.077266 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/268ed38d-d02d-4539-be5c-f461fde5d02b-openstack-config-secret\") pod \"openstackclient\" (UID: \"268ed38d-d02d-4539-be5c-f461fde5d02b\") " pod="openstack/openstackclient" Jan 30 13:25:25 crc kubenswrapper[5039]: I0130 13:25:25.077579 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/268ed38d-d02d-4539-be5c-f461fde5d02b-combined-ca-bundle\") pod \"openstackclient\" (UID: \"268ed38d-d02d-4539-be5c-f461fde5d02b\") " pod="openstack/openstackclient" Jan 30 13:25:25 crc kubenswrapper[5039]: I0130 13:25:25.077673 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4rnw\" (UniqueName: \"kubernetes.io/projected/268ed38d-d02d-4539-be5c-f461fde5d02b-kube-api-access-h4rnw\") pod \"openstackclient\" (UID: \"268ed38d-d02d-4539-be5c-f461fde5d02b\") " pod="openstack/openstackclient" Jan 30 13:25:25 crc kubenswrapper[5039]: I0130 13:25:25.077732 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/268ed38d-d02d-4539-be5c-f461fde5d02b-openstack-config\") pod \"openstackclient\" (UID: \"268ed38d-d02d-4539-be5c-f461fde5d02b\") " pod="openstack/openstackclient" Jan 30 13:25:25 crc kubenswrapper[5039]: I0130 13:25:25.079771 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/268ed38d-d02d-4539-be5c-f461fde5d02b-openstack-config\") pod \"openstackclient\" (UID: \"268ed38d-d02d-4539-be5c-f461fde5d02b\") " pod="openstack/openstackclient" Jan 30 13:25:25 crc kubenswrapper[5039]: I0130 13:25:25.083955 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/268ed38d-d02d-4539-be5c-f461fde5d02b-openstack-config-secret\") pod \"openstackclient\" (UID: \"268ed38d-d02d-4539-be5c-f461fde5d02b\") " pod="openstack/openstackclient" Jan 30 13:25:25 crc kubenswrapper[5039]: I0130 13:25:25.087468 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/268ed38d-d02d-4539-be5c-f461fde5d02b-combined-ca-bundle\") pod \"openstackclient\" (UID: \"268ed38d-d02d-4539-be5c-f461fde5d02b\") " pod="openstack/openstackclient" Jan 30 13:25:25 crc kubenswrapper[5039]: I0130 13:25:25.103045 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4rnw\" (UniqueName: \"kubernetes.io/projected/268ed38d-d02d-4539-be5c-f461fde5d02b-kube-api-access-h4rnw\") pod \"openstackclient\" (UID: \"268ed38d-d02d-4539-be5c-f461fde5d02b\") " pod="openstack/openstackclient" Jan 30 13:25:25 crc kubenswrapper[5039]: I0130 13:25:25.171531 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 30 13:25:25 crc kubenswrapper[5039]: I0130 13:25:25.689245 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 30 13:25:25 crc kubenswrapper[5039]: W0130 13:25:25.693311 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod268ed38d_d02d_4539_be5c_f461fde5d02b.slice/crio-3eed219c976767ccf6cdd46dfb2f6557081169c14193d7d704d0addd82865d96 WatchSource:0}: Error finding container 3eed219c976767ccf6cdd46dfb2f6557081169c14193d7d704d0addd82865d96: Status 404 returned error can't find the container with id 3eed219c976767ccf6cdd46dfb2f6557081169c14193d7d704d0addd82865d96 Jan 30 13:25:25 crc kubenswrapper[5039]: I0130 13:25:25.696165 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-757b86cf47-brmgg" event={"ID":"157fc077-2a87-4a57-b9a1-728b9acba2a1","Type":"ContainerStarted","Data":"094a807571387ff4805693309488834e6f3f5cad2c362f2ee53edc66d902cec6"} Jan 30 13:25:25 crc kubenswrapper[5039]: I0130 13:25:25.696534 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-757b86cf47-brmgg" event={"ID":"157fc077-2a87-4a57-b9a1-728b9acba2a1","Type":"ContainerStarted","Data":"84d19c63702524f48c72032f314689ed3ffad0e9b5241a6bf0ee9148cae27b33"} Jan 30 13:25:25 crc kubenswrapper[5039]: I0130 13:25:25.696545 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-757b86cf47-brmgg" event={"ID":"157fc077-2a87-4a57-b9a1-728b9acba2a1","Type":"ContainerStarted","Data":"1a2f3b92f7dbd05a8584f495ea2d9a54290b966f57c172d4802d9d992e87df0f"} Jan 30 13:25:25 crc kubenswrapper[5039]: I0130 13:25:25.728745 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-757b86cf47-brmgg" podStartSLOduration=2.7287253849999997 podStartE2EDuration="2.728725385s" podCreationTimestamp="2026-01-30 13:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:25:25.723259468 +0000 UTC m=+1290.383940705" watchObservedRunningTime="2026-01-30 13:25:25.728725385 +0000 UTC m=+1290.389406612" Jan 30 13:25:25 crc kubenswrapper[5039]: I0130 13:25:25.894508 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-p4jkx"] Jan 30 13:25:25 crc kubenswrapper[5039]: I0130 13:25:25.895562 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-p4jkx" Jan 30 13:25:25 crc kubenswrapper[5039]: I0130 13:25:25.905643 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-p4jkx"] Jan 30 13:25:25 crc kubenswrapper[5039]: I0130 13:25:25.990253 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-dtths"] Jan 30 13:25:25 crc kubenswrapper[5039]: I0130 13:25:25.991724 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-dtths" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.004209 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv97h\" (UniqueName: \"kubernetes.io/projected/cde91080-bc38-44b5-986f-6712c73de0ec-kube-api-access-nv97h\") pod \"nova-api-db-create-p4jkx\" (UID: \"cde91080-bc38-44b5-986f-6712c73de0ec\") " pod="openstack/nova-api-db-create-p4jkx" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.004272 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cde91080-bc38-44b5-986f-6712c73de0ec-operator-scripts\") pod \"nova-api-db-create-p4jkx\" (UID: \"cde91080-bc38-44b5-986f-6712c73de0ec\") " pod="openstack/nova-api-db-create-p4jkx" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.029072 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-4e5c-account-create-update-r4vnt"] Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.030209 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-4e5c-account-create-update-r4vnt" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.032451 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.049299 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-dtths"] Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.061065 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-4e5c-account-create-update-r4vnt"] Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.105351 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21db3ccc-3757-44b9-9f63-835f790c4321-operator-scripts\") pod \"nova-cell0-db-create-dtths\" (UID: \"21db3ccc-3757-44b9-9f63-835f790c4321\") " pod="openstack/nova-cell0-db-create-dtths" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.105407 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nv97h\" (UniqueName: \"kubernetes.io/projected/cde91080-bc38-44b5-986f-6712c73de0ec-kube-api-access-nv97h\") pod \"nova-api-db-create-p4jkx\" (UID: \"cde91080-bc38-44b5-986f-6712c73de0ec\") " pod="openstack/nova-api-db-create-p4jkx" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.105434 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zgcb\" (UniqueName: \"kubernetes.io/projected/4268e11c-c142-453b-a3c1-15696f9b21e5-kube-api-access-4zgcb\") pod \"nova-api-4e5c-account-create-update-r4vnt\" (UID: \"4268e11c-c142-453b-a3c1-15696f9b21e5\") " pod="openstack/nova-api-4e5c-account-create-update-r4vnt" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.105472 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cde91080-bc38-44b5-986f-6712c73de0ec-operator-scripts\") pod \"nova-api-db-create-p4jkx\" (UID: \"cde91080-bc38-44b5-986f-6712c73de0ec\") " pod="openstack/nova-api-db-create-p4jkx" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.105508 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4268e11c-c142-453b-a3c1-15696f9b21e5-operator-scripts\") pod \"nova-api-4e5c-account-create-update-r4vnt\" (UID: \"4268e11c-c142-453b-a3c1-15696f9b21e5\") " pod="openstack/nova-api-4e5c-account-create-update-r4vnt" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.105550 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcxxt\" (UniqueName: \"kubernetes.io/projected/21db3ccc-3757-44b9-9f63-835f790c4321-kube-api-access-kcxxt\") pod \"nova-cell0-db-create-dtths\" (UID: \"21db3ccc-3757-44b9-9f63-835f790c4321\") " pod="openstack/nova-cell0-db-create-dtths" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.106602 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cde91080-bc38-44b5-986f-6712c73de0ec-operator-scripts\") pod \"nova-api-db-create-p4jkx\" (UID: \"cde91080-bc38-44b5-986f-6712c73de0ec\") " pod="openstack/nova-api-db-create-p4jkx" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.121418 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-lzbm7"] Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.122982 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-lzbm7" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.131034 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-lzbm7"] Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.176640 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nv97h\" (UniqueName: \"kubernetes.io/projected/cde91080-bc38-44b5-986f-6712c73de0ec-kube-api-access-nv97h\") pod \"nova-api-db-create-p4jkx\" (UID: \"cde91080-bc38-44b5-986f-6712c73de0ec\") " pod="openstack/nova-api-db-create-p4jkx" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.208524 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21db3ccc-3757-44b9-9f63-835f790c4321-operator-scripts\") pod \"nova-cell0-db-create-dtths\" (UID: \"21db3ccc-3757-44b9-9f63-835f790c4321\") " pod="openstack/nova-cell0-db-create-dtths" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.208605 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91bf7602-3edd-424d-a6a0-a5a1097fd3ba-operator-scripts\") pod \"nova-cell1-db-create-lzbm7\" (UID: \"91bf7602-3edd-424d-a6a0-a5a1097fd3ba\") " pod="openstack/nova-cell1-db-create-lzbm7" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.208639 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zgcb\" (UniqueName: \"kubernetes.io/projected/4268e11c-c142-453b-a3c1-15696f9b21e5-kube-api-access-4zgcb\") pod \"nova-api-4e5c-account-create-update-r4vnt\" (UID: \"4268e11c-c142-453b-a3c1-15696f9b21e5\") " pod="openstack/nova-api-4e5c-account-create-update-r4vnt" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.208709 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4268e11c-c142-453b-a3c1-15696f9b21e5-operator-scripts\") pod \"nova-api-4e5c-account-create-update-r4vnt\" (UID: \"4268e11c-c142-453b-a3c1-15696f9b21e5\") " 
pod="openstack/nova-api-4e5c-account-create-update-r4vnt" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.208784 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcxxt\" (UniqueName: \"kubernetes.io/projected/21db3ccc-3757-44b9-9f63-835f790c4321-kube-api-access-kcxxt\") pod \"nova-cell0-db-create-dtths\" (UID: \"21db3ccc-3757-44b9-9f63-835f790c4321\") " pod="openstack/nova-cell0-db-create-dtths" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.208859 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tk2wz\" (UniqueName: \"kubernetes.io/projected/91bf7602-3edd-424d-a6a0-a5a1097fd3ba-kube-api-access-tk2wz\") pod \"nova-cell1-db-create-lzbm7\" (UID: \"91bf7602-3edd-424d-a6a0-a5a1097fd3ba\") " pod="openstack/nova-cell1-db-create-lzbm7" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.209665 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21db3ccc-3757-44b9-9f63-835f790c4321-operator-scripts\") pod \"nova-cell0-db-create-dtths\" (UID: \"21db3ccc-3757-44b9-9f63-835f790c4321\") " pod="openstack/nova-cell0-db-create-dtths" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.210853 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4268e11c-c142-453b-a3c1-15696f9b21e5-operator-scripts\") pod \"nova-api-4e5c-account-create-update-r4vnt\" (UID: \"4268e11c-c142-453b-a3c1-15696f9b21e5\") " pod="openstack/nova-api-4e5c-account-create-update-r4vnt" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.211374 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-p4jkx" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.228952 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-d4ba-account-create-update-kd24m"] Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.230297 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-d4ba-account-create-update-kd24m" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.236650 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.241744 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcxxt\" (UniqueName: \"kubernetes.io/projected/21db3ccc-3757-44b9-9f63-835f790c4321-kube-api-access-kcxxt\") pod \"nova-cell0-db-create-dtths\" (UID: \"21db3ccc-3757-44b9-9f63-835f790c4321\") " pod="openstack/nova-cell0-db-create-dtths" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.244217 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-d4ba-account-create-update-kd24m"] Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.265046 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zgcb\" (UniqueName: \"kubernetes.io/projected/4268e11c-c142-453b-a3c1-15696f9b21e5-kube-api-access-4zgcb\") pod \"nova-api-4e5c-account-create-update-r4vnt\" (UID: \"4268e11c-c142-453b-a3c1-15696f9b21e5\") " pod="openstack/nova-api-4e5c-account-create-update-r4vnt" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.305164 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-dtths" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.313186 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91bf7602-3edd-424d-a6a0-a5a1097fd3ba-operator-scripts\") pod \"nova-cell1-db-create-lzbm7\" (UID: \"91bf7602-3edd-424d-a6a0-a5a1097fd3ba\") " pod="openstack/nova-cell1-db-create-lzbm7" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.313269 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr4kn\" (UniqueName: \"kubernetes.io/projected/c63ad167-cbf8-4da9-83c2-0c66566d7105-kube-api-access-mr4kn\") pod \"nova-cell0-d4ba-account-create-update-kd24m\" (UID: \"c63ad167-cbf8-4da9-83c2-0c66566d7105\") " pod="openstack/nova-cell0-d4ba-account-create-update-kd24m" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.313376 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tk2wz\" (UniqueName: \"kubernetes.io/projected/91bf7602-3edd-424d-a6a0-a5a1097fd3ba-kube-api-access-tk2wz\") pod \"nova-cell1-db-create-lzbm7\" (UID: \"91bf7602-3edd-424d-a6a0-a5a1097fd3ba\") " pod="openstack/nova-cell1-db-create-lzbm7" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.313413 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c63ad167-cbf8-4da9-83c2-0c66566d7105-operator-scripts\") pod \"nova-cell0-d4ba-account-create-update-kd24m\" (UID: \"c63ad167-cbf8-4da9-83c2-0c66566d7105\") " pod="openstack/nova-cell0-d4ba-account-create-update-kd24m" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.314080 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91bf7602-3edd-424d-a6a0-a5a1097fd3ba-operator-scripts\") pod \"nova-cell1-db-create-lzbm7\" (UID: \"91bf7602-3edd-424d-a6a0-a5a1097fd3ba\") " pod="openstack/nova-cell1-db-create-lzbm7" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.341500 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tk2wz\" (UniqueName: \"kubernetes.io/projected/91bf7602-3edd-424d-a6a0-a5a1097fd3ba-kube-api-access-tk2wz\") pod \"nova-cell1-db-create-lzbm7\" (UID: \"91bf7602-3edd-424d-a6a0-a5a1097fd3ba\") " pod="openstack/nova-cell1-db-create-lzbm7" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.344960 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-4e5c-account-create-update-r4vnt" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.415134 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mr4kn\" (UniqueName: \"kubernetes.io/projected/c63ad167-cbf8-4da9-83c2-0c66566d7105-kube-api-access-mr4kn\") pod \"nova-cell0-d4ba-account-create-update-kd24m\" (UID: \"c63ad167-cbf8-4da9-83c2-0c66566d7105\") " pod="openstack/nova-cell0-d4ba-account-create-update-kd24m" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.415594 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c63ad167-cbf8-4da9-83c2-0c66566d7105-operator-scripts\") pod \"nova-cell0-d4ba-account-create-update-kd24m\" (UID: \"c63ad167-cbf8-4da9-83c2-0c66566d7105\") " pod="openstack/nova-cell0-d4ba-account-create-update-kd24m" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.416800 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c63ad167-cbf8-4da9-83c2-0c66566d7105-operator-scripts\") pod \"nova-cell0-d4ba-account-create-update-kd24m\" (UID: \"c63ad167-cbf8-4da9-83c2-0c66566d7105\") " pod="openstack/nova-cell0-d4ba-account-create-update-kd24m" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.425347 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-67cb-account-create-update-rrs4s"] Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.427971 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-67cb-account-create-update-rrs4s" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.430428 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.442343 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-lzbm7" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.456194 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-67cb-account-create-update-rrs4s"] Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.457061 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mr4kn\" (UniqueName: \"kubernetes.io/projected/c63ad167-cbf8-4da9-83c2-0c66566d7105-kube-api-access-mr4kn\") pod \"nova-cell0-d4ba-account-create-update-kd24m\" (UID: \"c63ad167-cbf8-4da9-83c2-0c66566d7105\") " pod="openstack/nova-cell0-d4ba-account-create-update-kd24m" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.459669 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-d4ba-account-create-update-kd24m" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.519791 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33369def-50c6-4216-953b-e1848ff3a90a-operator-scripts\") pod \"nova-cell1-67cb-account-create-update-rrs4s\" (UID: \"33369def-50c6-4216-953b-e1848ff3a90a\") " pod="openstack/nova-cell1-67cb-account-create-update-rrs4s" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.519965 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rztd\" (UniqueName: \"kubernetes.io/projected/33369def-50c6-4216-953b-e1848ff3a90a-kube-api-access-7rztd\") pod \"nova-cell1-67cb-account-create-update-rrs4s\" (UID: \"33369def-50c6-4216-953b-e1848ff3a90a\") " pod="openstack/nova-cell1-67cb-account-create-update-rrs4s" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.628294 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rztd\" (UniqueName: \"kubernetes.io/projected/33369def-50c6-4216-953b-e1848ff3a90a-kube-api-access-7rztd\") pod \"nova-cell1-67cb-account-create-update-rrs4s\" (UID: \"33369def-50c6-4216-953b-e1848ff3a90a\") " pod="openstack/nova-cell1-67cb-account-create-update-rrs4s" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.628438 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33369def-50c6-4216-953b-e1848ff3a90a-operator-scripts\") pod \"nova-cell1-67cb-account-create-update-rrs4s\" (UID: \"33369def-50c6-4216-953b-e1848ff3a90a\") " pod="openstack/nova-cell1-67cb-account-create-update-rrs4s" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.629326 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33369def-50c6-4216-953b-e1848ff3a90a-operator-scripts\") pod \"nova-cell1-67cb-account-create-update-rrs4s\" (UID: \"33369def-50c6-4216-953b-e1848ff3a90a\") " pod="openstack/nova-cell1-67cb-account-create-update-rrs4s" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.646321 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rztd\" (UniqueName: \"kubernetes.io/projected/33369def-50c6-4216-953b-e1848ff3a90a-kube-api-access-7rztd\") pod \"nova-cell1-67cb-account-create-update-rrs4s\" (UID: \"33369def-50c6-4216-953b-e1848ff3a90a\") " pod="openstack/nova-cell1-67cb-account-create-update-rrs4s" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.756668 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"268ed38d-d02d-4539-be5c-f461fde5d02b","Type":"ContainerStarted","Data":"3eed219c976767ccf6cdd46dfb2f6557081169c14193d7d704d0addd82865d96"} Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.757407 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-757b86cf47-brmgg" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.757445 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-757b86cf47-brmgg" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.803518 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-67cb-account-create-update-rrs4s" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.926176 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-p4jkx"] Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.946111 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-dtths"] Jan 30 13:25:27 crc kubenswrapper[5039]: I0130 13:25:27.050649 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 30 13:25:27 crc kubenswrapper[5039]: I0130 13:25:27.105718 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-4e5c-account-create-update-r4vnt"] Jan 30 13:25:27 crc kubenswrapper[5039]: I0130 13:25:27.253074 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-d4ba-account-create-update-kd24m"] Jan 30 13:25:27 crc kubenswrapper[5039]: I0130 13:25:27.268072 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-lzbm7"] Jan 30 13:25:27 crc kubenswrapper[5039]: I0130 13:25:27.282860 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-67cb-account-create-update-rrs4s"] Jan 30 13:25:27 crc kubenswrapper[5039]: I0130 13:25:27.327610 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-68f47564b6-tbx7d" Jan 30 13:25:27 crc kubenswrapper[5039]: I0130 13:25:27.677795 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-68f47564b6-tbx7d" Jan 30 13:25:27 crc kubenswrapper[5039]: I0130 13:25:27.779350 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-lzbm7" event={"ID":"91bf7602-3edd-424d-a6a0-a5a1097fd3ba","Type":"ContainerStarted","Data":"6938c0fa33ad79d6c1eb8fdd28ab6a70e1ce2548c6bbe9944fbaccb121724679"} Jan 30 13:25:27 crc kubenswrapper[5039]: I0130 13:25:27.784898 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-d4ba-account-create-update-kd24m" event={"ID":"c63ad167-cbf8-4da9-83c2-0c66566d7105","Type":"ContainerStarted","Data":"6e0d7add3b4bf74ad62850e0957634303ce2394ceab8600d59fc0d1fe524efaa"} Jan 30 13:25:27 crc kubenswrapper[5039]: I0130 13:25:27.796898 5039 generic.go:334] "Generic (PLEG): container finished" podID="21db3ccc-3757-44b9-9f63-835f790c4321" containerID="b2de02261b9760fafbf28f5fc930ed3c20c0f9f5978244c71f745be070b3d4ce" exitCode=0 Jan 30 13:25:27 crc kubenswrapper[5039]: I0130 13:25:27.797140 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-dtths" event={"ID":"21db3ccc-3757-44b9-9f63-835f790c4321","Type":"ContainerDied","Data":"b2de02261b9760fafbf28f5fc930ed3c20c0f9f5978244c71f745be070b3d4ce"} Jan 30 13:25:27 crc kubenswrapper[5039]: I0130 13:25:27.797242 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-dtths" event={"ID":"21db3ccc-3757-44b9-9f63-835f790c4321","Type":"ContainerStarted","Data":"426dac086386a4ee224e7b13b606c8c983ad98cb3e52b02191ceb1830fa03580"} Jan 30 13:25:27 crc kubenswrapper[5039]: I0130 13:25:27.804198 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-4e5c-account-create-update-r4vnt" event={"ID":"4268e11c-c142-453b-a3c1-15696f9b21e5","Type":"ContainerStarted","Data":"62a510ecd7c1fc0a3bfbbc56a7e59870520ffbc22ccb564f0d522a31588be3f0"} Jan 30 13:25:27 crc kubenswrapper[5039]: I0130 13:25:27.805357 
5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-p4jkx" event={"ID":"cde91080-bc38-44b5-986f-6712c73de0ec","Type":"ContainerStarted","Data":"c88f2949fe87df8d9d04ad62f6e10def4968f2f2133ac38e643c563ccc3ea2f4"} Jan 30 13:25:27 crc kubenswrapper[5039]: I0130 13:25:27.805382 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-p4jkx" event={"ID":"cde91080-bc38-44b5-986f-6712c73de0ec","Type":"ContainerStarted","Data":"8a666dd0c0c279c7ac16e1f87dcf374e32edfb56359a915f7383b0e400fb3c13"} Jan 30 13:25:27 crc kubenswrapper[5039]: I0130 13:25:27.811409 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-67cb-account-create-update-rrs4s" event={"ID":"33369def-50c6-4216-953b-e1848ff3a90a","Type":"ContainerStarted","Data":"eda7a1826d5cf9e4287c182d5e1ced546eb74def651fc4e26523a040412eca75"} Jan 30 13:25:27 crc kubenswrapper[5039]: I0130 13:25:27.853120 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-p4jkx" podStartSLOduration=2.853099199 podStartE2EDuration="2.853099199s" podCreationTimestamp="2026-01-30 13:25:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:25:27.839207037 +0000 UTC m=+1292.499888264" watchObservedRunningTime="2026-01-30 13:25:27.853099199 +0000 UTC m=+1292.513780426" Jan 30 13:25:28 crc kubenswrapper[5039]: I0130 13:25:28.840474 5039 generic.go:334] "Generic (PLEG): container finished" podID="33369def-50c6-4216-953b-e1848ff3a90a" containerID="a21a34b25da48e58cbf267f6a56faea32936fec24341c8fc65c0c8fff27a3bda" exitCode=0 Jan 30 13:25:28 crc kubenswrapper[5039]: I0130 13:25:28.840857 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-67cb-account-create-update-rrs4s" event={"ID":"33369def-50c6-4216-953b-e1848ff3a90a","Type":"ContainerDied","Data":"a21a34b25da48e58cbf267f6a56faea32936fec24341c8fc65c0c8fff27a3bda"} Jan 30 13:25:28 crc kubenswrapper[5039]: I0130 13:25:28.843598 5039 generic.go:334] "Generic (PLEG): container finished" podID="91bf7602-3edd-424d-a6a0-a5a1097fd3ba" containerID="bfcc2262b565fdeef1781961e54944ecdc7a599a03321990d920439a88eeee7a" exitCode=0 Jan 30 13:25:28 crc kubenswrapper[5039]: I0130 13:25:28.843652 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-lzbm7" event={"ID":"91bf7602-3edd-424d-a6a0-a5a1097fd3ba","Type":"ContainerDied","Data":"bfcc2262b565fdeef1781961e54944ecdc7a599a03321990d920439a88eeee7a"} Jan 30 13:25:28 crc kubenswrapper[5039]: I0130 13:25:28.850699 5039 generic.go:334] "Generic (PLEG): container finished" podID="c63ad167-cbf8-4da9-83c2-0c66566d7105" containerID="cc28b607e5fd23093e36b0664931b7eaf58f14e1df901b6c0316507773caa300" exitCode=0 Jan 30 13:25:28 crc kubenswrapper[5039]: I0130 13:25:28.850960 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-d4ba-account-create-update-kd24m" event={"ID":"c63ad167-cbf8-4da9-83c2-0c66566d7105","Type":"ContainerDied","Data":"cc28b607e5fd23093e36b0664931b7eaf58f14e1df901b6c0316507773caa300"} Jan 30 13:25:28 crc kubenswrapper[5039]: I0130 13:25:28.852671 5039 generic.go:334] "Generic (PLEG): container finished" podID="4268e11c-c142-453b-a3c1-15696f9b21e5" containerID="a4189b197cff1acafa5cc8287fb52076780f0f19778e82f8a020ff4743e7023b" exitCode=0 Jan 30 13:25:28 crc kubenswrapper[5039]: I0130 13:25:28.852734 5039 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/nova-api-4e5c-account-create-update-r4vnt" event={"ID":"4268e11c-c142-453b-a3c1-15696f9b21e5","Type":"ContainerDied","Data":"a4189b197cff1acafa5cc8287fb52076780f0f19778e82f8a020ff4743e7023b"} Jan 30 13:25:28 crc kubenswrapper[5039]: I0130 13:25:28.860210 5039 generic.go:334] "Generic (PLEG): container finished" podID="cde91080-bc38-44b5-986f-6712c73de0ec" containerID="c88f2949fe87df8d9d04ad62f6e10def4968f2f2133ac38e643c563ccc3ea2f4" exitCode=0 Jan 30 13:25:28 crc kubenswrapper[5039]: I0130 13:25:28.860795 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-p4jkx" event={"ID":"cde91080-bc38-44b5-986f-6712c73de0ec","Type":"ContainerDied","Data":"c88f2949fe87df8d9d04ad62f6e10def4968f2f2133ac38e643c563ccc3ea2f4"} Jan 30 13:25:29 crc kubenswrapper[5039]: I0130 13:25:29.272473 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-dtths" Jan 30 13:25:29 crc kubenswrapper[5039]: I0130 13:25:29.427839 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcxxt\" (UniqueName: \"kubernetes.io/projected/21db3ccc-3757-44b9-9f63-835f790c4321-kube-api-access-kcxxt\") pod \"21db3ccc-3757-44b9-9f63-835f790c4321\" (UID: \"21db3ccc-3757-44b9-9f63-835f790c4321\") " Jan 30 13:25:29 crc kubenswrapper[5039]: I0130 13:25:29.427960 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21db3ccc-3757-44b9-9f63-835f790c4321-operator-scripts\") pod \"21db3ccc-3757-44b9-9f63-835f790c4321\" (UID: \"21db3ccc-3757-44b9-9f63-835f790c4321\") " Jan 30 13:25:29 crc kubenswrapper[5039]: I0130 13:25:29.429071 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21db3ccc-3757-44b9-9f63-835f790c4321-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "21db3ccc-3757-44b9-9f63-835f790c4321" (UID: "21db3ccc-3757-44b9-9f63-835f790c4321"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:25:29 crc kubenswrapper[5039]: I0130 13:25:29.448918 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21db3ccc-3757-44b9-9f63-835f790c4321-kube-api-access-kcxxt" (OuterVolumeSpecName: "kube-api-access-kcxxt") pod "21db3ccc-3757-44b9-9f63-835f790c4321" (UID: "21db3ccc-3757-44b9-9f63-835f790c4321"). InnerVolumeSpecName "kube-api-access-kcxxt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:25:29 crc kubenswrapper[5039]: I0130 13:25:29.530486 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21db3ccc-3757-44b9-9f63-835f790c4321-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:29 crc kubenswrapper[5039]: I0130 13:25:29.530526 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcxxt\" (UniqueName: \"kubernetes.io/projected/21db3ccc-3757-44b9-9f63-835f790c4321-kube-api-access-kcxxt\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:29 crc kubenswrapper[5039]: I0130 13:25:29.870445 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-dtths" event={"ID":"21db3ccc-3757-44b9-9f63-835f790c4321","Type":"ContainerDied","Data":"426dac086386a4ee224e7b13b606c8c983ad98cb3e52b02191ceb1830fa03580"} Jan 30 13:25:29 crc kubenswrapper[5039]: I0130 13:25:29.870496 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="426dac086386a4ee224e7b13b606c8c983ad98cb3e52b02191ceb1830fa03580" Jan 30 13:25:29 crc kubenswrapper[5039]: I0130 13:25:29.870711 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-dtths" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.349441 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-p4jkx" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.448572 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cde91080-bc38-44b5-986f-6712c73de0ec-operator-scripts\") pod \"cde91080-bc38-44b5-986f-6712c73de0ec\" (UID: \"cde91080-bc38-44b5-986f-6712c73de0ec\") " Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.448767 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nv97h\" (UniqueName: \"kubernetes.io/projected/cde91080-bc38-44b5-986f-6712c73de0ec-kube-api-access-nv97h\") pod \"cde91080-bc38-44b5-986f-6712c73de0ec\" (UID: \"cde91080-bc38-44b5-986f-6712c73de0ec\") " Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.451236 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cde91080-bc38-44b5-986f-6712c73de0ec-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cde91080-bc38-44b5-986f-6712c73de0ec" (UID: "cde91080-bc38-44b5-986f-6712c73de0ec"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.463272 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cde91080-bc38-44b5-986f-6712c73de0ec-kube-api-access-nv97h" (OuterVolumeSpecName: "kube-api-access-nv97h") pod "cde91080-bc38-44b5-986f-6712c73de0ec" (UID: "cde91080-bc38-44b5-986f-6712c73de0ec"). InnerVolumeSpecName "kube-api-access-nv97h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.550617 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nv97h\" (UniqueName: \"kubernetes.io/projected/cde91080-bc38-44b5-986f-6712c73de0ec-kube-api-access-nv97h\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.550646 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cde91080-bc38-44b5-986f-6712c73de0ec-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.597549 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-67cb-account-create-update-rrs4s" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.609056 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-d4ba-account-create-update-kd24m" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.621098 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-4e5c-account-create-update-r4vnt" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.629646 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-lzbm7" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.756654 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91bf7602-3edd-424d-a6a0-a5a1097fd3ba-operator-scripts\") pod \"91bf7602-3edd-424d-a6a0-a5a1097fd3ba\" (UID: \"91bf7602-3edd-424d-a6a0-a5a1097fd3ba\") " Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.756721 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rztd\" (UniqueName: \"kubernetes.io/projected/33369def-50c6-4216-953b-e1848ff3a90a-kube-api-access-7rztd\") pod \"33369def-50c6-4216-953b-e1848ff3a90a\" (UID: \"33369def-50c6-4216-953b-e1848ff3a90a\") " Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.756807 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zgcb\" (UniqueName: \"kubernetes.io/projected/4268e11c-c142-453b-a3c1-15696f9b21e5-kube-api-access-4zgcb\") pod \"4268e11c-c142-453b-a3c1-15696f9b21e5\" (UID: \"4268e11c-c142-453b-a3c1-15696f9b21e5\") " Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.756920 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33369def-50c6-4216-953b-e1848ff3a90a-operator-scripts\") pod \"33369def-50c6-4216-953b-e1848ff3a90a\" (UID: \"33369def-50c6-4216-953b-e1848ff3a90a\") " Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.756960 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mr4kn\" (UniqueName: \"kubernetes.io/projected/c63ad167-cbf8-4da9-83c2-0c66566d7105-kube-api-access-mr4kn\") pod \"c63ad167-cbf8-4da9-83c2-0c66566d7105\" (UID: \"c63ad167-cbf8-4da9-83c2-0c66566d7105\") " Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.757052 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c63ad167-cbf8-4da9-83c2-0c66566d7105-operator-scripts\") pod \"c63ad167-cbf8-4da9-83c2-0c66566d7105\" (UID: 
\"c63ad167-cbf8-4da9-83c2-0c66566d7105\") " Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.757083 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4268e11c-c142-453b-a3c1-15696f9b21e5-operator-scripts\") pod \"4268e11c-c142-453b-a3c1-15696f9b21e5\" (UID: \"4268e11c-c142-453b-a3c1-15696f9b21e5\") " Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.757136 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk2wz\" (UniqueName: \"kubernetes.io/projected/91bf7602-3edd-424d-a6a0-a5a1097fd3ba-kube-api-access-tk2wz\") pod \"91bf7602-3edd-424d-a6a0-a5a1097fd3ba\" (UID: \"91bf7602-3edd-424d-a6a0-a5a1097fd3ba\") " Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.757538 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91bf7602-3edd-424d-a6a0-a5a1097fd3ba-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "91bf7602-3edd-424d-a6a0-a5a1097fd3ba" (UID: "91bf7602-3edd-424d-a6a0-a5a1097fd3ba"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.757675 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c63ad167-cbf8-4da9-83c2-0c66566d7105-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c63ad167-cbf8-4da9-83c2-0c66566d7105" (UID: "c63ad167-cbf8-4da9-83c2-0c66566d7105"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.757807 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4268e11c-c142-453b-a3c1-15696f9b21e5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4268e11c-c142-453b-a3c1-15696f9b21e5" (UID: "4268e11c-c142-453b-a3c1-15696f9b21e5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.758091 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33369def-50c6-4216-953b-e1848ff3a90a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "33369def-50c6-4216-953b-e1848ff3a90a" (UID: "33369def-50c6-4216-953b-e1848ff3a90a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.771275 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91bf7602-3edd-424d-a6a0-a5a1097fd3ba-kube-api-access-tk2wz" (OuterVolumeSpecName: "kube-api-access-tk2wz") pod "91bf7602-3edd-424d-a6a0-a5a1097fd3ba" (UID: "91bf7602-3edd-424d-a6a0-a5a1097fd3ba"). InnerVolumeSpecName "kube-api-access-tk2wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.773158 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4268e11c-c142-453b-a3c1-15696f9b21e5-kube-api-access-4zgcb" (OuterVolumeSpecName: "kube-api-access-4zgcb") pod "4268e11c-c142-453b-a3c1-15696f9b21e5" (UID: "4268e11c-c142-453b-a3c1-15696f9b21e5"). InnerVolumeSpecName "kube-api-access-4zgcb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.774302 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c63ad167-cbf8-4da9-83c2-0c66566d7105-kube-api-access-mr4kn" (OuterVolumeSpecName: "kube-api-access-mr4kn") pod "c63ad167-cbf8-4da9-83c2-0c66566d7105" (UID: "c63ad167-cbf8-4da9-83c2-0c66566d7105"). InnerVolumeSpecName "kube-api-access-mr4kn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.804251 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33369def-50c6-4216-953b-e1848ff3a90a-kube-api-access-7rztd" (OuterVolumeSpecName: "kube-api-access-7rztd") pod "33369def-50c6-4216-953b-e1848ff3a90a" (UID: "33369def-50c6-4216-953b-e1848ff3a90a"). InnerVolumeSpecName "kube-api-access-7rztd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.859181 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91bf7602-3edd-424d-a6a0-a5a1097fd3ba-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.859211 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rztd\" (UniqueName: \"kubernetes.io/projected/33369def-50c6-4216-953b-e1848ff3a90a-kube-api-access-7rztd\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.859220 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zgcb\" (UniqueName: \"kubernetes.io/projected/4268e11c-c142-453b-a3c1-15696f9b21e5-kube-api-access-4zgcb\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.859229 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33369def-50c6-4216-953b-e1848ff3a90a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.859241 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mr4kn\" (UniqueName: \"kubernetes.io/projected/c63ad167-cbf8-4da9-83c2-0c66566d7105-kube-api-access-mr4kn\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.859250 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4268e11c-c142-453b-a3c1-15696f9b21e5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.859258 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c63ad167-cbf8-4da9-83c2-0c66566d7105-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.859267 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk2wz\" (UniqueName: \"kubernetes.io/projected/91bf7602-3edd-424d-a6a0-a5a1097fd3ba-kube-api-access-tk2wz\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.887136 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-4e5c-account-create-update-r4vnt" event={"ID":"4268e11c-c142-453b-a3c1-15696f9b21e5","Type":"ContainerDied","Data":"62a510ecd7c1fc0a3bfbbc56a7e59870520ffbc22ccb564f0d522a31588be3f0"} Jan 30 13:25:30 crc 
kubenswrapper[5039]: I0130 13:25:30.887162 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-4e5c-account-create-update-r4vnt" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.887180 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62a510ecd7c1fc0a3bfbbc56a7e59870520ffbc22ccb564f0d522a31588be3f0" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.889286 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-p4jkx" event={"ID":"cde91080-bc38-44b5-986f-6712c73de0ec","Type":"ContainerDied","Data":"8a666dd0c0c279c7ac16e1f87dcf374e32edfb56359a915f7383b0e400fb3c13"} Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.889331 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a666dd0c0c279c7ac16e1f87dcf374e32edfb56359a915f7383b0e400fb3c13" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.889397 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-p4jkx" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.891709 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-67cb-account-create-update-rrs4s" event={"ID":"33369def-50c6-4216-953b-e1848ff3a90a","Type":"ContainerDied","Data":"eda7a1826d5cf9e4287c182d5e1ced546eb74def651fc4e26523a040412eca75"} Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.891736 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eda7a1826d5cf9e4287c182d5e1ced546eb74def651fc4e26523a040412eca75" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.891790 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-67cb-account-create-update-rrs4s" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.903954 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-d4ba-account-create-update-kd24m" event={"ID":"c63ad167-cbf8-4da9-83c2-0c66566d7105","Type":"ContainerDied","Data":"6e0d7add3b4bf74ad62850e0957634303ce2394ceab8600d59fc0d1fe524efaa"} Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.904000 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e0d7add3b4bf74ad62850e0957634303ce2394ceab8600d59fc0d1fe524efaa" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.904077 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-d4ba-account-create-update-kd24m" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.908063 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-lzbm7" event={"ID":"91bf7602-3edd-424d-a6a0-a5a1097fd3ba","Type":"ContainerDied","Data":"6938c0fa33ad79d6c1eb8fdd28ab6a70e1ce2548c6bbe9944fbaccb121724679"} Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.908100 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6938c0fa33ad79d6c1eb8fdd28ab6a70e1ce2548c6bbe9944fbaccb121724679" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.908164 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-lzbm7" Jan 30 13:25:32 crc kubenswrapper[5039]: I0130 13:25:32.274617 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 30 13:25:34 crc kubenswrapper[5039]: I0130 13:25:34.381034 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-757b86cf47-brmgg" Jan 30 13:25:34 crc kubenswrapper[5039]: I0130 13:25:34.382931 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-757b86cf47-brmgg" Jan 30 13:25:34 crc kubenswrapper[5039]: I0130 13:25:34.969720 5039 generic.go:334] "Generic (PLEG): container finished" podID="53390b3b-ff7d-4f71-8599-b1deebe3facf" containerID="de827f873ae9238cd409ff2b82b58617758301702a6a69759d9af5ee00eb8b94" exitCode=137 Jan 30 13:25:34 crc kubenswrapper[5039]: I0130 13:25:34.969797 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53390b3b-ff7d-4f71-8599-b1deebe3facf","Type":"ContainerDied","Data":"de827f873ae9238cd409ff2b82b58617758301702a6a69759d9af5ee00eb8b94"} Jan 30 13:25:37 crc kubenswrapper[5039]: I0130 13:25:37.249165 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-fz5fp"] Jan 30 13:25:37 crc kubenswrapper[5039]: E0130 13:25:37.249977 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33369def-50c6-4216-953b-e1848ff3a90a" containerName="mariadb-account-create-update" Jan 30 13:25:37 crc kubenswrapper[5039]: I0130 13:25:37.249996 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="33369def-50c6-4216-953b-e1848ff3a90a" containerName="mariadb-account-create-update" Jan 30 13:25:37 crc kubenswrapper[5039]: E0130 13:25:37.250033 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91bf7602-3edd-424d-a6a0-a5a1097fd3ba" containerName="mariadb-database-create" Jan 30 13:25:37 crc kubenswrapper[5039]: I0130 13:25:37.250042 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="91bf7602-3edd-424d-a6a0-a5a1097fd3ba" containerName="mariadb-database-create" Jan 30 13:25:37 crc kubenswrapper[5039]: E0130 13:25:37.250059 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c63ad167-cbf8-4da9-83c2-0c66566d7105" containerName="mariadb-account-create-update" Jan 30 13:25:37 crc kubenswrapper[5039]: I0130 13:25:37.250066 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="c63ad167-cbf8-4da9-83c2-0c66566d7105" containerName="mariadb-account-create-update" Jan 30 13:25:37 crc kubenswrapper[5039]: E0130 13:25:37.250078 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21db3ccc-3757-44b9-9f63-835f790c4321" containerName="mariadb-database-create" Jan 30 13:25:37 crc kubenswrapper[5039]: I0130 13:25:37.250084 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="21db3ccc-3757-44b9-9f63-835f790c4321" containerName="mariadb-database-create" Jan 30 13:25:37 crc kubenswrapper[5039]: E0130 13:25:37.250106 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4268e11c-c142-453b-a3c1-15696f9b21e5" containerName="mariadb-account-create-update" Jan 30 13:25:37 crc kubenswrapper[5039]: I0130 13:25:37.250114 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="4268e11c-c142-453b-a3c1-15696f9b21e5" containerName="mariadb-account-create-update" Jan 30 13:25:37 crc kubenswrapper[5039]: E0130 13:25:37.250124 5039 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="cde91080-bc38-44b5-986f-6712c73de0ec" containerName="mariadb-database-create" Jan 30 13:25:37 crc kubenswrapper[5039]: I0130 13:25:37.250130 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="cde91080-bc38-44b5-986f-6712c73de0ec" containerName="mariadb-database-create" Jan 30 13:25:37 crc kubenswrapper[5039]: I0130 13:25:37.250351 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="cde91080-bc38-44b5-986f-6712c73de0ec" containerName="mariadb-database-create" Jan 30 13:25:37 crc kubenswrapper[5039]: I0130 13:25:37.250365 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="33369def-50c6-4216-953b-e1848ff3a90a" containerName="mariadb-account-create-update" Jan 30 13:25:37 crc kubenswrapper[5039]: I0130 13:25:37.250381 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="91bf7602-3edd-424d-a6a0-a5a1097fd3ba" containerName="mariadb-database-create" Jan 30 13:25:37 crc kubenswrapper[5039]: I0130 13:25:37.250396 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="21db3ccc-3757-44b9-9f63-835f790c4321" containerName="mariadb-database-create" Jan 30 13:25:37 crc kubenswrapper[5039]: I0130 13:25:37.250404 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="c63ad167-cbf8-4da9-83c2-0c66566d7105" containerName="mariadb-account-create-update" Jan 30 13:25:37 crc kubenswrapper[5039]: I0130 13:25:37.250422 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="4268e11c-c142-453b-a3c1-15696f9b21e5" containerName="mariadb-account-create-update" Jan 30 13:25:37 crc kubenswrapper[5039]: I0130 13:25:37.251213 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-fz5fp" Jan 30 13:25:37 crc kubenswrapper[5039]: I0130 13:25:37.254788 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 30 13:25:37 crc kubenswrapper[5039]: I0130 13:25:37.260055 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 30 13:25:37 crc kubenswrapper[5039]: I0130 13:25:37.260318 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-zd7bd" Jan 30 13:25:37 crc kubenswrapper[5039]: I0130 13:25:37.283983 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-fz5fp"] Jan 30 13:25:37 crc kubenswrapper[5039]: I0130 13:25:37.371673 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b85bd45-6f76-4ac8-8df6-cdbb93636b44-config-data\") pod \"nova-cell0-conductor-db-sync-fz5fp\" (UID: \"5b85bd45-6f76-4ac8-8df6-cdbb93636b44\") " pod="openstack/nova-cell0-conductor-db-sync-fz5fp" Jan 30 13:25:37 crc kubenswrapper[5039]: I0130 13:25:37.372035 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gt8n\" (UniqueName: \"kubernetes.io/projected/5b85bd45-6f76-4ac8-8df6-cdbb93636b44-kube-api-access-6gt8n\") pod \"nova-cell0-conductor-db-sync-fz5fp\" (UID: \"5b85bd45-6f76-4ac8-8df6-cdbb93636b44\") " pod="openstack/nova-cell0-conductor-db-sync-fz5fp" Jan 30 13:25:37 crc kubenswrapper[5039]: I0130 13:25:37.372184 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5b85bd45-6f76-4ac8-8df6-cdbb93636b44-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-fz5fp\" (UID: \"5b85bd45-6f76-4ac8-8df6-cdbb93636b44\") " pod="openstack/nova-cell0-conductor-db-sync-fz5fp" Jan 30 13:25:37 crc kubenswrapper[5039]: I0130 13:25:37.372336 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b85bd45-6f76-4ac8-8df6-cdbb93636b44-scripts\") pod \"nova-cell0-conductor-db-sync-fz5fp\" (UID: \"5b85bd45-6f76-4ac8-8df6-cdbb93636b44\") " pod="openstack/nova-cell0-conductor-db-sync-fz5fp" Jan 30 13:25:37 crc kubenswrapper[5039]: I0130 13:25:37.474068 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b85bd45-6f76-4ac8-8df6-cdbb93636b44-config-data\") pod \"nova-cell0-conductor-db-sync-fz5fp\" (UID: \"5b85bd45-6f76-4ac8-8df6-cdbb93636b44\") " pod="openstack/nova-cell0-conductor-db-sync-fz5fp" Jan 30 13:25:37 crc kubenswrapper[5039]: I0130 13:25:37.474593 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gt8n\" (UniqueName: \"kubernetes.io/projected/5b85bd45-6f76-4ac8-8df6-cdbb93636b44-kube-api-access-6gt8n\") pod \"nova-cell0-conductor-db-sync-fz5fp\" (UID: \"5b85bd45-6f76-4ac8-8df6-cdbb93636b44\") " pod="openstack/nova-cell0-conductor-db-sync-fz5fp" Jan 30 13:25:37 crc kubenswrapper[5039]: I0130 13:25:37.474697 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b85bd45-6f76-4ac8-8df6-cdbb93636b44-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-fz5fp\" (UID: \"5b85bd45-6f76-4ac8-8df6-cdbb93636b44\") " pod="openstack/nova-cell0-conductor-db-sync-fz5fp" Jan 30 13:25:37 crc kubenswrapper[5039]: I0130 13:25:37.474834 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b85bd45-6f76-4ac8-8df6-cdbb93636b44-scripts\") pod \"nova-cell0-conductor-db-sync-fz5fp\" (UID: \"5b85bd45-6f76-4ac8-8df6-cdbb93636b44\") " pod="openstack/nova-cell0-conductor-db-sync-fz5fp" Jan 30 13:25:37 crc kubenswrapper[5039]: I0130 13:25:37.481086 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b85bd45-6f76-4ac8-8df6-cdbb93636b44-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-fz5fp\" (UID: \"5b85bd45-6f76-4ac8-8df6-cdbb93636b44\") " pod="openstack/nova-cell0-conductor-db-sync-fz5fp" Jan 30 13:25:37 crc kubenswrapper[5039]: I0130 13:25:37.481096 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b85bd45-6f76-4ac8-8df6-cdbb93636b44-config-data\") pod \"nova-cell0-conductor-db-sync-fz5fp\" (UID: \"5b85bd45-6f76-4ac8-8df6-cdbb93636b44\") " pod="openstack/nova-cell0-conductor-db-sync-fz5fp" Jan 30 13:25:37 crc kubenswrapper[5039]: I0130 13:25:37.484480 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b85bd45-6f76-4ac8-8df6-cdbb93636b44-scripts\") pod \"nova-cell0-conductor-db-sync-fz5fp\" (UID: \"5b85bd45-6f76-4ac8-8df6-cdbb93636b44\") " pod="openstack/nova-cell0-conductor-db-sync-fz5fp" Jan 30 13:25:37 crc kubenswrapper[5039]: I0130 13:25:37.492637 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gt8n\" 
(UniqueName: \"kubernetes.io/projected/5b85bd45-6f76-4ac8-8df6-cdbb93636b44-kube-api-access-6gt8n\") pod \"nova-cell0-conductor-db-sync-fz5fp\" (UID: \"5b85bd45-6f76-4ac8-8df6-cdbb93636b44\") " pod="openstack/nova-cell0-conductor-db-sync-fz5fp" Jan 30 13:25:37 crc kubenswrapper[5039]: I0130 13:25:37.567582 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-fz5fp" Jan 30 13:25:39 crc kubenswrapper[5039]: I0130 13:25:39.915760 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-fz5fp"] Jan 30 13:25:39 crc kubenswrapper[5039]: W0130 13:25:39.918538 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b85bd45_6f76_4ac8_8df6_cdbb93636b44.slice/crio-60ff2c1ebd6d2f11884a30d996e34cd106da15a2e5993828ab1afa6025ab5199 WatchSource:0}: Error finding container 60ff2c1ebd6d2f11884a30d996e34cd106da15a2e5993828ab1afa6025ab5199: Status 404 returned error can't find the container with id 60ff2c1ebd6d2f11884a30d996e34cd106da15a2e5993828ab1afa6025ab5199 Jan 30 13:25:39 crc kubenswrapper[5039]: I0130 13:25:39.977997 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-8654cc59b8-vwcl9" Jan 30 13:25:40 crc kubenswrapper[5039]: I0130 13:25:40.042966 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-fz5fp" event={"ID":"5b85bd45-6f76-4ac8-8df6-cdbb93636b44","Type":"ContainerStarted","Data":"60ff2c1ebd6d2f11884a30d996e34cd106da15a2e5993828ab1afa6025ab5199"} Jan 30 13:25:40 crc kubenswrapper[5039]: I0130 13:25:40.789917 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 13:25:40 crc kubenswrapper[5039]: I0130 13:25:40.845302 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53390b3b-ff7d-4f71-8599-b1deebe3facf-scripts\") pod \"53390b3b-ff7d-4f71-8599-b1deebe3facf\" (UID: \"53390b3b-ff7d-4f71-8599-b1deebe3facf\") " Jan 30 13:25:40 crc kubenswrapper[5039]: I0130 13:25:40.845406 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53390b3b-ff7d-4f71-8599-b1deebe3facf-log-httpd\") pod \"53390b3b-ff7d-4f71-8599-b1deebe3facf\" (UID: \"53390b3b-ff7d-4f71-8599-b1deebe3facf\") " Jan 30 13:25:40 crc kubenswrapper[5039]: I0130 13:25:40.845520 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53390b3b-ff7d-4f71-8599-b1deebe3facf-combined-ca-bundle\") pod \"53390b3b-ff7d-4f71-8599-b1deebe3facf\" (UID: \"53390b3b-ff7d-4f71-8599-b1deebe3facf\") " Jan 30 13:25:40 crc kubenswrapper[5039]: I0130 13:25:40.845600 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/53390b3b-ff7d-4f71-8599-b1deebe3facf-sg-core-conf-yaml\") pod \"53390b3b-ff7d-4f71-8599-b1deebe3facf\" (UID: \"53390b3b-ff7d-4f71-8599-b1deebe3facf\") " Jan 30 13:25:40 crc kubenswrapper[5039]: I0130 13:25:40.845657 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzwcc\" (UniqueName: \"kubernetes.io/projected/53390b3b-ff7d-4f71-8599-b1deebe3facf-kube-api-access-tzwcc\") pod \"53390b3b-ff7d-4f71-8599-b1deebe3facf\" (UID: 
\"53390b3b-ff7d-4f71-8599-b1deebe3facf\") " Jan 30 13:25:40 crc kubenswrapper[5039]: I0130 13:25:40.845734 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53390b3b-ff7d-4f71-8599-b1deebe3facf-run-httpd\") pod \"53390b3b-ff7d-4f71-8599-b1deebe3facf\" (UID: \"53390b3b-ff7d-4f71-8599-b1deebe3facf\") " Jan 30 13:25:40 crc kubenswrapper[5039]: I0130 13:25:40.845781 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53390b3b-ff7d-4f71-8599-b1deebe3facf-config-data\") pod \"53390b3b-ff7d-4f71-8599-b1deebe3facf\" (UID: \"53390b3b-ff7d-4f71-8599-b1deebe3facf\") " Jan 30 13:25:40 crc kubenswrapper[5039]: I0130 13:25:40.846466 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53390b3b-ff7d-4f71-8599-b1deebe3facf-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "53390b3b-ff7d-4f71-8599-b1deebe3facf" (UID: "53390b3b-ff7d-4f71-8599-b1deebe3facf"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:25:40 crc kubenswrapper[5039]: I0130 13:25:40.846628 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53390b3b-ff7d-4f71-8599-b1deebe3facf-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "53390b3b-ff7d-4f71-8599-b1deebe3facf" (UID: "53390b3b-ff7d-4f71-8599-b1deebe3facf"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:25:40 crc kubenswrapper[5039]: I0130 13:25:40.851142 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53390b3b-ff7d-4f71-8599-b1deebe3facf-kube-api-access-tzwcc" (OuterVolumeSpecName: "kube-api-access-tzwcc") pod "53390b3b-ff7d-4f71-8599-b1deebe3facf" (UID: "53390b3b-ff7d-4f71-8599-b1deebe3facf"). InnerVolumeSpecName "kube-api-access-tzwcc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:25:40 crc kubenswrapper[5039]: I0130 13:25:40.851233 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53390b3b-ff7d-4f71-8599-b1deebe3facf-scripts" (OuterVolumeSpecName: "scripts") pod "53390b3b-ff7d-4f71-8599-b1deebe3facf" (UID: "53390b3b-ff7d-4f71-8599-b1deebe3facf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:40 crc kubenswrapper[5039]: I0130 13:25:40.889766 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53390b3b-ff7d-4f71-8599-b1deebe3facf-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "53390b3b-ff7d-4f71-8599-b1deebe3facf" (UID: "53390b3b-ff7d-4f71-8599-b1deebe3facf"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:40 crc kubenswrapper[5039]: I0130 13:25:40.949421 5039 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53390b3b-ff7d-4f71-8599-b1deebe3facf-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:40 crc kubenswrapper[5039]: I0130 13:25:40.949473 5039 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/53390b3b-ff7d-4f71-8599-b1deebe3facf-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:40 crc kubenswrapper[5039]: I0130 13:25:40.949486 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzwcc\" (UniqueName: \"kubernetes.io/projected/53390b3b-ff7d-4f71-8599-b1deebe3facf-kube-api-access-tzwcc\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:40 crc kubenswrapper[5039]: I0130 13:25:40.949494 5039 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53390b3b-ff7d-4f71-8599-b1deebe3facf-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:40 crc kubenswrapper[5039]: I0130 13:25:40.949501 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53390b3b-ff7d-4f71-8599-b1deebe3facf-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:40 crc kubenswrapper[5039]: I0130 13:25:40.950848 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53390b3b-ff7d-4f71-8599-b1deebe3facf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "53390b3b-ff7d-4f71-8599-b1deebe3facf" (UID: "53390b3b-ff7d-4f71-8599-b1deebe3facf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:40 crc kubenswrapper[5039]: I0130 13:25:40.954063 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53390b3b-ff7d-4f71-8599-b1deebe3facf-config-data" (OuterVolumeSpecName: "config-data") pod "53390b3b-ff7d-4f71-8599-b1deebe3facf" (UID: "53390b3b-ff7d-4f71-8599-b1deebe3facf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.051735 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53390b3b-ff7d-4f71-8599-b1deebe3facf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.051772 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53390b3b-ff7d-4f71-8599-b1deebe3facf-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.056573 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"268ed38d-d02d-4539-be5c-f461fde5d02b","Type":"ContainerStarted","Data":"116d072bb48e4b065b5de330f7fd6107bd5b783a4981e9f40677abb9caf3a0b9"} Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.059900 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53390b3b-ff7d-4f71-8599-b1deebe3facf","Type":"ContainerDied","Data":"f727d9eb39628ea5d3bfc94a0f16b684d39aab6c4c5b91405196bd7c1c2c942f"} Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.059964 5039 scope.go:117] "RemoveContainer" containerID="de827f873ae9238cd409ff2b82b58617758301702a6a69759d9af5ee00eb8b94" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.060001 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.103512 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.25730528 podStartE2EDuration="17.103494095s" podCreationTimestamp="2026-01-30 13:25:24 +0000 UTC" firstStartedPulling="2026-01-30 13:25:25.696993085 +0000 UTC m=+1290.357674312" lastFinishedPulling="2026-01-30 13:25:40.5431819 +0000 UTC m=+1305.203863127" observedRunningTime="2026-01-30 13:25:41.070705212 +0000 UTC m=+1305.731386439" watchObservedRunningTime="2026-01-30 13:25:41.103494095 +0000 UTC m=+1305.764175322" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.112227 5039 scope.go:117] "RemoveContainer" containerID="ed850552779a01c9a61fd4652e4d461d1eeae6398abc889defbeefacc95f8283" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.125260 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.137763 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.140992 5039 scope.go:117] "RemoveContainer" containerID="6d4ad33b26e95108fb45b090ba7cbe025c76f54a84e9e566db7be7d95d4cdba9" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.147155 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:25:41 crc kubenswrapper[5039]: E0130 13:25:41.155435 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53390b3b-ff7d-4f71-8599-b1deebe3facf" containerName="ceilometer-notification-agent" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.155464 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="53390b3b-ff7d-4f71-8599-b1deebe3facf" containerName="ceilometer-notification-agent" Jan 30 13:25:41 crc kubenswrapper[5039]: E0130 13:25:41.155474 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53390b3b-ff7d-4f71-8599-b1deebe3facf" 
containerName="proxy-httpd" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.155481 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="53390b3b-ff7d-4f71-8599-b1deebe3facf" containerName="proxy-httpd" Jan 30 13:25:41 crc kubenswrapper[5039]: E0130 13:25:41.155492 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53390b3b-ff7d-4f71-8599-b1deebe3facf" containerName="sg-core" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.155498 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="53390b3b-ff7d-4f71-8599-b1deebe3facf" containerName="sg-core" Jan 30 13:25:41 crc kubenswrapper[5039]: E0130 13:25:41.155518 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53390b3b-ff7d-4f71-8599-b1deebe3facf" containerName="ceilometer-central-agent" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.155523 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="53390b3b-ff7d-4f71-8599-b1deebe3facf" containerName="ceilometer-central-agent" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.155699 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="53390b3b-ff7d-4f71-8599-b1deebe3facf" containerName="ceilometer-notification-agent" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.155713 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="53390b3b-ff7d-4f71-8599-b1deebe3facf" containerName="proxy-httpd" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.155726 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="53390b3b-ff7d-4f71-8599-b1deebe3facf" containerName="sg-core" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.155735 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="53390b3b-ff7d-4f71-8599-b1deebe3facf" containerName="ceilometer-central-agent" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.157299 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.157894 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.161908 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.162120 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.171177 5039 scope.go:117] "RemoveContainer" containerID="12a01c6dc6a842b1829ed3854209adde60667039bf9946c69457cc43d120fa6c" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.256049 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv44z\" (UniqueName: \"kubernetes.io/projected/f4991c7a-c91c-4684-be02-b3d7d365fdb6-kube-api-access-rv44z\") pod \"ceilometer-0\" (UID: \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\") " pod="openstack/ceilometer-0" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.256098 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4991c7a-c91c-4684-be02-b3d7d365fdb6-scripts\") pod \"ceilometer-0\" (UID: \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\") " pod="openstack/ceilometer-0" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.256383 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4991c7a-c91c-4684-be02-b3d7d365fdb6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\") " pod="openstack/ceilometer-0" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.256451 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4991c7a-c91c-4684-be02-b3d7d365fdb6-config-data\") pod \"ceilometer-0\" (UID: \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\") " pod="openstack/ceilometer-0" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.256500 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4991c7a-c91c-4684-be02-b3d7d365fdb6-run-httpd\") pod \"ceilometer-0\" (UID: \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\") " pod="openstack/ceilometer-0" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.256568 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f4991c7a-c91c-4684-be02-b3d7d365fdb6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\") " pod="openstack/ceilometer-0" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.256720 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4991c7a-c91c-4684-be02-b3d7d365fdb6-log-httpd\") pod \"ceilometer-0\" (UID: \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\") " pod="openstack/ceilometer-0" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.359185 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv44z\" (UniqueName: 
\"kubernetes.io/projected/f4991c7a-c91c-4684-be02-b3d7d365fdb6-kube-api-access-rv44z\") pod \"ceilometer-0\" (UID: \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\") " pod="openstack/ceilometer-0" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.359276 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4991c7a-c91c-4684-be02-b3d7d365fdb6-scripts\") pod \"ceilometer-0\" (UID: \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\") " pod="openstack/ceilometer-0" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.359459 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4991c7a-c91c-4684-be02-b3d7d365fdb6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\") " pod="openstack/ceilometer-0" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.359496 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4991c7a-c91c-4684-be02-b3d7d365fdb6-config-data\") pod \"ceilometer-0\" (UID: \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\") " pod="openstack/ceilometer-0" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.359538 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4991c7a-c91c-4684-be02-b3d7d365fdb6-run-httpd\") pod \"ceilometer-0\" (UID: \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\") " pod="openstack/ceilometer-0" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.359582 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f4991c7a-c91c-4684-be02-b3d7d365fdb6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\") " pod="openstack/ceilometer-0" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.359679 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4991c7a-c91c-4684-be02-b3d7d365fdb6-log-httpd\") pod \"ceilometer-0\" (UID: \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\") " pod="openstack/ceilometer-0" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.360403 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4991c7a-c91c-4684-be02-b3d7d365fdb6-log-httpd\") pod \"ceilometer-0\" (UID: \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\") " pod="openstack/ceilometer-0" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.360657 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4991c7a-c91c-4684-be02-b3d7d365fdb6-run-httpd\") pod \"ceilometer-0\" (UID: \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\") " pod="openstack/ceilometer-0" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.372145 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f4991c7a-c91c-4684-be02-b3d7d365fdb6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\") " pod="openstack/ceilometer-0" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.373307 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f4991c7a-c91c-4684-be02-b3d7d365fdb6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\") " pod="openstack/ceilometer-0" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.374192 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4991c7a-c91c-4684-be02-b3d7d365fdb6-scripts\") pod \"ceilometer-0\" (UID: \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\") " pod="openstack/ceilometer-0" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.376380 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4991c7a-c91c-4684-be02-b3d7d365fdb6-config-data\") pod \"ceilometer-0\" (UID: \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\") " pod="openstack/ceilometer-0" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.396098 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rv44z\" (UniqueName: \"kubernetes.io/projected/f4991c7a-c91c-4684-be02-b3d7d365fdb6-kube-api-access-rv44z\") pod \"ceilometer-0\" (UID: \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\") " pod="openstack/ceilometer-0" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.482315 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.974679 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:25:42 crc kubenswrapper[5039]: I0130 13:25:42.072367 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4991c7a-c91c-4684-be02-b3d7d365fdb6","Type":"ContainerStarted","Data":"7447349b2940b6fe4ba0f0b6670367fa5bd036459156596b3c022012f2f8fde5"} Jan 30 13:25:42 crc kubenswrapper[5039]: I0130 13:25:42.107775 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53390b3b-ff7d-4f71-8599-b1deebe3facf" path="/var/lib/kubelet/pods/53390b3b-ff7d-4f71-8599-b1deebe3facf/volumes" Jan 30 13:25:43 crc kubenswrapper[5039]: I0130 13:25:43.083784 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4991c7a-c91c-4684-be02-b3d7d365fdb6","Type":"ContainerStarted","Data":"44f8487734c8818771cfd80ec15a821a492003f73837c8738af2a1aa5143c8bc"} Jan 30 13:25:44 crc kubenswrapper[5039]: I0130 13:25:44.110331 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-75df786d6f-7k65j" Jan 30 13:25:44 crc kubenswrapper[5039]: I0130 13:25:44.110652 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4991c7a-c91c-4684-be02-b3d7d365fdb6","Type":"ContainerStarted","Data":"1aecde807055a2f6230f3eccc93b9a3bcc3abf2a29a9fa3c4132dcb8712c3e96"} Jan 30 13:25:44 crc kubenswrapper[5039]: I0130 13:25:44.177681 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-8654cc59b8-vwcl9"] Jan 30 13:25:44 crc kubenswrapper[5039]: I0130 13:25:44.177940 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-8654cc59b8-vwcl9" podUID="17a4f926-925d-44d3-855f-9387166c771b" containerName="neutron-api" containerID="cri-o://edaefd1a89887279dad28e1db61904595b192742b216d6f7309a9619e0f8dedd" gracePeriod=30 Jan 30 13:25:44 crc kubenswrapper[5039]: I0130 13:25:44.178439 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-8654cc59b8-vwcl9" 
podUID="17a4f926-925d-44d3-855f-9387166c771b" containerName="neutron-httpd" containerID="cri-o://a3a0a1f75a6f4dcbb52afd8df7edb65031a1cf257acc4eec70a696fd62ca526e" gracePeriod=30 Jan 30 13:25:45 crc kubenswrapper[5039]: I0130 13:25:45.114005 5039 generic.go:334] "Generic (PLEG): container finished" podID="17a4f926-925d-44d3-855f-9387166c771b" containerID="a3a0a1f75a6f4dcbb52afd8df7edb65031a1cf257acc4eec70a696fd62ca526e" exitCode=0 Jan 30 13:25:45 crc kubenswrapper[5039]: I0130 13:25:45.114061 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8654cc59b8-vwcl9" event={"ID":"17a4f926-925d-44d3-855f-9387166c771b","Type":"ContainerDied","Data":"a3a0a1f75a6f4dcbb52afd8df7edb65031a1cf257acc4eec70a696fd62ca526e"} Jan 30 13:25:45 crc kubenswrapper[5039]: I0130 13:25:45.542841 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 13:25:45 crc kubenswrapper[5039]: I0130 13:25:45.551389 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8" containerName="glance-log" containerID="cri-o://245f89603e303def55c225cc5f8038a2e1cdc37a5e59020c015eaa2455df9080" gracePeriod=30 Jan 30 13:25:45 crc kubenswrapper[5039]: I0130 13:25:45.551446 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8" containerName="glance-httpd" containerID="cri-o://dc20e421b08a04879753b418b4d32131c6f7dca953c89ee7f8523689c6edc089" gracePeriod=30 Jan 30 13:25:45 crc kubenswrapper[5039]: E0130 13:25:45.720409 5039 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba7eaf8d_30d2_4f95_b189_c3e7b70f0df8.slice/crio-245f89603e303def55c225cc5f8038a2e1cdc37a5e59020c015eaa2455df9080.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba7eaf8d_30d2_4f95_b189_c3e7b70f0df8.slice/crio-conmon-245f89603e303def55c225cc5f8038a2e1cdc37a5e59020c015eaa2455df9080.scope\": RecentStats: unable to find data in memory cache]" Jan 30 13:25:46 crc kubenswrapper[5039]: I0130 13:25:46.128252 5039 generic.go:334] "Generic (PLEG): container finished" podID="ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8" containerID="245f89603e303def55c225cc5f8038a2e1cdc37a5e59020c015eaa2455df9080" exitCode=143 Jan 30 13:25:46 crc kubenswrapper[5039]: I0130 13:25:46.128303 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8","Type":"ContainerDied","Data":"245f89603e303def55c225cc5f8038a2e1cdc37a5e59020c015eaa2455df9080"} Jan 30 13:25:46 crc kubenswrapper[5039]: I0130 13:25:46.420027 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 13:25:46 crc kubenswrapper[5039]: I0130 13:25:46.423405 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="0b7ef7fc-8e87-46f9-8a77-63ac3e662a50" containerName="glance-httpd" containerID="cri-o://fa0344468db79f2813d45adb6e49a3b4fc94b41cec546eb7b376634605c9910a" gracePeriod=30 Jan 30 13:25:46 crc kubenswrapper[5039]: I0130 13:25:46.423579 5039 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/glance-default-internal-api-0" podUID="0b7ef7fc-8e87-46f9-8a77-63ac3e662a50" containerName="glance-log" containerID="cri-o://1b6ddf71d9e166fbfe5229b7bdb0a93aad6a004b8fc813b69a73db6d0199eeb9" gracePeriod=30 Jan 30 13:25:47 crc kubenswrapper[5039]: I0130 13:25:47.139800 5039 generic.go:334] "Generic (PLEG): container finished" podID="0b7ef7fc-8e87-46f9-8a77-63ac3e662a50" containerID="1b6ddf71d9e166fbfe5229b7bdb0a93aad6a004b8fc813b69a73db6d0199eeb9" exitCode=143 Jan 30 13:25:47 crc kubenswrapper[5039]: I0130 13:25:47.139852 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50","Type":"ContainerDied","Data":"1b6ddf71d9e166fbfe5229b7bdb0a93aad6a004b8fc813b69a73db6d0199eeb9"} Jan 30 13:25:47 crc kubenswrapper[5039]: I0130 13:25:47.757742 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:25:49 crc kubenswrapper[5039]: I0130 13:25:49.185175 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4991c7a-c91c-4684-be02-b3d7d365fdb6","Type":"ContainerStarted","Data":"df9330948a1f488d19f65551764e201f404a55ac822a2153ab27265b54b0d48d"} Jan 30 13:25:49 crc kubenswrapper[5039]: I0130 13:25:49.192509 5039 generic.go:334] "Generic (PLEG): container finished" podID="ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8" containerID="dc20e421b08a04879753b418b4d32131c6f7dca953c89ee7f8523689c6edc089" exitCode=0 Jan 30 13:25:49 crc kubenswrapper[5039]: I0130 13:25:49.192592 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8","Type":"ContainerDied","Data":"dc20e421b08a04879753b418b4d32131c6f7dca953c89ee7f8523689c6edc089"} Jan 30 13:25:49 crc kubenswrapper[5039]: I0130 13:25:49.194750 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-fz5fp" event={"ID":"5b85bd45-6f76-4ac8-8df6-cdbb93636b44","Type":"ContainerStarted","Data":"373eb290a2e94fa950875c1350fb614111156e816473414a72b8b40e8f7da301"} Jan 30 13:25:49 crc kubenswrapper[5039]: I0130 13:25:49.270559 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 13:25:49 crc kubenswrapper[5039]: I0130 13:25:49.298477 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-fz5fp" podStartSLOduration=3.33117127 podStartE2EDuration="12.298456507s" podCreationTimestamp="2026-01-30 13:25:37 +0000 UTC" firstStartedPulling="2026-01-30 13:25:39.922192164 +0000 UTC m=+1304.582873401" lastFinishedPulling="2026-01-30 13:25:48.889477411 +0000 UTC m=+1313.550158638" observedRunningTime="2026-01-30 13:25:49.216333731 +0000 UTC m=+1313.877014958" watchObservedRunningTime="2026-01-30 13:25:49.298456507 +0000 UTC m=+1313.959137734" Jan 30 13:25:49 crc kubenswrapper[5039]: I0130 13:25:49.325507 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-config-data\") pod \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\" (UID: \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\") " Jan 30 13:25:49 crc kubenswrapper[5039]: I0130 13:25:49.325635 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqwhv\" (UniqueName: \"kubernetes.io/projected/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-kube-api-access-gqwhv\") pod \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\" (UID: \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\") " Jan 30 13:25:49 crc kubenswrapper[5039]: I0130 13:25:49.325678 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-httpd-run\") pod \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\" (UID: \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\") " Jan 30 13:25:49 crc kubenswrapper[5039]: I0130 13:25:49.325695 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-logs\") pod \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\" (UID: \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\") " Jan 30 13:25:49 crc kubenswrapper[5039]: I0130 13:25:49.325811 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-combined-ca-bundle\") pod \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\" (UID: \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\") " Jan 30 13:25:49 crc kubenswrapper[5039]: I0130 13:25:49.325866 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\" (UID: \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\") " Jan 30 13:25:49 crc kubenswrapper[5039]: I0130 13:25:49.325899 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-public-tls-certs\") pod \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\" (UID: \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\") " Jan 30 13:25:49 crc kubenswrapper[5039]: I0130 13:25:49.325925 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-scripts\") pod \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\" (UID: \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\") " Jan 30 13:25:49 crc kubenswrapper[5039]: I0130 13:25:49.327790 5039 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8" (UID: "ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:25:49 crc kubenswrapper[5039]: I0130 13:25:49.333356 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-logs" (OuterVolumeSpecName: "logs") pod "ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8" (UID: "ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:25:49 crc kubenswrapper[5039]: I0130 13:25:49.334119 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8" (UID: "ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 13:25:49 crc kubenswrapper[5039]: I0130 13:25:49.334141 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-scripts" (OuterVolumeSpecName: "scripts") pod "ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8" (UID: "ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:49 crc kubenswrapper[5039]: I0130 13:25:49.334234 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-kube-api-access-gqwhv" (OuterVolumeSpecName: "kube-api-access-gqwhv") pod "ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8" (UID: "ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8"). InnerVolumeSpecName "kube-api-access-gqwhv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:25:49 crc kubenswrapper[5039]: I0130 13:25:49.372480 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8" (UID: "ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:49 crc kubenswrapper[5039]: I0130 13:25:49.386209 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8" (UID: "ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:49 crc kubenswrapper[5039]: I0130 13:25:49.399921 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-config-data" (OuterVolumeSpecName: "config-data") pod "ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8" (UID: "ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:49 crc kubenswrapper[5039]: I0130 13:25:49.428299 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqwhv\" (UniqueName: \"kubernetes.io/projected/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-kube-api-access-gqwhv\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:49 crc kubenswrapper[5039]: I0130 13:25:49.428340 5039 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:49 crc kubenswrapper[5039]: I0130 13:25:49.428349 5039 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-logs\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:49 crc kubenswrapper[5039]: I0130 13:25:49.428357 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:49 crc kubenswrapper[5039]: I0130 13:25:49.428390 5039 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 30 13:25:49 crc kubenswrapper[5039]: I0130 13:25:49.428399 5039 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:49 crc kubenswrapper[5039]: I0130 13:25:49.428408 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:49 crc kubenswrapper[5039]: I0130 13:25:49.428420 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:49 crc kubenswrapper[5039]: I0130 13:25:49.447926 5039 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 30 13:25:49 crc kubenswrapper[5039]: I0130 13:25:49.531233 5039 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.221708 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.221817 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8","Type":"ContainerDied","Data":"38208c2fc0c96154b729594827b2e62250f15f02e90c449291e4ddfaba0859f7"} Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.222265 5039 scope.go:117] "RemoveContainer" containerID="dc20e421b08a04879753b418b4d32131c6f7dca953c89ee7f8523689c6edc089" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.236956 5039 generic.go:334] "Generic (PLEG): container finished" podID="0b7ef7fc-8e87-46f9-8a77-63ac3e662a50" containerID="fa0344468db79f2813d45adb6e49a3b4fc94b41cec546eb7b376634605c9910a" exitCode=0 Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.237055 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50","Type":"ContainerDied","Data":"fa0344468db79f2813d45adb6e49a3b4fc94b41cec546eb7b376634605c9910a"} Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.289782 5039 scope.go:117] "RemoveContainer" containerID="245f89603e303def55c225cc5f8038a2e1cdc37a5e59020c015eaa2455df9080" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.297063 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.337982 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.348091 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 13:25:50 crc kubenswrapper[5039]: E0130 13:25:50.348735 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8" containerName="glance-log" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.348755 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8" containerName="glance-log" Jan 30 13:25:50 crc kubenswrapper[5039]: E0130 13:25:50.348772 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8" containerName="glance-httpd" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.348778 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8" containerName="glance-httpd" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.348970 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8" containerName="glance-log" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.348987 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8" containerName="glance-httpd" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.349956 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.355176 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.355428 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.358227 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.431246 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.452931 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-config-data\") pod \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\" (UID: \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\") " Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.453045 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-scripts\") pod \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\" (UID: \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\") " Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.453081 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-combined-ca-bundle\") pod \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\" (UID: \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\") " Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.453130 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v66ct\" (UniqueName: \"kubernetes.io/projected/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-kube-api-access-v66ct\") pod \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\" (UID: \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\") " Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.453193 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\" (UID: \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\") " Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.453224 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-httpd-run\") pod \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\" (UID: \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\") " Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.453260 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-logs\") pod \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\" (UID: \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\") " Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.453288 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-internal-tls-certs\") pod \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\" (UID: \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\") " Jan 30 
13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.453518 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75292c04-e484-4def-a16f-2d703409e49e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"75292c04-e484-4def-a16f-2d703409e49e\") " pod="openstack/glance-default-external-api-0" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.453583 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75292c04-e484-4def-a16f-2d703409e49e-config-data\") pod \"glance-default-external-api-0\" (UID: \"75292c04-e484-4def-a16f-2d703409e49e\") " pod="openstack/glance-default-external-api-0" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.453616 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgmfg\" (UniqueName: \"kubernetes.io/projected/75292c04-e484-4def-a16f-2d703409e49e-kube-api-access-sgmfg\") pod \"glance-default-external-api-0\" (UID: \"75292c04-e484-4def-a16f-2d703409e49e\") " pod="openstack/glance-default-external-api-0" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.453677 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/75292c04-e484-4def-a16f-2d703409e49e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"75292c04-e484-4def-a16f-2d703409e49e\") " pod="openstack/glance-default-external-api-0" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.453943 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-logs" (OuterVolumeSpecName: "logs") pod "0b7ef7fc-8e87-46f9-8a77-63ac3e662a50" (UID: "0b7ef7fc-8e87-46f9-8a77-63ac3e662a50"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.454101 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "0b7ef7fc-8e87-46f9-8a77-63ac3e662a50" (UID: "0b7ef7fc-8e87-46f9-8a77-63ac3e662a50"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.454880 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"75292c04-e484-4def-a16f-2d703409e49e\") " pod="openstack/glance-default-external-api-0" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.454943 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75292c04-e484-4def-a16f-2d703409e49e-logs\") pod \"glance-default-external-api-0\" (UID: \"75292c04-e484-4def-a16f-2d703409e49e\") " pod="openstack/glance-default-external-api-0" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.454973 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75292c04-e484-4def-a16f-2d703409e49e-scripts\") pod \"glance-default-external-api-0\" (UID: \"75292c04-e484-4def-a16f-2d703409e49e\") " pod="openstack/glance-default-external-api-0" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.454989 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/75292c04-e484-4def-a16f-2d703409e49e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"75292c04-e484-4def-a16f-2d703409e49e\") " pod="openstack/glance-default-external-api-0" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.455088 5039 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.455099 5039 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-logs\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.469749 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "0b7ef7fc-8e87-46f9-8a77-63ac3e662a50" (UID: "0b7ef7fc-8e87-46f9-8a77-63ac3e662a50"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.482090 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-scripts" (OuterVolumeSpecName: "scripts") pod "0b7ef7fc-8e87-46f9-8a77-63ac3e662a50" (UID: "0b7ef7fc-8e87-46f9-8a77-63ac3e662a50"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.487877 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-kube-api-access-v66ct" (OuterVolumeSpecName: "kube-api-access-v66ct") pod "0b7ef7fc-8e87-46f9-8a77-63ac3e662a50" (UID: "0b7ef7fc-8e87-46f9-8a77-63ac3e662a50"). InnerVolumeSpecName "kube-api-access-v66ct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.525887 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0b7ef7fc-8e87-46f9-8a77-63ac3e662a50" (UID: "0b7ef7fc-8e87-46f9-8a77-63ac3e662a50"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.557024 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgmfg\" (UniqueName: \"kubernetes.io/projected/75292c04-e484-4def-a16f-2d703409e49e-kube-api-access-sgmfg\") pod \"glance-default-external-api-0\" (UID: \"75292c04-e484-4def-a16f-2d703409e49e\") " pod="openstack/glance-default-external-api-0" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.557131 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/75292c04-e484-4def-a16f-2d703409e49e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"75292c04-e484-4def-a16f-2d703409e49e\") " pod="openstack/glance-default-external-api-0" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.557375 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"75292c04-e484-4def-a16f-2d703409e49e\") " pod="openstack/glance-default-external-api-0" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.557417 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75292c04-e484-4def-a16f-2d703409e49e-logs\") pod \"glance-default-external-api-0\" (UID: \"75292c04-e484-4def-a16f-2d703409e49e\") " pod="openstack/glance-default-external-api-0" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.557440 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75292c04-e484-4def-a16f-2d703409e49e-scripts\") pod \"glance-default-external-api-0\" (UID: \"75292c04-e484-4def-a16f-2d703409e49e\") " pod="openstack/glance-default-external-api-0" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.557454 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/75292c04-e484-4def-a16f-2d703409e49e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"75292c04-e484-4def-a16f-2d703409e49e\") " pod="openstack/glance-default-external-api-0" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.557498 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75292c04-e484-4def-a16f-2d703409e49e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"75292c04-e484-4def-a16f-2d703409e49e\") " pod="openstack/glance-default-external-api-0" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.557556 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75292c04-e484-4def-a16f-2d703409e49e-config-data\") pod \"glance-default-external-api-0\" (UID: \"75292c04-e484-4def-a16f-2d703409e49e\") " 
pod="openstack/glance-default-external-api-0" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.557621 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.557641 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.557651 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v66ct\" (UniqueName: \"kubernetes.io/projected/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-kube-api-access-v66ct\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.557671 5039 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.557831 5039 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"75292c04-e484-4def-a16f-2d703409e49e\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.559532 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/75292c04-e484-4def-a16f-2d703409e49e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"75292c04-e484-4def-a16f-2d703409e49e\") " pod="openstack/glance-default-external-api-0" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.566907 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/75292c04-e484-4def-a16f-2d703409e49e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"75292c04-e484-4def-a16f-2d703409e49e\") " pod="openstack/glance-default-external-api-0" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.567248 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75292c04-e484-4def-a16f-2d703409e49e-logs\") pod \"glance-default-external-api-0\" (UID: \"75292c04-e484-4def-a16f-2d703409e49e\") " pod="openstack/glance-default-external-api-0" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.576557 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75292c04-e484-4def-a16f-2d703409e49e-scripts\") pod \"glance-default-external-api-0\" (UID: \"75292c04-e484-4def-a16f-2d703409e49e\") " pod="openstack/glance-default-external-api-0" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.582958 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75292c04-e484-4def-a16f-2d703409e49e-config-data\") pod \"glance-default-external-api-0\" (UID: \"75292c04-e484-4def-a16f-2d703409e49e\") " pod="openstack/glance-default-external-api-0" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.587193 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "0b7ef7fc-8e87-46f9-8a77-63ac3e662a50" (UID: "0b7ef7fc-8e87-46f9-8a77-63ac3e662a50"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.590896 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75292c04-e484-4def-a16f-2d703409e49e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"75292c04-e484-4def-a16f-2d703409e49e\") " pod="openstack/glance-default-external-api-0" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.593131 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgmfg\" (UniqueName: \"kubernetes.io/projected/75292c04-e484-4def-a16f-2d703409e49e-kube-api-access-sgmfg\") pod \"glance-default-external-api-0\" (UID: \"75292c04-e484-4def-a16f-2d703409e49e\") " pod="openstack/glance-default-external-api-0" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.593257 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-config-data" (OuterVolumeSpecName: "config-data") pod "0b7ef7fc-8e87-46f9-8a77-63ac3e662a50" (UID: "0b7ef7fc-8e87-46f9-8a77-63ac3e662a50"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.620532 5039 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.645980 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"75292c04-e484-4def-a16f-2d703409e49e\") " pod="openstack/glance-default-external-api-0" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.658516 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.658545 5039 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.658562 5039 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.742757 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.272155 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50","Type":"ContainerDied","Data":"583774c71713461e6cf3e2b4bba904fb37b8c037c208227ca174a789ab514819"} Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.272647 5039 scope.go:117] "RemoveContainer" containerID="fa0344468db79f2813d45adb6e49a3b4fc94b41cec546eb7b376634605c9910a" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.272541 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.320553 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.343527 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.343702 5039 scope.go:117] "RemoveContainer" containerID="1b6ddf71d9e166fbfe5229b7bdb0a93aad6a004b8fc813b69a73db6d0199eeb9" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.390862 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 13:25:51 crc kubenswrapper[5039]: E0130 13:25:51.391300 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b7ef7fc-8e87-46f9-8a77-63ac3e662a50" containerName="glance-httpd" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.391318 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b7ef7fc-8e87-46f9-8a77-63ac3e662a50" containerName="glance-httpd" Jan 30 13:25:51 crc kubenswrapper[5039]: E0130 13:25:51.391348 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b7ef7fc-8e87-46f9-8a77-63ac3e662a50" containerName="glance-log" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.391355 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b7ef7fc-8e87-46f9-8a77-63ac3e662a50" containerName="glance-log" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.391518 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b7ef7fc-8e87-46f9-8a77-63ac3e662a50" containerName="glance-httpd" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.391539 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b7ef7fc-8e87-46f9-8a77-63ac3e662a50" containerName="glance-log" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.392401 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.395420 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.395695 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.422311 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.437928 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 13:25:51 crc kubenswrapper[5039]: W0130 13:25:51.446639 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75292c04_e484_4def_a16f_2d703409e49e.slice/crio-1c6fd13f3a399a0d5f6d6688d6db64c2c6a162615a4a45932ae1660feceb9e0d WatchSource:0}: Error finding container 1c6fd13f3a399a0d5f6d6688d6db64c2c6a162615a4a45932ae1660feceb9e0d: Status 404 returned error can't find the container with id 1c6fd13f3a399a0d5f6d6688d6db64c2c6a162615a4a45932ae1660feceb9e0d Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.580185 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.581021 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.581361 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.582935 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.583846 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-logs\") pod \"glance-default-internal-api-0\" (UID: \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.584058 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-config-data\") pod 
\"glance-default-internal-api-0\" (UID: \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.584253 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.584290 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwr65\" (UniqueName: \"kubernetes.io/projected/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-kube-api-access-hwr65\") pod \"glance-default-internal-api-0\" (UID: \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.685945 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.686057 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.686083 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwr65\" (UniqueName: \"kubernetes.io/projected/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-kube-api-access-hwr65\") pod \"glance-default-internal-api-0\" (UID: \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.686101 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.686130 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.686156 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.686171 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-scripts\") pod 
\"glance-default-internal-api-0\" (UID: \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.686196 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-logs\") pod \"glance-default-internal-api-0\" (UID: \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.687170 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.687211 5039 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.688134 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-logs\") pod \"glance-default-internal-api-0\" (UID: \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.693307 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.695238 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.701134 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.701401 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.705444 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwr65\" (UniqueName: \"kubernetes.io/projected/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-kube-api-access-hwr65\") pod \"glance-default-internal-api-0\" (UID: \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\") " 
pod="openstack/glance-default-internal-api-0" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.719918 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:25:52 crc kubenswrapper[5039]: I0130 13:25:52.012251 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 13:25:52 crc kubenswrapper[5039]: I0130 13:25:52.108859 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b7ef7fc-8e87-46f9-8a77-63ac3e662a50" path="/var/lib/kubelet/pods/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50/volumes" Jan 30 13:25:52 crc kubenswrapper[5039]: I0130 13:25:52.110053 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8" path="/var/lib/kubelet/pods/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8/volumes" Jan 30 13:25:52 crc kubenswrapper[5039]: I0130 13:25:52.290633 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"75292c04-e484-4def-a16f-2d703409e49e","Type":"ContainerStarted","Data":"25d56a857967dbfe850f8386703dbeacd9215dfb3f0bece9d24ab72061de1a36"} Jan 30 13:25:52 crc kubenswrapper[5039]: I0130 13:25:52.290674 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"75292c04-e484-4def-a16f-2d703409e49e","Type":"ContainerStarted","Data":"1c6fd13f3a399a0d5f6d6688d6db64c2c6a162615a4a45932ae1660feceb9e0d"} Jan 30 13:25:52 crc kubenswrapper[5039]: I0130 13:25:52.298864 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4991c7a-c91c-4684-be02-b3d7d365fdb6","Type":"ContainerStarted","Data":"a1572963e9a9351b87c3a9bb7ae23588407c3fdeb6ad1a9d95f3c166070ebd83"} Jan 30 13:25:52 crc kubenswrapper[5039]: I0130 13:25:52.299072 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f4991c7a-c91c-4684-be02-b3d7d365fdb6" containerName="ceilometer-central-agent" containerID="cri-o://44f8487734c8818771cfd80ec15a821a492003f73837c8738af2a1aa5143c8bc" gracePeriod=30 Jan 30 13:25:52 crc kubenswrapper[5039]: I0130 13:25:52.299479 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 13:25:52 crc kubenswrapper[5039]: I0130 13:25:52.299950 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f4991c7a-c91c-4684-be02-b3d7d365fdb6" containerName="proxy-httpd" containerID="cri-o://a1572963e9a9351b87c3a9bb7ae23588407c3fdeb6ad1a9d95f3c166070ebd83" gracePeriod=30 Jan 30 13:25:52 crc kubenswrapper[5039]: I0130 13:25:52.300028 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f4991c7a-c91c-4684-be02-b3d7d365fdb6" containerName="sg-core" containerID="cri-o://df9330948a1f488d19f65551764e201f404a55ac822a2153ab27265b54b0d48d" gracePeriod=30 Jan 30 13:25:52 crc kubenswrapper[5039]: I0130 13:25:52.300059 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f4991c7a-c91c-4684-be02-b3d7d365fdb6" containerName="ceilometer-notification-agent" 
containerID="cri-o://1aecde807055a2f6230f3eccc93b9a3bcc3abf2a29a9fa3c4132dcb8712c3e96" gracePeriod=30 Jan 30 13:25:52 crc kubenswrapper[5039]: I0130 13:25:52.591389 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.329129697 podStartE2EDuration="11.591372245s" podCreationTimestamp="2026-01-30 13:25:41 +0000 UTC" firstStartedPulling="2026-01-30 13:25:41.979896767 +0000 UTC m=+1306.640578004" lastFinishedPulling="2026-01-30 13:25:51.242139325 +0000 UTC m=+1315.902820552" observedRunningTime="2026-01-30 13:25:52.3302833 +0000 UTC m=+1316.990964527" watchObservedRunningTime="2026-01-30 13:25:52.591372245 +0000 UTC m=+1317.252053472" Jan 30 13:25:52 crc kubenswrapper[5039]: I0130 13:25:52.595475 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 13:25:52 crc kubenswrapper[5039]: W0130 13:25:52.609333 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89cd9fbd_ac74_45c9_bdd8_fe3268a9147e.slice/crio-f072e99835b6d4f9a572ba752899b013189d367019b681c0e68600eb8b9d2692 WatchSource:0}: Error finding container f072e99835b6d4f9a572ba752899b013189d367019b681c0e68600eb8b9d2692: Status 404 returned error can't find the container with id f072e99835b6d4f9a572ba752899b013189d367019b681c0e68600eb8b9d2692 Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.291186 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.318343 5039 generic.go:334] "Generic (PLEG): container finished" podID="f4991c7a-c91c-4684-be02-b3d7d365fdb6" containerID="a1572963e9a9351b87c3a9bb7ae23588407c3fdeb6ad1a9d95f3c166070ebd83" exitCode=0 Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.318386 5039 generic.go:334] "Generic (PLEG): container finished" podID="f4991c7a-c91c-4684-be02-b3d7d365fdb6" containerID="df9330948a1f488d19f65551764e201f404a55ac822a2153ab27265b54b0d48d" exitCode=2 Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.318397 5039 generic.go:334] "Generic (PLEG): container finished" podID="f4991c7a-c91c-4684-be02-b3d7d365fdb6" containerID="1aecde807055a2f6230f3eccc93b9a3bcc3abf2a29a9fa3c4132dcb8712c3e96" exitCode=0 Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.318407 5039 generic.go:334] "Generic (PLEG): container finished" podID="f4991c7a-c91c-4684-be02-b3d7d365fdb6" containerID="44f8487734c8818771cfd80ec15a821a492003f73837c8738af2a1aa5143c8bc" exitCode=0 Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.318483 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4991c7a-c91c-4684-be02-b3d7d365fdb6","Type":"ContainerDied","Data":"a1572963e9a9351b87c3a9bb7ae23588407c3fdeb6ad1a9d95f3c166070ebd83"} Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.318515 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4991c7a-c91c-4684-be02-b3d7d365fdb6","Type":"ContainerDied","Data":"df9330948a1f488d19f65551764e201f404a55ac822a2153ab27265b54b0d48d"} Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.318531 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4991c7a-c91c-4684-be02-b3d7d365fdb6","Type":"ContainerDied","Data":"1aecde807055a2f6230f3eccc93b9a3bcc3abf2a29a9fa3c4132dcb8712c3e96"} Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 
13:25:53.318542 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4991c7a-c91c-4684-be02-b3d7d365fdb6","Type":"ContainerDied","Data":"44f8487734c8818771cfd80ec15a821a492003f73837c8738af2a1aa5143c8bc"} Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.318553 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4991c7a-c91c-4684-be02-b3d7d365fdb6","Type":"ContainerDied","Data":"7447349b2940b6fe4ba0f0b6670367fa5bd036459156596b3c022012f2f8fde5"} Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.318568 5039 scope.go:117] "RemoveContainer" containerID="a1572963e9a9351b87c3a9bb7ae23588407c3fdeb6ad1a9d95f3c166070ebd83" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.318713 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.344732 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"75292c04-e484-4def-a16f-2d703409e49e","Type":"ContainerStarted","Data":"74a546f04020952f012eaaf8e2c1204925de78633cc29e8909d63b15b2d2fa22"} Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.354365 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e","Type":"ContainerStarted","Data":"8961bfa40ab4c931a7b9ba045e826229b875555f5526dd828650ba4cce1b720a"} Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.354420 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e","Type":"ContainerStarted","Data":"f072e99835b6d4f9a572ba752899b013189d367019b681c0e68600eb8b9d2692"} Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.379704 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.379683274 podStartE2EDuration="3.379683274s" podCreationTimestamp="2026-01-30 13:25:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:25:53.373666775 +0000 UTC m=+1318.034348002" watchObservedRunningTime="2026-01-30 13:25:53.379683274 +0000 UTC m=+1318.040364491" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.429752 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f4991c7a-c91c-4684-be02-b3d7d365fdb6-sg-core-conf-yaml\") pod \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\" (UID: \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\") " Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.429911 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4991c7a-c91c-4684-be02-b3d7d365fdb6-scripts\") pod \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\" (UID: \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\") " Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.429976 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4991c7a-c91c-4684-be02-b3d7d365fdb6-combined-ca-bundle\") pod \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\" (UID: \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\") " Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.430069 5039 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rv44z\" (UniqueName: \"kubernetes.io/projected/f4991c7a-c91c-4684-be02-b3d7d365fdb6-kube-api-access-rv44z\") pod \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\" (UID: \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\") " Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.430174 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4991c7a-c91c-4684-be02-b3d7d365fdb6-run-httpd\") pod \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\" (UID: \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\") " Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.430214 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4991c7a-c91c-4684-be02-b3d7d365fdb6-config-data\") pod \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\" (UID: \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\") " Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.430240 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4991c7a-c91c-4684-be02-b3d7d365fdb6-log-httpd\") pod \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\" (UID: \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\") " Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.431025 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4991c7a-c91c-4684-be02-b3d7d365fdb6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f4991c7a-c91c-4684-be02-b3d7d365fdb6" (UID: "f4991c7a-c91c-4684-be02-b3d7d365fdb6"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.431197 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4991c7a-c91c-4684-be02-b3d7d365fdb6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f4991c7a-c91c-4684-be02-b3d7d365fdb6" (UID: "f4991c7a-c91c-4684-be02-b3d7d365fdb6"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.431413 5039 scope.go:117] "RemoveContainer" containerID="df9330948a1f488d19f65551764e201f404a55ac822a2153ab27265b54b0d48d" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.434627 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4991c7a-c91c-4684-be02-b3d7d365fdb6-scripts" (OuterVolumeSpecName: "scripts") pod "f4991c7a-c91c-4684-be02-b3d7d365fdb6" (UID: "f4991c7a-c91c-4684-be02-b3d7d365fdb6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.438175 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4991c7a-c91c-4684-be02-b3d7d365fdb6-kube-api-access-rv44z" (OuterVolumeSpecName: "kube-api-access-rv44z") pod "f4991c7a-c91c-4684-be02-b3d7d365fdb6" (UID: "f4991c7a-c91c-4684-be02-b3d7d365fdb6"). InnerVolumeSpecName "kube-api-access-rv44z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.458509 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4991c7a-c91c-4684-be02-b3d7d365fdb6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f4991c7a-c91c-4684-be02-b3d7d365fdb6" (UID: "f4991c7a-c91c-4684-be02-b3d7d365fdb6"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.459028 5039 scope.go:117] "RemoveContainer" containerID="1aecde807055a2f6230f3eccc93b9a3bcc3abf2a29a9fa3c4132dcb8712c3e96" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.483761 5039 scope.go:117] "RemoveContainer" containerID="44f8487734c8818771cfd80ec15a821a492003f73837c8738af2a1aa5143c8bc" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.517191 5039 scope.go:117] "RemoveContainer" containerID="a1572963e9a9351b87c3a9bb7ae23588407c3fdeb6ad1a9d95f3c166070ebd83" Jan 30 13:25:53 crc kubenswrapper[5039]: E0130 13:25:53.517667 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1572963e9a9351b87c3a9bb7ae23588407c3fdeb6ad1a9d95f3c166070ebd83\": container with ID starting with a1572963e9a9351b87c3a9bb7ae23588407c3fdeb6ad1a9d95f3c166070ebd83 not found: ID does not exist" containerID="a1572963e9a9351b87c3a9bb7ae23588407c3fdeb6ad1a9d95f3c166070ebd83" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.517718 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1572963e9a9351b87c3a9bb7ae23588407c3fdeb6ad1a9d95f3c166070ebd83"} err="failed to get container status \"a1572963e9a9351b87c3a9bb7ae23588407c3fdeb6ad1a9d95f3c166070ebd83\": rpc error: code = NotFound desc = could not find container \"a1572963e9a9351b87c3a9bb7ae23588407c3fdeb6ad1a9d95f3c166070ebd83\": container with ID starting with a1572963e9a9351b87c3a9bb7ae23588407c3fdeb6ad1a9d95f3c166070ebd83 not found: ID does not exist" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.517749 5039 scope.go:117] "RemoveContainer" containerID="df9330948a1f488d19f65551764e201f404a55ac822a2153ab27265b54b0d48d" Jan 30 13:25:53 crc kubenswrapper[5039]: E0130 13:25:53.518171 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df9330948a1f488d19f65551764e201f404a55ac822a2153ab27265b54b0d48d\": container with ID starting with df9330948a1f488d19f65551764e201f404a55ac822a2153ab27265b54b0d48d not found: ID does not exist" containerID="df9330948a1f488d19f65551764e201f404a55ac822a2153ab27265b54b0d48d" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.518205 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df9330948a1f488d19f65551764e201f404a55ac822a2153ab27265b54b0d48d"} err="failed to get container status \"df9330948a1f488d19f65551764e201f404a55ac822a2153ab27265b54b0d48d\": rpc error: code = NotFound desc = could not find container \"df9330948a1f488d19f65551764e201f404a55ac822a2153ab27265b54b0d48d\": container with ID starting with df9330948a1f488d19f65551764e201f404a55ac822a2153ab27265b54b0d48d not found: ID does not exist" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.518224 5039 scope.go:117] "RemoveContainer" containerID="1aecde807055a2f6230f3eccc93b9a3bcc3abf2a29a9fa3c4132dcb8712c3e96" Jan 30 13:25:53 crc kubenswrapper[5039]: 
E0130 13:25:53.518598 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1aecde807055a2f6230f3eccc93b9a3bcc3abf2a29a9fa3c4132dcb8712c3e96\": container with ID starting with 1aecde807055a2f6230f3eccc93b9a3bcc3abf2a29a9fa3c4132dcb8712c3e96 not found: ID does not exist" containerID="1aecde807055a2f6230f3eccc93b9a3bcc3abf2a29a9fa3c4132dcb8712c3e96" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.518637 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1aecde807055a2f6230f3eccc93b9a3bcc3abf2a29a9fa3c4132dcb8712c3e96"} err="failed to get container status \"1aecde807055a2f6230f3eccc93b9a3bcc3abf2a29a9fa3c4132dcb8712c3e96\": rpc error: code = NotFound desc = could not find container \"1aecde807055a2f6230f3eccc93b9a3bcc3abf2a29a9fa3c4132dcb8712c3e96\": container with ID starting with 1aecde807055a2f6230f3eccc93b9a3bcc3abf2a29a9fa3c4132dcb8712c3e96 not found: ID does not exist" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.518667 5039 scope.go:117] "RemoveContainer" containerID="44f8487734c8818771cfd80ec15a821a492003f73837c8738af2a1aa5143c8bc" Jan 30 13:25:53 crc kubenswrapper[5039]: E0130 13:25:53.519272 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44f8487734c8818771cfd80ec15a821a492003f73837c8738af2a1aa5143c8bc\": container with ID starting with 44f8487734c8818771cfd80ec15a821a492003f73837c8738af2a1aa5143c8bc not found: ID does not exist" containerID="44f8487734c8818771cfd80ec15a821a492003f73837c8738af2a1aa5143c8bc" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.519310 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44f8487734c8818771cfd80ec15a821a492003f73837c8738af2a1aa5143c8bc"} err="failed to get container status \"44f8487734c8818771cfd80ec15a821a492003f73837c8738af2a1aa5143c8bc\": rpc error: code = NotFound desc = could not find container \"44f8487734c8818771cfd80ec15a821a492003f73837c8738af2a1aa5143c8bc\": container with ID starting with 44f8487734c8818771cfd80ec15a821a492003f73837c8738af2a1aa5143c8bc not found: ID does not exist" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.519328 5039 scope.go:117] "RemoveContainer" containerID="a1572963e9a9351b87c3a9bb7ae23588407c3fdeb6ad1a9d95f3c166070ebd83" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.519680 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1572963e9a9351b87c3a9bb7ae23588407c3fdeb6ad1a9d95f3c166070ebd83"} err="failed to get container status \"a1572963e9a9351b87c3a9bb7ae23588407c3fdeb6ad1a9d95f3c166070ebd83\": rpc error: code = NotFound desc = could not find container \"a1572963e9a9351b87c3a9bb7ae23588407c3fdeb6ad1a9d95f3c166070ebd83\": container with ID starting with a1572963e9a9351b87c3a9bb7ae23588407c3fdeb6ad1a9d95f3c166070ebd83 not found: ID does not exist" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.519705 5039 scope.go:117] "RemoveContainer" containerID="df9330948a1f488d19f65551764e201f404a55ac822a2153ab27265b54b0d48d" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.520319 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df9330948a1f488d19f65551764e201f404a55ac822a2153ab27265b54b0d48d"} err="failed to get container status \"df9330948a1f488d19f65551764e201f404a55ac822a2153ab27265b54b0d48d\": rpc error: 
code = NotFound desc = could not find container \"df9330948a1f488d19f65551764e201f404a55ac822a2153ab27265b54b0d48d\": container with ID starting with df9330948a1f488d19f65551764e201f404a55ac822a2153ab27265b54b0d48d not found: ID does not exist" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.520346 5039 scope.go:117] "RemoveContainer" containerID="1aecde807055a2f6230f3eccc93b9a3bcc3abf2a29a9fa3c4132dcb8712c3e96" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.521346 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1aecde807055a2f6230f3eccc93b9a3bcc3abf2a29a9fa3c4132dcb8712c3e96"} err="failed to get container status \"1aecde807055a2f6230f3eccc93b9a3bcc3abf2a29a9fa3c4132dcb8712c3e96\": rpc error: code = NotFound desc = could not find container \"1aecde807055a2f6230f3eccc93b9a3bcc3abf2a29a9fa3c4132dcb8712c3e96\": container with ID starting with 1aecde807055a2f6230f3eccc93b9a3bcc3abf2a29a9fa3c4132dcb8712c3e96 not found: ID does not exist" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.521372 5039 scope.go:117] "RemoveContainer" containerID="44f8487734c8818771cfd80ec15a821a492003f73837c8738af2a1aa5143c8bc" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.521592 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44f8487734c8818771cfd80ec15a821a492003f73837c8738af2a1aa5143c8bc"} err="failed to get container status \"44f8487734c8818771cfd80ec15a821a492003f73837c8738af2a1aa5143c8bc\": rpc error: code = NotFound desc = could not find container \"44f8487734c8818771cfd80ec15a821a492003f73837c8738af2a1aa5143c8bc\": container with ID starting with 44f8487734c8818771cfd80ec15a821a492003f73837c8738af2a1aa5143c8bc not found: ID does not exist" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.521626 5039 scope.go:117] "RemoveContainer" containerID="a1572963e9a9351b87c3a9bb7ae23588407c3fdeb6ad1a9d95f3c166070ebd83" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.521864 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1572963e9a9351b87c3a9bb7ae23588407c3fdeb6ad1a9d95f3c166070ebd83"} err="failed to get container status \"a1572963e9a9351b87c3a9bb7ae23588407c3fdeb6ad1a9d95f3c166070ebd83\": rpc error: code = NotFound desc = could not find container \"a1572963e9a9351b87c3a9bb7ae23588407c3fdeb6ad1a9d95f3c166070ebd83\": container with ID starting with a1572963e9a9351b87c3a9bb7ae23588407c3fdeb6ad1a9d95f3c166070ebd83 not found: ID does not exist" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.521909 5039 scope.go:117] "RemoveContainer" containerID="df9330948a1f488d19f65551764e201f404a55ac822a2153ab27265b54b0d48d" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.523578 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4991c7a-c91c-4684-be02-b3d7d365fdb6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f4991c7a-c91c-4684-be02-b3d7d365fdb6" (UID: "f4991c7a-c91c-4684-be02-b3d7d365fdb6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.524616 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df9330948a1f488d19f65551764e201f404a55ac822a2153ab27265b54b0d48d"} err="failed to get container status \"df9330948a1f488d19f65551764e201f404a55ac822a2153ab27265b54b0d48d\": rpc error: code = NotFound desc = could not find container \"df9330948a1f488d19f65551764e201f404a55ac822a2153ab27265b54b0d48d\": container with ID starting with df9330948a1f488d19f65551764e201f404a55ac822a2153ab27265b54b0d48d not found: ID does not exist" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.524660 5039 scope.go:117] "RemoveContainer" containerID="1aecde807055a2f6230f3eccc93b9a3bcc3abf2a29a9fa3c4132dcb8712c3e96" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.526189 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1aecde807055a2f6230f3eccc93b9a3bcc3abf2a29a9fa3c4132dcb8712c3e96"} err="failed to get container status \"1aecde807055a2f6230f3eccc93b9a3bcc3abf2a29a9fa3c4132dcb8712c3e96\": rpc error: code = NotFound desc = could not find container \"1aecde807055a2f6230f3eccc93b9a3bcc3abf2a29a9fa3c4132dcb8712c3e96\": container with ID starting with 1aecde807055a2f6230f3eccc93b9a3bcc3abf2a29a9fa3c4132dcb8712c3e96 not found: ID does not exist" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.526217 5039 scope.go:117] "RemoveContainer" containerID="44f8487734c8818771cfd80ec15a821a492003f73837c8738af2a1aa5143c8bc" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.526469 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44f8487734c8818771cfd80ec15a821a492003f73837c8738af2a1aa5143c8bc"} err="failed to get container status \"44f8487734c8818771cfd80ec15a821a492003f73837c8738af2a1aa5143c8bc\": rpc error: code = NotFound desc = could not find container \"44f8487734c8818771cfd80ec15a821a492003f73837c8738af2a1aa5143c8bc\": container with ID starting with 44f8487734c8818771cfd80ec15a821a492003f73837c8738af2a1aa5143c8bc not found: ID does not exist" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.526489 5039 scope.go:117] "RemoveContainer" containerID="a1572963e9a9351b87c3a9bb7ae23588407c3fdeb6ad1a9d95f3c166070ebd83" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.526725 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1572963e9a9351b87c3a9bb7ae23588407c3fdeb6ad1a9d95f3c166070ebd83"} err="failed to get container status \"a1572963e9a9351b87c3a9bb7ae23588407c3fdeb6ad1a9d95f3c166070ebd83\": rpc error: code = NotFound desc = could not find container \"a1572963e9a9351b87c3a9bb7ae23588407c3fdeb6ad1a9d95f3c166070ebd83\": container with ID starting with a1572963e9a9351b87c3a9bb7ae23588407c3fdeb6ad1a9d95f3c166070ebd83 not found: ID does not exist" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.526742 5039 scope.go:117] "RemoveContainer" containerID="df9330948a1f488d19f65551764e201f404a55ac822a2153ab27265b54b0d48d" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.526920 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df9330948a1f488d19f65551764e201f404a55ac822a2153ab27265b54b0d48d"} err="failed to get container status \"df9330948a1f488d19f65551764e201f404a55ac822a2153ab27265b54b0d48d\": rpc error: code = NotFound desc = could not find container 
\"df9330948a1f488d19f65551764e201f404a55ac822a2153ab27265b54b0d48d\": container with ID starting with df9330948a1f488d19f65551764e201f404a55ac822a2153ab27265b54b0d48d not found: ID does not exist" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.526937 5039 scope.go:117] "RemoveContainer" containerID="1aecde807055a2f6230f3eccc93b9a3bcc3abf2a29a9fa3c4132dcb8712c3e96" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.527155 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1aecde807055a2f6230f3eccc93b9a3bcc3abf2a29a9fa3c4132dcb8712c3e96"} err="failed to get container status \"1aecde807055a2f6230f3eccc93b9a3bcc3abf2a29a9fa3c4132dcb8712c3e96\": rpc error: code = NotFound desc = could not find container \"1aecde807055a2f6230f3eccc93b9a3bcc3abf2a29a9fa3c4132dcb8712c3e96\": container with ID starting with 1aecde807055a2f6230f3eccc93b9a3bcc3abf2a29a9fa3c4132dcb8712c3e96 not found: ID does not exist" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.527172 5039 scope.go:117] "RemoveContainer" containerID="44f8487734c8818771cfd80ec15a821a492003f73837c8738af2a1aa5143c8bc" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.527714 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44f8487734c8818771cfd80ec15a821a492003f73837c8738af2a1aa5143c8bc"} err="failed to get container status \"44f8487734c8818771cfd80ec15a821a492003f73837c8738af2a1aa5143c8bc\": rpc error: code = NotFound desc = could not find container \"44f8487734c8818771cfd80ec15a821a492003f73837c8738af2a1aa5143c8bc\": container with ID starting with 44f8487734c8818771cfd80ec15a821a492003f73837c8738af2a1aa5143c8bc not found: ID does not exist" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.532319 5039 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4991c7a-c91c-4684-be02-b3d7d365fdb6-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.532342 5039 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4991c7a-c91c-4684-be02-b3d7d365fdb6-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.532350 5039 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f4991c7a-c91c-4684-be02-b3d7d365fdb6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.532359 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4991c7a-c91c-4684-be02-b3d7d365fdb6-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.532368 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4991c7a-c91c-4684-be02-b3d7d365fdb6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.532376 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rv44z\" (UniqueName: \"kubernetes.io/projected/f4991c7a-c91c-4684-be02-b3d7d365fdb6-kube-api-access-rv44z\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.548861 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4991c7a-c91c-4684-be02-b3d7d365fdb6-config-data" 
(OuterVolumeSpecName: "config-data") pod "f4991c7a-c91c-4684-be02-b3d7d365fdb6" (UID: "f4991c7a-c91c-4684-be02-b3d7d365fdb6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.636346 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4991c7a-c91c-4684-be02-b3d7d365fdb6-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.727526 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.738294 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.752524 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:25:53 crc kubenswrapper[5039]: E0130 13:25:53.752965 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4991c7a-c91c-4684-be02-b3d7d365fdb6" containerName="ceilometer-notification-agent" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.752989 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4991c7a-c91c-4684-be02-b3d7d365fdb6" containerName="ceilometer-notification-agent" Jan 30 13:25:53 crc kubenswrapper[5039]: E0130 13:25:53.753020 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4991c7a-c91c-4684-be02-b3d7d365fdb6" containerName="ceilometer-central-agent" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.753029 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4991c7a-c91c-4684-be02-b3d7d365fdb6" containerName="ceilometer-central-agent" Jan 30 13:25:53 crc kubenswrapper[5039]: E0130 13:25:53.753040 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4991c7a-c91c-4684-be02-b3d7d365fdb6" containerName="sg-core" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.753048 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4991c7a-c91c-4684-be02-b3d7d365fdb6" containerName="sg-core" Jan 30 13:25:53 crc kubenswrapper[5039]: E0130 13:25:53.753082 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4991c7a-c91c-4684-be02-b3d7d365fdb6" containerName="proxy-httpd" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.753089 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4991c7a-c91c-4684-be02-b3d7d365fdb6" containerName="proxy-httpd" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.753303 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4991c7a-c91c-4684-be02-b3d7d365fdb6" containerName="sg-core" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.753320 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4991c7a-c91c-4684-be02-b3d7d365fdb6" containerName="ceilometer-central-agent" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.753338 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4991c7a-c91c-4684-be02-b3d7d365fdb6" containerName="ceilometer-notification-agent" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.753354 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4991c7a-c91c-4684-be02-b3d7d365fdb6" containerName="proxy-httpd" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.756673 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.759259 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.759547 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.764053 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.891719 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8654cc59b8-vwcl9" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.941137 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bab78ba9-ad09-4d06-8a77-e52b7193509d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bab78ba9-ad09-4d06-8a77-e52b7193509d\") " pod="openstack/ceilometer-0" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.941199 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsrfb\" (UniqueName: \"kubernetes.io/projected/bab78ba9-ad09-4d06-8a77-e52b7193509d-kube-api-access-gsrfb\") pod \"ceilometer-0\" (UID: \"bab78ba9-ad09-4d06-8a77-e52b7193509d\") " pod="openstack/ceilometer-0" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.941259 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bab78ba9-ad09-4d06-8a77-e52b7193509d-log-httpd\") pod \"ceilometer-0\" (UID: \"bab78ba9-ad09-4d06-8a77-e52b7193509d\") " pod="openstack/ceilometer-0" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.941290 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bab78ba9-ad09-4d06-8a77-e52b7193509d-config-data\") pod \"ceilometer-0\" (UID: \"bab78ba9-ad09-4d06-8a77-e52b7193509d\") " pod="openstack/ceilometer-0" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.941311 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bab78ba9-ad09-4d06-8a77-e52b7193509d-run-httpd\") pod \"ceilometer-0\" (UID: \"bab78ba9-ad09-4d06-8a77-e52b7193509d\") " pod="openstack/ceilometer-0" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.941341 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bab78ba9-ad09-4d06-8a77-e52b7193509d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bab78ba9-ad09-4d06-8a77-e52b7193509d\") " pod="openstack/ceilometer-0" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.941364 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bab78ba9-ad09-4d06-8a77-e52b7193509d-scripts\") pod \"ceilometer-0\" (UID: \"bab78ba9-ad09-4d06-8a77-e52b7193509d\") " pod="openstack/ceilometer-0" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.043647 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/17a4f926-925d-44d3-855f-9387166c771b-ovndb-tls-certs\") pod \"17a4f926-925d-44d3-855f-9387166c771b\" (UID: \"17a4f926-925d-44d3-855f-9387166c771b\") " Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.043698 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgq8v\" (UniqueName: \"kubernetes.io/projected/17a4f926-925d-44d3-855f-9387166c771b-kube-api-access-pgq8v\") pod \"17a4f926-925d-44d3-855f-9387166c771b\" (UID: \"17a4f926-925d-44d3-855f-9387166c771b\") " Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.043743 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/17a4f926-925d-44d3-855f-9387166c771b-httpd-config\") pod \"17a4f926-925d-44d3-855f-9387166c771b\" (UID: \"17a4f926-925d-44d3-855f-9387166c771b\") " Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.043948 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17a4f926-925d-44d3-855f-9387166c771b-combined-ca-bundle\") pod \"17a4f926-925d-44d3-855f-9387166c771b\" (UID: \"17a4f926-925d-44d3-855f-9387166c771b\") " Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.043999 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/17a4f926-925d-44d3-855f-9387166c771b-config\") pod \"17a4f926-925d-44d3-855f-9387166c771b\" (UID: \"17a4f926-925d-44d3-855f-9387166c771b\") " Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.044273 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bab78ba9-ad09-4d06-8a77-e52b7193509d-run-httpd\") pod \"ceilometer-0\" (UID: \"bab78ba9-ad09-4d06-8a77-e52b7193509d\") " pod="openstack/ceilometer-0" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.044319 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bab78ba9-ad09-4d06-8a77-e52b7193509d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bab78ba9-ad09-4d06-8a77-e52b7193509d\") " pod="openstack/ceilometer-0" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.044347 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bab78ba9-ad09-4d06-8a77-e52b7193509d-scripts\") pod \"ceilometer-0\" (UID: \"bab78ba9-ad09-4d06-8a77-e52b7193509d\") " pod="openstack/ceilometer-0" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.044404 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bab78ba9-ad09-4d06-8a77-e52b7193509d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bab78ba9-ad09-4d06-8a77-e52b7193509d\") " pod="openstack/ceilometer-0" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.044434 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsrfb\" (UniqueName: \"kubernetes.io/projected/bab78ba9-ad09-4d06-8a77-e52b7193509d-kube-api-access-gsrfb\") pod \"ceilometer-0\" (UID: \"bab78ba9-ad09-4d06-8a77-e52b7193509d\") " pod="openstack/ceilometer-0" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.044484 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/bab78ba9-ad09-4d06-8a77-e52b7193509d-log-httpd\") pod \"ceilometer-0\" (UID: \"bab78ba9-ad09-4d06-8a77-e52b7193509d\") " pod="openstack/ceilometer-0" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.044507 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bab78ba9-ad09-4d06-8a77-e52b7193509d-config-data\") pod \"ceilometer-0\" (UID: \"bab78ba9-ad09-4d06-8a77-e52b7193509d\") " pod="openstack/ceilometer-0" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.044873 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bab78ba9-ad09-4d06-8a77-e52b7193509d-run-httpd\") pod \"ceilometer-0\" (UID: \"bab78ba9-ad09-4d06-8a77-e52b7193509d\") " pod="openstack/ceilometer-0" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.045517 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bab78ba9-ad09-4d06-8a77-e52b7193509d-log-httpd\") pod \"ceilometer-0\" (UID: \"bab78ba9-ad09-4d06-8a77-e52b7193509d\") " pod="openstack/ceilometer-0" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.047155 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:25:54 crc kubenswrapper[5039]: E0130 13:25:54.047847 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle config-data kube-api-access-gsrfb scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/ceilometer-0" podUID="bab78ba9-ad09-4d06-8a77-e52b7193509d" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.052527 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17a4f926-925d-44d3-855f-9387166c771b-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "17a4f926-925d-44d3-855f-9387166c771b" (UID: "17a4f926-925d-44d3-855f-9387166c771b"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.052841 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bab78ba9-ad09-4d06-8a77-e52b7193509d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bab78ba9-ad09-4d06-8a77-e52b7193509d\") " pod="openstack/ceilometer-0" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.055275 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bab78ba9-ad09-4d06-8a77-e52b7193509d-config-data\") pod \"ceilometer-0\" (UID: \"bab78ba9-ad09-4d06-8a77-e52b7193509d\") " pod="openstack/ceilometer-0" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.056116 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bab78ba9-ad09-4d06-8a77-e52b7193509d-scripts\") pod \"ceilometer-0\" (UID: \"bab78ba9-ad09-4d06-8a77-e52b7193509d\") " pod="openstack/ceilometer-0" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.056979 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bab78ba9-ad09-4d06-8a77-e52b7193509d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bab78ba9-ad09-4d06-8a77-e52b7193509d\") " pod="openstack/ceilometer-0" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.057214 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17a4f926-925d-44d3-855f-9387166c771b-kube-api-access-pgq8v" (OuterVolumeSpecName: "kube-api-access-pgq8v") pod "17a4f926-925d-44d3-855f-9387166c771b" (UID: "17a4f926-925d-44d3-855f-9387166c771b"). InnerVolumeSpecName "kube-api-access-pgq8v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.082855 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsrfb\" (UniqueName: \"kubernetes.io/projected/bab78ba9-ad09-4d06-8a77-e52b7193509d-kube-api-access-gsrfb\") pod \"ceilometer-0\" (UID: \"bab78ba9-ad09-4d06-8a77-e52b7193509d\") " pod="openstack/ceilometer-0" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.106160 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4991c7a-c91c-4684-be02-b3d7d365fdb6" path="/var/lib/kubelet/pods/f4991c7a-c91c-4684-be02-b3d7d365fdb6/volumes" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.106159 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17a4f926-925d-44d3-855f-9387166c771b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "17a4f926-925d-44d3-855f-9387166c771b" (UID: "17a4f926-925d-44d3-855f-9387166c771b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.120466 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17a4f926-925d-44d3-855f-9387166c771b-config" (OuterVolumeSpecName: "config") pod "17a4f926-925d-44d3-855f-9387166c771b" (UID: "17a4f926-925d-44d3-855f-9387166c771b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.143636 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17a4f926-925d-44d3-855f-9387166c771b-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "17a4f926-925d-44d3-855f-9387166c771b" (UID: "17a4f926-925d-44d3-855f-9387166c771b"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.145763 5039 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/17a4f926-925d-44d3-855f-9387166c771b-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.145793 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgq8v\" (UniqueName: \"kubernetes.io/projected/17a4f926-925d-44d3-855f-9387166c771b-kube-api-access-pgq8v\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.145805 5039 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/17a4f926-925d-44d3-855f-9387166c771b-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.145813 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17a4f926-925d-44d3-855f-9387166c771b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.145821 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/17a4f926-925d-44d3-855f-9387166c771b-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.365545 5039 generic.go:334] "Generic (PLEG): container finished" podID="17a4f926-925d-44d3-855f-9387166c771b" containerID="edaefd1a89887279dad28e1db61904595b192742b216d6f7309a9619e0f8dedd" exitCode=0 Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.365645 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8654cc59b8-vwcl9" event={"ID":"17a4f926-925d-44d3-855f-9387166c771b","Type":"ContainerDied","Data":"edaefd1a89887279dad28e1db61904595b192742b216d6f7309a9619e0f8dedd"} Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.366983 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8654cc59b8-vwcl9" event={"ID":"17a4f926-925d-44d3-855f-9387166c771b","Type":"ContainerDied","Data":"57c4193e105db2951823832bbd2267125caa477cceaaea4fe9af929c3b05c7a4"} Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.367131 5039 scope.go:117] "RemoveContainer" containerID="a3a0a1f75a6f4dcbb52afd8df7edb65031a1cf257acc4eec70a696fd62ca526e" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.365742 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8654cc59b8-vwcl9" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.371810 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.372872 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e","Type":"ContainerStarted","Data":"c86d1c6db2f7db93b58130cab22d63eb2bc4b467426977a92df6b81dc9e34ac1"} Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.388118 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.411990 5039 scope.go:117] "RemoveContainer" containerID="edaefd1a89887279dad28e1db61904595b192742b216d6f7309a9619e0f8dedd" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.413653 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.41360835 podStartE2EDuration="3.41360835s" podCreationTimestamp="2026-01-30 13:25:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:25:54.393450459 +0000 UTC m=+1319.054131716" watchObservedRunningTime="2026-01-30 13:25:54.41360835 +0000 UTC m=+1319.074289577" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.431506 5039 scope.go:117] "RemoveContainer" containerID="a3a0a1f75a6f4dcbb52afd8df7edb65031a1cf257acc4eec70a696fd62ca526e" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.431566 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-8654cc59b8-vwcl9"] Jan 30 13:25:54 crc kubenswrapper[5039]: E0130 13:25:54.431822 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3a0a1f75a6f4dcbb52afd8df7edb65031a1cf257acc4eec70a696fd62ca526e\": container with ID starting with a3a0a1f75a6f4dcbb52afd8df7edb65031a1cf257acc4eec70a696fd62ca526e not found: ID does not exist" containerID="a3a0a1f75a6f4dcbb52afd8df7edb65031a1cf257acc4eec70a696fd62ca526e" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.431876 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3a0a1f75a6f4dcbb52afd8df7edb65031a1cf257acc4eec70a696fd62ca526e"} err="failed to get container status \"a3a0a1f75a6f4dcbb52afd8df7edb65031a1cf257acc4eec70a696fd62ca526e\": rpc error: code = NotFound desc = could not find container \"a3a0a1f75a6f4dcbb52afd8df7edb65031a1cf257acc4eec70a696fd62ca526e\": container with ID starting with a3a0a1f75a6f4dcbb52afd8df7edb65031a1cf257acc4eec70a696fd62ca526e not found: ID does not exist" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.431906 5039 scope.go:117] "RemoveContainer" containerID="edaefd1a89887279dad28e1db61904595b192742b216d6f7309a9619e0f8dedd" Jan 30 13:25:54 crc kubenswrapper[5039]: E0130 13:25:54.432202 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edaefd1a89887279dad28e1db61904595b192742b216d6f7309a9619e0f8dedd\": container with ID starting with edaefd1a89887279dad28e1db61904595b192742b216d6f7309a9619e0f8dedd not found: ID does not exist" containerID="edaefd1a89887279dad28e1db61904595b192742b216d6f7309a9619e0f8dedd" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.432242 5039 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"edaefd1a89887279dad28e1db61904595b192742b216d6f7309a9619e0f8dedd"} err="failed to get container status \"edaefd1a89887279dad28e1db61904595b192742b216d6f7309a9619e0f8dedd\": rpc error: code = NotFound desc = could not find container \"edaefd1a89887279dad28e1db61904595b192742b216d6f7309a9619e0f8dedd\": container with ID starting with edaefd1a89887279dad28e1db61904595b192742b216d6f7309a9619e0f8dedd not found: ID does not exist" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.446039 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-8654cc59b8-vwcl9"] Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.553331 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bab78ba9-ad09-4d06-8a77-e52b7193509d-combined-ca-bundle\") pod \"bab78ba9-ad09-4d06-8a77-e52b7193509d\" (UID: \"bab78ba9-ad09-4d06-8a77-e52b7193509d\") " Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.553416 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bab78ba9-ad09-4d06-8a77-e52b7193509d-log-httpd\") pod \"bab78ba9-ad09-4d06-8a77-e52b7193509d\" (UID: \"bab78ba9-ad09-4d06-8a77-e52b7193509d\") " Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.553444 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bab78ba9-ad09-4d06-8a77-e52b7193509d-sg-core-conf-yaml\") pod \"bab78ba9-ad09-4d06-8a77-e52b7193509d\" (UID: \"bab78ba9-ad09-4d06-8a77-e52b7193509d\") " Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.553511 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bab78ba9-ad09-4d06-8a77-e52b7193509d-run-httpd\") pod \"bab78ba9-ad09-4d06-8a77-e52b7193509d\" (UID: \"bab78ba9-ad09-4d06-8a77-e52b7193509d\") " Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.553583 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsrfb\" (UniqueName: \"kubernetes.io/projected/bab78ba9-ad09-4d06-8a77-e52b7193509d-kube-api-access-gsrfb\") pod \"bab78ba9-ad09-4d06-8a77-e52b7193509d\" (UID: \"bab78ba9-ad09-4d06-8a77-e52b7193509d\") " Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.553612 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bab78ba9-ad09-4d06-8a77-e52b7193509d-config-data\") pod \"bab78ba9-ad09-4d06-8a77-e52b7193509d\" (UID: \"bab78ba9-ad09-4d06-8a77-e52b7193509d\") " Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.553686 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bab78ba9-ad09-4d06-8a77-e52b7193509d-scripts\") pod \"bab78ba9-ad09-4d06-8a77-e52b7193509d\" (UID: \"bab78ba9-ad09-4d06-8a77-e52b7193509d\") " Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.555398 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bab78ba9-ad09-4d06-8a77-e52b7193509d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "bab78ba9-ad09-4d06-8a77-e52b7193509d" (UID: "bab78ba9-ad09-4d06-8a77-e52b7193509d"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.556913 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bab78ba9-ad09-4d06-8a77-e52b7193509d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "bab78ba9-ad09-4d06-8a77-e52b7193509d" (UID: "bab78ba9-ad09-4d06-8a77-e52b7193509d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.559092 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bab78ba9-ad09-4d06-8a77-e52b7193509d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "bab78ba9-ad09-4d06-8a77-e52b7193509d" (UID: "bab78ba9-ad09-4d06-8a77-e52b7193509d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.560549 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bab78ba9-ad09-4d06-8a77-e52b7193509d-config-data" (OuterVolumeSpecName: "config-data") pod "bab78ba9-ad09-4d06-8a77-e52b7193509d" (UID: "bab78ba9-ad09-4d06-8a77-e52b7193509d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.560626 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bab78ba9-ad09-4d06-8a77-e52b7193509d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bab78ba9-ad09-4d06-8a77-e52b7193509d" (UID: "bab78ba9-ad09-4d06-8a77-e52b7193509d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.560715 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bab78ba9-ad09-4d06-8a77-e52b7193509d-kube-api-access-gsrfb" (OuterVolumeSpecName: "kube-api-access-gsrfb") pod "bab78ba9-ad09-4d06-8a77-e52b7193509d" (UID: "bab78ba9-ad09-4d06-8a77-e52b7193509d"). InnerVolumeSpecName "kube-api-access-gsrfb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.565259 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bab78ba9-ad09-4d06-8a77-e52b7193509d-scripts" (OuterVolumeSpecName: "scripts") pod "bab78ba9-ad09-4d06-8a77-e52b7193509d" (UID: "bab78ba9-ad09-4d06-8a77-e52b7193509d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.656445 5039 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bab78ba9-ad09-4d06-8a77-e52b7193509d-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.656479 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gsrfb\" (UniqueName: \"kubernetes.io/projected/bab78ba9-ad09-4d06-8a77-e52b7193509d-kube-api-access-gsrfb\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.656492 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bab78ba9-ad09-4d06-8a77-e52b7193509d-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.656503 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bab78ba9-ad09-4d06-8a77-e52b7193509d-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.656513 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bab78ba9-ad09-4d06-8a77-e52b7193509d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.656525 5039 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bab78ba9-ad09-4d06-8a77-e52b7193509d-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.656535 5039 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bab78ba9-ad09-4d06-8a77-e52b7193509d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:55 crc kubenswrapper[5039]: I0130 13:25:55.384410 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 13:25:55 crc kubenswrapper[5039]: I0130 13:25:55.444980 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:25:55 crc kubenswrapper[5039]: I0130 13:25:55.459375 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:25:55 crc kubenswrapper[5039]: I0130 13:25:55.483040 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:25:55 crc kubenswrapper[5039]: E0130 13:25:55.483375 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17a4f926-925d-44d3-855f-9387166c771b" containerName="neutron-httpd" Jan 30 13:25:55 crc kubenswrapper[5039]: I0130 13:25:55.483392 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="17a4f926-925d-44d3-855f-9387166c771b" containerName="neutron-httpd" Jan 30 13:25:55 crc kubenswrapper[5039]: E0130 13:25:55.483412 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17a4f926-925d-44d3-855f-9387166c771b" containerName="neutron-api" Jan 30 13:25:55 crc kubenswrapper[5039]: I0130 13:25:55.483431 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="17a4f926-925d-44d3-855f-9387166c771b" containerName="neutron-api" Jan 30 13:25:55 crc kubenswrapper[5039]: I0130 13:25:55.483596 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="17a4f926-925d-44d3-855f-9387166c771b" containerName="neutron-api" Jan 30 13:25:55 crc kubenswrapper[5039]: I0130 13:25:55.483620 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="17a4f926-925d-44d3-855f-9387166c771b" containerName="neutron-httpd" Jan 30 13:25:55 crc kubenswrapper[5039]: I0130 13:25:55.485188 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 13:25:55 crc kubenswrapper[5039]: I0130 13:25:55.487704 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 13:25:55 crc kubenswrapper[5039]: I0130 13:25:55.493140 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 13:25:55 crc kubenswrapper[5039]: I0130 13:25:55.498542 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:25:55 crc kubenswrapper[5039]: I0130 13:25:55.612832 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:25:55 crc kubenswrapper[5039]: E0130 13:25:55.613576 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle config-data kube-api-access-kvqt9 log-httpd run-httpd scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/ceilometer-0" podUID="0c02d321-ce8d-44b5-b3ec-f85c322108c6" Jan 30 13:25:55 crc kubenswrapper[5039]: I0130 13:25:55.677997 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvqt9\" (UniqueName: \"kubernetes.io/projected/0c02d321-ce8d-44b5-b3ec-f85c322108c6-kube-api-access-kvqt9\") pod \"ceilometer-0\" (UID: \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\") " pod="openstack/ceilometer-0" Jan 30 13:25:55 crc kubenswrapper[5039]: I0130 13:25:55.678063 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c02d321-ce8d-44b5-b3ec-f85c322108c6-config-data\") pod \"ceilometer-0\" (UID: \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\") " pod="openstack/ceilometer-0" Jan 30 13:25:55 crc kubenswrapper[5039]: I0130 13:25:55.678121 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c02d321-ce8d-44b5-b3ec-f85c322108c6-scripts\") pod \"ceilometer-0\" (UID: \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\") " pod="openstack/ceilometer-0" Jan 30 13:25:55 crc kubenswrapper[5039]: I0130 13:25:55.678142 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c02d321-ce8d-44b5-b3ec-f85c322108c6-log-httpd\") pod \"ceilometer-0\" (UID: \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\") " pod="openstack/ceilometer-0" Jan 30 13:25:55 crc kubenswrapper[5039]: I0130 13:25:55.678156 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c02d321-ce8d-44b5-b3ec-f85c322108c6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\") " pod="openstack/ceilometer-0" Jan 30 13:25:55 crc kubenswrapper[5039]: I0130 13:25:55.678195 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c02d321-ce8d-44b5-b3ec-f85c322108c6-run-httpd\") pod \"ceilometer-0\" (UID: \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\") " pod="openstack/ceilometer-0" Jan 30 13:25:55 crc kubenswrapper[5039]: I0130 13:25:55.678212 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/0c02d321-ce8d-44b5-b3ec-f85c322108c6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\") " pod="openstack/ceilometer-0" Jan 30 13:25:55 crc kubenswrapper[5039]: I0130 13:25:55.779115 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c02d321-ce8d-44b5-b3ec-f85c322108c6-run-httpd\") pod \"ceilometer-0\" (UID: \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\") " pod="openstack/ceilometer-0" Jan 30 13:25:55 crc kubenswrapper[5039]: I0130 13:25:55.779156 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0c02d321-ce8d-44b5-b3ec-f85c322108c6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\") " pod="openstack/ceilometer-0" Jan 30 13:25:55 crc kubenswrapper[5039]: I0130 13:25:55.779222 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvqt9\" (UniqueName: \"kubernetes.io/projected/0c02d321-ce8d-44b5-b3ec-f85c322108c6-kube-api-access-kvqt9\") pod \"ceilometer-0\" (UID: \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\") " pod="openstack/ceilometer-0" Jan 30 13:25:55 crc kubenswrapper[5039]: I0130 13:25:55.779246 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c02d321-ce8d-44b5-b3ec-f85c322108c6-config-data\") pod \"ceilometer-0\" (UID: \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\") " pod="openstack/ceilometer-0" Jan 30 13:25:55 crc kubenswrapper[5039]: I0130 13:25:55.779304 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c02d321-ce8d-44b5-b3ec-f85c322108c6-scripts\") pod \"ceilometer-0\" (UID: \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\") " pod="openstack/ceilometer-0" Jan 30 13:25:55 crc kubenswrapper[5039]: I0130 13:25:55.779324 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c02d321-ce8d-44b5-b3ec-f85c322108c6-log-httpd\") pod \"ceilometer-0\" (UID: \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\") " pod="openstack/ceilometer-0" Jan 30 13:25:55 crc kubenswrapper[5039]: I0130 13:25:55.779338 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c02d321-ce8d-44b5-b3ec-f85c322108c6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\") " pod="openstack/ceilometer-0" Jan 30 13:25:55 crc kubenswrapper[5039]: I0130 13:25:55.780487 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c02d321-ce8d-44b5-b3ec-f85c322108c6-run-httpd\") pod \"ceilometer-0\" (UID: \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\") " pod="openstack/ceilometer-0" Jan 30 13:25:55 crc kubenswrapper[5039]: I0130 13:25:55.781451 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c02d321-ce8d-44b5-b3ec-f85c322108c6-log-httpd\") pod \"ceilometer-0\" (UID: \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\") " pod="openstack/ceilometer-0" Jan 30 13:25:55 crc kubenswrapper[5039]: I0130 13:25:55.784906 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0c02d321-ce8d-44b5-b3ec-f85c322108c6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\") " pod="openstack/ceilometer-0" Jan 30 13:25:55 crc kubenswrapper[5039]: I0130 13:25:55.785341 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0c02d321-ce8d-44b5-b3ec-f85c322108c6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\") " pod="openstack/ceilometer-0" Jan 30 13:25:55 crc kubenswrapper[5039]: I0130 13:25:55.785688 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c02d321-ce8d-44b5-b3ec-f85c322108c6-scripts\") pod \"ceilometer-0\" (UID: \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\") " pod="openstack/ceilometer-0" Jan 30 13:25:55 crc kubenswrapper[5039]: I0130 13:25:55.788120 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c02d321-ce8d-44b5-b3ec-f85c322108c6-config-data\") pod \"ceilometer-0\" (UID: \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\") " pod="openstack/ceilometer-0" Jan 30 13:25:55 crc kubenswrapper[5039]: I0130 13:25:55.805858 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvqt9\" (UniqueName: \"kubernetes.io/projected/0c02d321-ce8d-44b5-b3ec-f85c322108c6-kube-api-access-kvqt9\") pod \"ceilometer-0\" (UID: \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\") " pod="openstack/ceilometer-0" Jan 30 13:25:56 crc kubenswrapper[5039]: I0130 13:25:56.107965 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17a4f926-925d-44d3-855f-9387166c771b" path="/var/lib/kubelet/pods/17a4f926-925d-44d3-855f-9387166c771b/volumes" Jan 30 13:25:56 crc kubenswrapper[5039]: I0130 13:25:56.108680 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bab78ba9-ad09-4d06-8a77-e52b7193509d" path="/var/lib/kubelet/pods/bab78ba9-ad09-4d06-8a77-e52b7193509d/volumes" Jan 30 13:25:56 crc kubenswrapper[5039]: I0130 13:25:56.393613 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 13:25:56 crc kubenswrapper[5039]: I0130 13:25:56.405075 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 13:25:56 crc kubenswrapper[5039]: I0130 13:25:56.491154 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c02d321-ce8d-44b5-b3ec-f85c322108c6-scripts\") pod \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\" (UID: \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\") " Jan 30 13:25:56 crc kubenswrapper[5039]: I0130 13:25:56.491280 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c02d321-ce8d-44b5-b3ec-f85c322108c6-config-data\") pod \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\" (UID: \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\") " Jan 30 13:25:56 crc kubenswrapper[5039]: I0130 13:25:56.491346 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c02d321-ce8d-44b5-b3ec-f85c322108c6-log-httpd\") pod \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\" (UID: \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\") " Jan 30 13:25:56 crc kubenswrapper[5039]: I0130 13:25:56.491439 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c02d321-ce8d-44b5-b3ec-f85c322108c6-combined-ca-bundle\") pod \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\" (UID: \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\") " Jan 30 13:25:56 crc kubenswrapper[5039]: I0130 13:25:56.491560 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvqt9\" (UniqueName: \"kubernetes.io/projected/0c02d321-ce8d-44b5-b3ec-f85c322108c6-kube-api-access-kvqt9\") pod \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\" (UID: \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\") " Jan 30 13:25:56 crc kubenswrapper[5039]: I0130 13:25:56.491657 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0c02d321-ce8d-44b5-b3ec-f85c322108c6-sg-core-conf-yaml\") pod \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\" (UID: \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\") " Jan 30 13:25:56 crc kubenswrapper[5039]: I0130 13:25:56.491716 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c02d321-ce8d-44b5-b3ec-f85c322108c6-run-httpd\") pod \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\" (UID: \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\") " Jan 30 13:25:56 crc kubenswrapper[5039]: I0130 13:25:56.493298 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c02d321-ce8d-44b5-b3ec-f85c322108c6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0c02d321-ce8d-44b5-b3ec-f85c322108c6" (UID: "0c02d321-ce8d-44b5-b3ec-f85c322108c6"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:25:56 crc kubenswrapper[5039]: I0130 13:25:56.493399 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c02d321-ce8d-44b5-b3ec-f85c322108c6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0c02d321-ce8d-44b5-b3ec-f85c322108c6" (UID: "0c02d321-ce8d-44b5-b3ec-f85c322108c6"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:25:56 crc kubenswrapper[5039]: I0130 13:25:56.496536 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c02d321-ce8d-44b5-b3ec-f85c322108c6-config-data" (OuterVolumeSpecName: "config-data") pod "0c02d321-ce8d-44b5-b3ec-f85c322108c6" (UID: "0c02d321-ce8d-44b5-b3ec-f85c322108c6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:56 crc kubenswrapper[5039]: I0130 13:25:56.496770 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c02d321-ce8d-44b5-b3ec-f85c322108c6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0c02d321-ce8d-44b5-b3ec-f85c322108c6" (UID: "0c02d321-ce8d-44b5-b3ec-f85c322108c6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:56 crc kubenswrapper[5039]: I0130 13:25:56.498762 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c02d321-ce8d-44b5-b3ec-f85c322108c6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0c02d321-ce8d-44b5-b3ec-f85c322108c6" (UID: "0c02d321-ce8d-44b5-b3ec-f85c322108c6"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:56 crc kubenswrapper[5039]: I0130 13:25:56.499148 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c02d321-ce8d-44b5-b3ec-f85c322108c6-scripts" (OuterVolumeSpecName: "scripts") pod "0c02d321-ce8d-44b5-b3ec-f85c322108c6" (UID: "0c02d321-ce8d-44b5-b3ec-f85c322108c6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:56 crc kubenswrapper[5039]: I0130 13:25:56.499235 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c02d321-ce8d-44b5-b3ec-f85c322108c6-kube-api-access-kvqt9" (OuterVolumeSpecName: "kube-api-access-kvqt9") pod "0c02d321-ce8d-44b5-b3ec-f85c322108c6" (UID: "0c02d321-ce8d-44b5-b3ec-f85c322108c6"). InnerVolumeSpecName "kube-api-access-kvqt9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:25:56 crc kubenswrapper[5039]: I0130 13:25:56.595451 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c02d321-ce8d-44b5-b3ec-f85c322108c6-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:56 crc kubenswrapper[5039]: I0130 13:25:56.595501 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c02d321-ce8d-44b5-b3ec-f85c322108c6-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:56 crc kubenswrapper[5039]: I0130 13:25:56.595519 5039 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c02d321-ce8d-44b5-b3ec-f85c322108c6-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:56 crc kubenswrapper[5039]: I0130 13:25:56.595537 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c02d321-ce8d-44b5-b3ec-f85c322108c6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:56 crc kubenswrapper[5039]: I0130 13:25:56.595555 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvqt9\" (UniqueName: \"kubernetes.io/projected/0c02d321-ce8d-44b5-b3ec-f85c322108c6-kube-api-access-kvqt9\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:56 crc kubenswrapper[5039]: I0130 13:25:56.595572 5039 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0c02d321-ce8d-44b5-b3ec-f85c322108c6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:56 crc kubenswrapper[5039]: I0130 13:25:56.595587 5039 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c02d321-ce8d-44b5-b3ec-f85c322108c6-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:57 crc kubenswrapper[5039]: I0130 13:25:57.408132 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 13:25:57 crc kubenswrapper[5039]: I0130 13:25:57.494796 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:25:57 crc kubenswrapper[5039]: I0130 13:25:57.525250 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:25:57 crc kubenswrapper[5039]: I0130 13:25:57.556136 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:25:57 crc kubenswrapper[5039]: I0130 13:25:57.559164 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 13:25:57 crc kubenswrapper[5039]: I0130 13:25:57.563638 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 13:25:57 crc kubenswrapper[5039]: I0130 13:25:57.563922 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 13:25:57 crc kubenswrapper[5039]: I0130 13:25:57.573675 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:25:57 crc kubenswrapper[5039]: I0130 13:25:57.720524 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\") " pod="openstack/ceilometer-0" Jan 30 13:25:57 crc kubenswrapper[5039]: I0130 13:25:57.720867 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njhgd\" (UniqueName: \"kubernetes.io/projected/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-kube-api-access-njhgd\") pod \"ceilometer-0\" (UID: \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\") " pod="openstack/ceilometer-0" Jan 30 13:25:57 crc kubenswrapper[5039]: I0130 13:25:57.720954 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-log-httpd\") pod \"ceilometer-0\" (UID: \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\") " pod="openstack/ceilometer-0" Jan 30 13:25:57 crc kubenswrapper[5039]: I0130 13:25:57.720995 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-config-data\") pod \"ceilometer-0\" (UID: \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\") " pod="openstack/ceilometer-0" Jan 30 13:25:57 crc kubenswrapper[5039]: I0130 13:25:57.721041 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-run-httpd\") pod \"ceilometer-0\" (UID: \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\") " pod="openstack/ceilometer-0" Jan 30 13:25:57 crc kubenswrapper[5039]: I0130 13:25:57.721076 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\") " pod="openstack/ceilometer-0" Jan 30 13:25:57 crc kubenswrapper[5039]: I0130 13:25:57.721111 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-scripts\") pod \"ceilometer-0\" (UID: \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\") " pod="openstack/ceilometer-0" Jan 30 13:25:57 crc kubenswrapper[5039]: I0130 13:25:57.823272 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-log-httpd\") pod \"ceilometer-0\" (UID: \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\") " pod="openstack/ceilometer-0" Jan 30 13:25:57 crc kubenswrapper[5039]: I0130 13:25:57.823324 5039 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-config-data\") pod \"ceilometer-0\" (UID: \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\") " pod="openstack/ceilometer-0" Jan 30 13:25:57 crc kubenswrapper[5039]: I0130 13:25:57.823356 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-run-httpd\") pod \"ceilometer-0\" (UID: \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\") " pod="openstack/ceilometer-0" Jan 30 13:25:57 crc kubenswrapper[5039]: I0130 13:25:57.823388 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\") " pod="openstack/ceilometer-0" Jan 30 13:25:57 crc kubenswrapper[5039]: I0130 13:25:57.823425 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-scripts\") pod \"ceilometer-0\" (UID: \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\") " pod="openstack/ceilometer-0" Jan 30 13:25:57 crc kubenswrapper[5039]: I0130 13:25:57.823492 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\") " pod="openstack/ceilometer-0" Jan 30 13:25:57 crc kubenswrapper[5039]: I0130 13:25:57.823532 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njhgd\" (UniqueName: \"kubernetes.io/projected/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-kube-api-access-njhgd\") pod \"ceilometer-0\" (UID: \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\") " pod="openstack/ceilometer-0" Jan 30 13:25:57 crc kubenswrapper[5039]: I0130 13:25:57.823891 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-log-httpd\") pod \"ceilometer-0\" (UID: \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\") " pod="openstack/ceilometer-0" Jan 30 13:25:57 crc kubenswrapper[5039]: I0130 13:25:57.823940 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-run-httpd\") pod \"ceilometer-0\" (UID: \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\") " pod="openstack/ceilometer-0" Jan 30 13:25:57 crc kubenswrapper[5039]: I0130 13:25:57.829063 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\") " pod="openstack/ceilometer-0" Jan 30 13:25:57 crc kubenswrapper[5039]: I0130 13:25:57.829809 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-scripts\") pod \"ceilometer-0\" (UID: \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\") " pod="openstack/ceilometer-0" Jan 30 13:25:57 crc kubenswrapper[5039]: I0130 13:25:57.830280 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-config-data\") pod \"ceilometer-0\" (UID: \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\") " pod="openstack/ceilometer-0" Jan 30 13:25:57 crc kubenswrapper[5039]: I0130 13:25:57.835207 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\") " pod="openstack/ceilometer-0" Jan 30 13:25:57 crc kubenswrapper[5039]: I0130 13:25:57.843381 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njhgd\" (UniqueName: \"kubernetes.io/projected/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-kube-api-access-njhgd\") pod \"ceilometer-0\" (UID: \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\") " pod="openstack/ceilometer-0" Jan 30 13:25:57 crc kubenswrapper[5039]: I0130 13:25:57.879488 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 13:25:58 crc kubenswrapper[5039]: I0130 13:25:58.105178 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c02d321-ce8d-44b5-b3ec-f85c322108c6" path="/var/lib/kubelet/pods/0c02d321-ce8d-44b5-b3ec-f85c322108c6/volumes" Jan 30 13:25:58 crc kubenswrapper[5039]: I0130 13:25:58.312481 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:25:58 crc kubenswrapper[5039]: W0130 13:25:58.319188 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod057686b7_2fdb_4f7d_a405_356cf4e7dbe2.slice/crio-f63d319105720a8bed2689453cf0bf36d88b13790d884167d0f6ac468db8a6b3 WatchSource:0}: Error finding container f63d319105720a8bed2689453cf0bf36d88b13790d884167d0f6ac468db8a6b3: Status 404 returned error can't find the container with id f63d319105720a8bed2689453cf0bf36d88b13790d884167d0f6ac468db8a6b3 Jan 30 13:25:58 crc kubenswrapper[5039]: I0130 13:25:58.418113 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"057686b7-2fdb-4f7d-a405-356cf4e7dbe2","Type":"ContainerStarted","Data":"f63d319105720a8bed2689453cf0bf36d88b13790d884167d0f6ac468db8a6b3"} Jan 30 13:25:59 crc kubenswrapper[5039]: I0130 13:25:59.430758 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"057686b7-2fdb-4f7d-a405-356cf4e7dbe2","Type":"ContainerStarted","Data":"1b6488372caf64fb3cbd62fe2872b61c9347cacf44d29cdb62f10547cf05cc31"} Jan 30 13:26:00 crc kubenswrapper[5039]: I0130 13:26:00.453762 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"057686b7-2fdb-4f7d-a405-356cf4e7dbe2","Type":"ContainerStarted","Data":"92aaf4f93277b2da42563ef5dfc916d9ba5a86b464b3211c107c90d6d1033735"} Jan 30 13:26:00 crc kubenswrapper[5039]: I0130 13:26:00.743833 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 13:26:00 crc kubenswrapper[5039]: I0130 13:26:00.743891 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 13:26:00 crc kubenswrapper[5039]: I0130 13:26:00.794714 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 13:26:00 crc kubenswrapper[5039]: I0130 13:26:00.804727 5039 
Jan 30 13:26:01 crc kubenswrapper[5039]: I0130 13:26:01.465260 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"057686b7-2fdb-4f7d-a405-356cf4e7dbe2","Type":"ContainerStarted","Data":"223b1e50e479e1ac1907955b9346a267ba8e49d4233e2cf11b1a062f17079dea"}
Jan 30 13:26:01 crc kubenswrapper[5039]: I0130 13:26:01.467596 5039 generic.go:334] "Generic (PLEG): container finished" podID="5b85bd45-6f76-4ac8-8df6-cdbb93636b44" containerID="373eb290a2e94fa950875c1350fb614111156e816473414a72b8b40e8f7da301" exitCode=0
Jan 30 13:26:01 crc kubenswrapper[5039]: I0130 13:26:01.467692 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-fz5fp" event={"ID":"5b85bd45-6f76-4ac8-8df6-cdbb93636b44","Type":"ContainerDied","Data":"373eb290a2e94fa950875c1350fb614111156e816473414a72b8b40e8f7da301"}
Jan 30 13:26:01 crc kubenswrapper[5039]: I0130 13:26:01.468081 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 30 13:26:01 crc kubenswrapper[5039]: I0130 13:26:01.468107 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 30 13:26:02 crc kubenswrapper[5039]: I0130 13:26:02.012894 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 30 13:26:02 crc kubenswrapper[5039]: I0130 13:26:02.013225 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 30 13:26:02 crc kubenswrapper[5039]: I0130 13:26:02.063045 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 30 13:26:02 crc kubenswrapper[5039]: I0130 13:26:02.064476 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 30 13:26:02 crc kubenswrapper[5039]: I0130 13:26:02.477138 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 30 13:26:02 crc kubenswrapper[5039]: I0130 13:26:02.477297 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 30 13:26:02 crc kubenswrapper[5039]: I0130 13:26:02.862544 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-fz5fp"
Jan 30 13:26:02 crc kubenswrapper[5039]: I0130 13:26:02.924793 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b85bd45-6f76-4ac8-8df6-cdbb93636b44-scripts\") pod \"5b85bd45-6f76-4ac8-8df6-cdbb93636b44\" (UID: \"5b85bd45-6f76-4ac8-8df6-cdbb93636b44\") "
Jan 30 13:26:02 crc kubenswrapper[5039]: I0130 13:26:02.924930 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b85bd45-6f76-4ac8-8df6-cdbb93636b44-config-data\") pod \"5b85bd45-6f76-4ac8-8df6-cdbb93636b44\" (UID: \"5b85bd45-6f76-4ac8-8df6-cdbb93636b44\") "
Jan 30 13:26:02 crc kubenswrapper[5039]: I0130 13:26:02.924955 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b85bd45-6f76-4ac8-8df6-cdbb93636b44-combined-ca-bundle\") pod \"5b85bd45-6f76-4ac8-8df6-cdbb93636b44\" (UID: \"5b85bd45-6f76-4ac8-8df6-cdbb93636b44\") "
Jan 30 13:26:02 crc kubenswrapper[5039]: I0130 13:26:02.925210 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6gt8n\" (UniqueName: \"kubernetes.io/projected/5b85bd45-6f76-4ac8-8df6-cdbb93636b44-kube-api-access-6gt8n\") pod \"5b85bd45-6f76-4ac8-8df6-cdbb93636b44\" (UID: \"5b85bd45-6f76-4ac8-8df6-cdbb93636b44\") "
Jan 30 13:26:02 crc kubenswrapper[5039]: I0130 13:26:02.932278 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b85bd45-6f76-4ac8-8df6-cdbb93636b44-scripts" (OuterVolumeSpecName: "scripts") pod "5b85bd45-6f76-4ac8-8df6-cdbb93636b44" (UID: "5b85bd45-6f76-4ac8-8df6-cdbb93636b44"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:26:02 crc kubenswrapper[5039]: I0130 13:26:02.932307 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b85bd45-6f76-4ac8-8df6-cdbb93636b44-kube-api-access-6gt8n" (OuterVolumeSpecName: "kube-api-access-6gt8n") pod "5b85bd45-6f76-4ac8-8df6-cdbb93636b44" (UID: "5b85bd45-6f76-4ac8-8df6-cdbb93636b44"). InnerVolumeSpecName "kube-api-access-6gt8n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:26:02 crc kubenswrapper[5039]: I0130 13:26:02.961100 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b85bd45-6f76-4ac8-8df6-cdbb93636b44-config-data" (OuterVolumeSpecName: "config-data") pod "5b85bd45-6f76-4ac8-8df6-cdbb93636b44" (UID: "5b85bd45-6f76-4ac8-8df6-cdbb93636b44"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:26:02 crc kubenswrapper[5039]: I0130 13:26:02.973096 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b85bd45-6f76-4ac8-8df6-cdbb93636b44-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5b85bd45-6f76-4ac8-8df6-cdbb93636b44" (UID: "5b85bd45-6f76-4ac8-8df6-cdbb93636b44"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:26:03 crc kubenswrapper[5039]: I0130 13:26:03.026665 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6gt8n\" (UniqueName: \"kubernetes.io/projected/5b85bd45-6f76-4ac8-8df6-cdbb93636b44-kube-api-access-6gt8n\") on node \"crc\" DevicePath \"\""
Jan 30 13:26:03 crc kubenswrapper[5039]: I0130 13:26:03.026693 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b85bd45-6f76-4ac8-8df6-cdbb93636b44-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 13:26:03 crc kubenswrapper[5039]: I0130 13:26:03.026703 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b85bd45-6f76-4ac8-8df6-cdbb93636b44-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 13:26:03 crc kubenswrapper[5039]: I0130 13:26:03.026711 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b85bd45-6f76-4ac8-8df6-cdbb93636b44-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 13:26:03 crc kubenswrapper[5039]: I0130 13:26:03.484349 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-fz5fp" event={"ID":"5b85bd45-6f76-4ac8-8df6-cdbb93636b44","Type":"ContainerDied","Data":"60ff2c1ebd6d2f11884a30d996e34cd106da15a2e5993828ab1afa6025ab5199"}
Jan 30 13:26:03 crc kubenswrapper[5039]: I0130 13:26:03.484688 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60ff2c1ebd6d2f11884a30d996e34cd106da15a2e5993828ab1afa6025ab5199"
Jan 30 13:26:03 crc kubenswrapper[5039]: I0130 13:26:03.484388 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-fz5fp"
Jan 30 13:26:03 crc kubenswrapper[5039]: I0130 13:26:03.487496 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"057686b7-2fdb-4f7d-a405-356cf4e7dbe2","Type":"ContainerStarted","Data":"81a652ec53b79a2c56c44355eda3b1bce0483980f495d6decb7cbe79041a5c74"}
Jan 30 13:26:03 crc kubenswrapper[5039]: I0130 13:26:03.532905 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.648256913 podStartE2EDuration="6.532880736s" podCreationTimestamp="2026-01-30 13:25:57 +0000 UTC" firstStartedPulling="2026-01-30 13:25:58.321187368 +0000 UTC m=+1322.981868595" lastFinishedPulling="2026-01-30 13:26:03.205811191 +0000 UTC m=+1327.866492418" observedRunningTime="2026-01-30 13:26:03.516188926 +0000 UTC m=+1328.176870163" watchObservedRunningTime="2026-01-30 13:26:03.532880736 +0000 UTC m=+1328.193561963"
Jan 30 13:26:03 crc kubenswrapper[5039]: I0130 13:26:03.583612 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 30 13:26:03 crc kubenswrapper[5039]: E0130 13:26:03.584089 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b85bd45-6f76-4ac8-8df6-cdbb93636b44" containerName="nova-cell0-conductor-db-sync"
Jan 30 13:26:03 crc kubenswrapper[5039]: I0130 13:26:03.584106 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b85bd45-6f76-4ac8-8df6-cdbb93636b44" containerName="nova-cell0-conductor-db-sync"
Jan 30 13:26:03 crc kubenswrapper[5039]: I0130 13:26:03.584278 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b85bd45-6f76-4ac8-8df6-cdbb93636b44" containerName="nova-cell0-conductor-db-sync"
Jan 30 13:26:03 crc kubenswrapper[5039]: I0130 13:26:03.585223 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 30 13:26:03 crc kubenswrapper[5039]: I0130 13:26:03.600988 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-zd7bd"
Jan 30 13:26:03 crc kubenswrapper[5039]: I0130 13:26:03.618774 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Jan 30 13:26:03 crc kubenswrapper[5039]: I0130 13:26:03.620698 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 30 13:26:03 crc kubenswrapper[5039]: I0130 13:26:03.737520 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13d7ec53-b996-4c36-ad56-865d8f7e0a6b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"13d7ec53-b996-4c36-ad56-865d8f7e0a6b\") " pod="openstack/nova-cell0-conductor-0"
Jan 30 13:26:03 crc kubenswrapper[5039]: I0130 13:26:03.737584 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13d7ec53-b996-4c36-ad56-865d8f7e0a6b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"13d7ec53-b996-4c36-ad56-865d8f7e0a6b\") " pod="openstack/nova-cell0-conductor-0"
Jan 30 13:26:03 crc kubenswrapper[5039]: I0130 13:26:03.737728 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85qpx\" (UniqueName: \"kubernetes.io/projected/13d7ec53-b996-4c36-ad56-865d8f7e0a6b-kube-api-access-85qpx\") pod \"nova-cell0-conductor-0\" (UID: \"13d7ec53-b996-4c36-ad56-865d8f7e0a6b\") " pod="openstack/nova-cell0-conductor-0"
Jan 30 13:26:03 crc kubenswrapper[5039]: I0130 13:26:03.745508 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 30 13:26:03 crc kubenswrapper[5039]: I0130 13:26:03.745632 5039 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 30 13:26:03 crc kubenswrapper[5039]: I0130 13:26:03.839995 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85qpx\" (UniqueName: \"kubernetes.io/projected/13d7ec53-b996-4c36-ad56-865d8f7e0a6b-kube-api-access-85qpx\") pod \"nova-cell0-conductor-0\" (UID: \"13d7ec53-b996-4c36-ad56-865d8f7e0a6b\") " pod="openstack/nova-cell0-conductor-0"
Jan 30 13:26:03 crc kubenswrapper[5039]: I0130 13:26:03.840169 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13d7ec53-b996-4c36-ad56-865d8f7e0a6b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"13d7ec53-b996-4c36-ad56-865d8f7e0a6b\") " pod="openstack/nova-cell0-conductor-0"
Jan 30 13:26:03 crc kubenswrapper[5039]: I0130 13:26:03.840227 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13d7ec53-b996-4c36-ad56-865d8f7e0a6b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"13d7ec53-b996-4c36-ad56-865d8f7e0a6b\") " pod="openstack/nova-cell0-conductor-0"
Jan 30 13:26:03 crc kubenswrapper[5039]: I0130 13:26:03.845637 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13d7ec53-b996-4c36-ad56-865d8f7e0a6b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"13d7ec53-b996-4c36-ad56-865d8f7e0a6b\") " pod="openstack/nova-cell0-conductor-0"
Jan 30 13:26:03 crc kubenswrapper[5039]: I0130 13:26:03.847657 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13d7ec53-b996-4c36-ad56-865d8f7e0a6b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"13d7ec53-b996-4c36-ad56-865d8f7e0a6b\") " pod="openstack/nova-cell0-conductor-0"
Jan 30 13:26:03 crc kubenswrapper[5039]: I0130 13:26:03.859425 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85qpx\" (UniqueName: \"kubernetes.io/projected/13d7ec53-b996-4c36-ad56-865d8f7e0a6b-kube-api-access-85qpx\") pod \"nova-cell0-conductor-0\" (UID: \"13d7ec53-b996-4c36-ad56-865d8f7e0a6b\") " pod="openstack/nova-cell0-conductor-0"
Jan 30 13:26:03 crc kubenswrapper[5039]: I0130 13:26:03.913195 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 30 13:26:03 crc kubenswrapper[5039]: I0130 13:26:03.981940 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 30 13:26:04 crc kubenswrapper[5039]: I0130 13:26:04.433994 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 30 13:26:04 crc kubenswrapper[5039]: W0130 13:26:04.436928 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13d7ec53_b996_4c36_ad56_865d8f7e0a6b.slice/crio-fdf3afae8c6a34c259a3d74e93e23a4f724a1a5d0e091f6c684e593dd77fa449 WatchSource:0}: Error finding container fdf3afae8c6a34c259a3d74e93e23a4f724a1a5d0e091f6c684e593dd77fa449: Status 404 returned error can't find the container with id fdf3afae8c6a34c259a3d74e93e23a4f724a1a5d0e091f6c684e593dd77fa449
Jan 30 13:26:04 crc kubenswrapper[5039]: I0130 13:26:04.514456 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"13d7ec53-b996-4c36-ad56-865d8f7e0a6b","Type":"ContainerStarted","Data":"fdf3afae8c6a34c259a3d74e93e23a4f724a1a5d0e091f6c684e593dd77fa449"}
Jan 30 13:26:04 crc kubenswrapper[5039]: I0130 13:26:04.514533 5039 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 30 13:26:04 crc kubenswrapper[5039]: I0130 13:26:04.514839 5039 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 30 13:26:04 crc kubenswrapper[5039]: I0130 13:26:04.514942 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 30 13:26:04 crc kubenswrapper[5039]: I0130 13:26:04.906730 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 30 13:26:05 crc kubenswrapper[5039]: I0130 13:26:05.005561 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 30 13:26:05 crc kubenswrapper[5039]: I0130 13:26:05.524924 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"13d7ec53-b996-4c36-ad56-865d8f7e0a6b","Type":"ContainerStarted","Data":"5081b1dbb7eedad2054892d16eb020128f855655b1b9c2ee378a990bcb1e039c"}
Jan 30 13:26:05 crc kubenswrapper[5039]: I0130 13:26:05.525248 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0"
Jan 30 13:26:05 crc kubenswrapper[5039]: I0130 13:26:05.551262 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.551244113 podStartE2EDuration="2.551244113s" podCreationTimestamp="2026-01-30 13:26:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:26:05.543879639 +0000 UTC m=+1330.204560866" watchObservedRunningTime="2026-01-30 13:26:05.551244113 +0000 UTC m=+1330.211925340"
Jan 30 13:26:12 crc kubenswrapper[5039]: I0130 13:26:12.955332 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 30 13:26:12 crc kubenswrapper[5039]: I0130 13:26:12.956079 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="13d7ec53-b996-4c36-ad56-865d8f7e0a6b" containerName="nova-cell0-conductor-conductor" containerID="cri-o://5081b1dbb7eedad2054892d16eb020128f855655b1b9c2ee378a990bcb1e039c" gracePeriod=30
Jan 30 13:26:12 crc kubenswrapper[5039]: E0130 13:26:12.966492 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5081b1dbb7eedad2054892d16eb020128f855655b1b9c2ee378a990bcb1e039c" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Jan 30 13:26:12 crc kubenswrapper[5039]: E0130 13:26:12.969450 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5081b1dbb7eedad2054892d16eb020128f855655b1b9c2ee378a990bcb1e039c" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Jan 30 13:26:12 crc kubenswrapper[5039]: E0130 13:26:12.977355 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5081b1dbb7eedad2054892d16eb020128f855655b1b9c2ee378a990bcb1e039c" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Jan 30 13:26:12 crc kubenswrapper[5039]: E0130 13:26:12.977445 5039 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="13d7ec53-b996-4c36-ad56-865d8f7e0a6b" containerName="nova-cell0-conductor-conductor"
Jan 30 13:26:13 crc kubenswrapper[5039]: E0130 13:26:13.916118 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5081b1dbb7eedad2054892d16eb020128f855655b1b9c2ee378a990bcb1e039c" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Jan 30 13:26:13 crc kubenswrapper[5039]: E0130 13:26:13.917432 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5081b1dbb7eedad2054892d16eb020128f855655b1b9c2ee378a990bcb1e039c" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Jan 30 13:26:13 crc kubenswrapper[5039]: E0130 13:26:13.918851 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5081b1dbb7eedad2054892d16eb020128f855655b1b9c2ee378a990bcb1e039c" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
E0130 13:26:13.918851 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5081b1dbb7eedad2054892d16eb020128f855655b1b9c2ee378a990bcb1e039c" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 13:26:13 crc kubenswrapper[5039]: E0130 13:26:13.918979 5039 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="13d7ec53-b996-4c36-ad56-865d8f7e0a6b" containerName="nova-cell0-conductor-conductor" Jan 30 13:26:14 crc kubenswrapper[5039]: I0130 13:26:14.512922 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:26:14 crc kubenswrapper[5039]: I0130 13:26:14.513255 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="057686b7-2fdb-4f7d-a405-356cf4e7dbe2" containerName="ceilometer-central-agent" containerID="cri-o://1b6488372caf64fb3cbd62fe2872b61c9347cacf44d29cdb62f10547cf05cc31" gracePeriod=30 Jan 30 13:26:14 crc kubenswrapper[5039]: I0130 13:26:14.513599 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="057686b7-2fdb-4f7d-a405-356cf4e7dbe2" containerName="ceilometer-notification-agent" containerID="cri-o://92aaf4f93277b2da42563ef5dfc916d9ba5a86b464b3211c107c90d6d1033735" gracePeriod=30 Jan 30 13:26:14 crc kubenswrapper[5039]: I0130 13:26:14.513624 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="057686b7-2fdb-4f7d-a405-356cf4e7dbe2" containerName="proxy-httpd" containerID="cri-o://81a652ec53b79a2c56c44355eda3b1bce0483980f495d6decb7cbe79041a5c74" gracePeriod=30 Jan 30 13:26:14 crc kubenswrapper[5039]: I0130 13:26:14.513602 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="057686b7-2fdb-4f7d-a405-356cf4e7dbe2" containerName="sg-core" containerID="cri-o://223b1e50e479e1ac1907955b9346a267ba8e49d4233e2cf11b1a062f17079dea" gracePeriod=30 Jan 30 13:26:14 crc kubenswrapper[5039]: I0130 13:26:14.524475 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="057686b7-2fdb-4f7d-a405-356cf4e7dbe2" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.179:3000/\": EOF" Jan 30 13:26:15 crc kubenswrapper[5039]: I0130 13:26:15.634351 5039 generic.go:334] "Generic (PLEG): container finished" podID="057686b7-2fdb-4f7d-a405-356cf4e7dbe2" containerID="81a652ec53b79a2c56c44355eda3b1bce0483980f495d6decb7cbe79041a5c74" exitCode=0 Jan 30 13:26:15 crc kubenswrapper[5039]: I0130 13:26:15.634680 5039 generic.go:334] "Generic (PLEG): container finished" podID="057686b7-2fdb-4f7d-a405-356cf4e7dbe2" containerID="223b1e50e479e1ac1907955b9346a267ba8e49d4233e2cf11b1a062f17079dea" exitCode=2 Jan 30 13:26:15 crc kubenswrapper[5039]: I0130 13:26:15.634692 5039 generic.go:334] "Generic (PLEG): container finished" podID="057686b7-2fdb-4f7d-a405-356cf4e7dbe2" containerID="1b6488372caf64fb3cbd62fe2872b61c9347cacf44d29cdb62f10547cf05cc31" exitCode=0 Jan 30 13:26:15 crc kubenswrapper[5039]: I0130 13:26:15.634434 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"057686b7-2fdb-4f7d-a405-356cf4e7dbe2","Type":"ContainerDied","Data":"81a652ec53b79a2c56c44355eda3b1bce0483980f495d6decb7cbe79041a5c74"} Jan 30 13:26:15 crc kubenswrapper[5039]: I0130 13:26:15.634730 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"057686b7-2fdb-4f7d-a405-356cf4e7dbe2","Type":"ContainerDied","Data":"223b1e50e479e1ac1907955b9346a267ba8e49d4233e2cf11b1a062f17079dea"} Jan 30 13:26:15 crc kubenswrapper[5039]: I0130 13:26:15.634746 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"057686b7-2fdb-4f7d-a405-356cf4e7dbe2","Type":"ContainerDied","Data":"1b6488372caf64fb3cbd62fe2872b61c9347cacf44d29cdb62f10547cf05cc31"} Jan 30 13:26:16 crc kubenswrapper[5039]: E0130 13:26:16.496162 5039 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13d7ec53_b996_4c36_ad56_865d8f7e0a6b.slice/crio-5081b1dbb7eedad2054892d16eb020128f855655b1b9c2ee378a990bcb1e039c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13d7ec53_b996_4c36_ad56_865d8f7e0a6b.slice/crio-conmon-5081b1dbb7eedad2054892d16eb020128f855655b1b9c2ee378a990bcb1e039c.scope\": RecentStats: unable to find data in memory cache]" Jan 30 13:26:16 crc kubenswrapper[5039]: I0130 13:26:16.596693 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 13:26:16 crc kubenswrapper[5039]: I0130 13:26:16.646595 5039 generic.go:334] "Generic (PLEG): container finished" podID="13d7ec53-b996-4c36-ad56-865d8f7e0a6b" containerID="5081b1dbb7eedad2054892d16eb020128f855655b1b9c2ee378a990bcb1e039c" exitCode=0 Jan 30 13:26:16 crc kubenswrapper[5039]: I0130 13:26:16.646643 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 13:26:16 crc kubenswrapper[5039]: I0130 13:26:16.646647 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"13d7ec53-b996-4c36-ad56-865d8f7e0a6b","Type":"ContainerDied","Data":"5081b1dbb7eedad2054892d16eb020128f855655b1b9c2ee378a990bcb1e039c"} Jan 30 13:26:16 crc kubenswrapper[5039]: I0130 13:26:16.646752 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"13d7ec53-b996-4c36-ad56-865d8f7e0a6b","Type":"ContainerDied","Data":"fdf3afae8c6a34c259a3d74e93e23a4f724a1a5d0e091f6c684e593dd77fa449"} Jan 30 13:26:16 crc kubenswrapper[5039]: I0130 13:26:16.646776 5039 scope.go:117] "RemoveContainer" containerID="5081b1dbb7eedad2054892d16eb020128f855655b1b9c2ee378a990bcb1e039c" Jan 30 13:26:16 crc kubenswrapper[5039]: I0130 13:26:16.681323 5039 scope.go:117] "RemoveContainer" containerID="5081b1dbb7eedad2054892d16eb020128f855655b1b9c2ee378a990bcb1e039c" Jan 30 13:26:16 crc kubenswrapper[5039]: E0130 13:26:16.682755 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5081b1dbb7eedad2054892d16eb020128f855655b1b9c2ee378a990bcb1e039c\": container with ID starting with 5081b1dbb7eedad2054892d16eb020128f855655b1b9c2ee378a990bcb1e039c not found: ID does not exist" containerID="5081b1dbb7eedad2054892d16eb020128f855655b1b9c2ee378a990bcb1e039c" Jan 30 13:26:16 crc kubenswrapper[5039]: I0130 13:26:16.682823 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5081b1dbb7eedad2054892d16eb020128f855655b1b9c2ee378a990bcb1e039c"} err="failed to get container status \"5081b1dbb7eedad2054892d16eb020128f855655b1b9c2ee378a990bcb1e039c\": rpc error: code = NotFound desc = could not find container \"5081b1dbb7eedad2054892d16eb020128f855655b1b9c2ee378a990bcb1e039c\": container with ID starting with 5081b1dbb7eedad2054892d16eb020128f855655b1b9c2ee378a990bcb1e039c not found: ID does not exist" Jan 30 13:26:16 crc kubenswrapper[5039]: I0130 13:26:16.744394 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85qpx\" (UniqueName: \"kubernetes.io/projected/13d7ec53-b996-4c36-ad56-865d8f7e0a6b-kube-api-access-85qpx\") pod \"13d7ec53-b996-4c36-ad56-865d8f7e0a6b\" (UID: \"13d7ec53-b996-4c36-ad56-865d8f7e0a6b\") " Jan 30 13:26:16 crc kubenswrapper[5039]: I0130 13:26:16.744494 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13d7ec53-b996-4c36-ad56-865d8f7e0a6b-combined-ca-bundle\") pod \"13d7ec53-b996-4c36-ad56-865d8f7e0a6b\" (UID: \"13d7ec53-b996-4c36-ad56-865d8f7e0a6b\") " Jan 30 13:26:16 crc kubenswrapper[5039]: I0130 13:26:16.744571 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13d7ec53-b996-4c36-ad56-865d8f7e0a6b-config-data\") pod \"13d7ec53-b996-4c36-ad56-865d8f7e0a6b\" (UID: \"13d7ec53-b996-4c36-ad56-865d8f7e0a6b\") " Jan 30 13:26:16 crc kubenswrapper[5039]: I0130 13:26:16.750142 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13d7ec53-b996-4c36-ad56-865d8f7e0a6b-kube-api-access-85qpx" (OuterVolumeSpecName: "kube-api-access-85qpx") pod "13d7ec53-b996-4c36-ad56-865d8f7e0a6b" (UID: "13d7ec53-b996-4c36-ad56-865d8f7e0a6b"). 
InnerVolumeSpecName "kube-api-access-85qpx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:26:16 crc kubenswrapper[5039]: I0130 13:26:16.772037 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13d7ec53-b996-4c36-ad56-865d8f7e0a6b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "13d7ec53-b996-4c36-ad56-865d8f7e0a6b" (UID: "13d7ec53-b996-4c36-ad56-865d8f7e0a6b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:26:16 crc kubenswrapper[5039]: I0130 13:26:16.772090 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13d7ec53-b996-4c36-ad56-865d8f7e0a6b-config-data" (OuterVolumeSpecName: "config-data") pod "13d7ec53-b996-4c36-ad56-865d8f7e0a6b" (UID: "13d7ec53-b996-4c36-ad56-865d8f7e0a6b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:26:16 crc kubenswrapper[5039]: I0130 13:26:16.846971 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13d7ec53-b996-4c36-ad56-865d8f7e0a6b-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:16 crc kubenswrapper[5039]: I0130 13:26:16.847395 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85qpx\" (UniqueName: \"kubernetes.io/projected/13d7ec53-b996-4c36-ad56-865d8f7e0a6b-kube-api-access-85qpx\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:16 crc kubenswrapper[5039]: I0130 13:26:16.847410 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13d7ec53-b996-4c36-ad56-865d8f7e0a6b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:16 crc kubenswrapper[5039]: I0130 13:26:16.978087 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 13:26:16 crc kubenswrapper[5039]: I0130 13:26:16.989621 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 13:26:17 crc kubenswrapper[5039]: I0130 13:26:17.008260 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 13:26:17 crc kubenswrapper[5039]: E0130 13:26:17.008724 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13d7ec53-b996-4c36-ad56-865d8f7e0a6b" containerName="nova-cell0-conductor-conductor" Jan 30 13:26:17 crc kubenswrapper[5039]: I0130 13:26:17.008748 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="13d7ec53-b996-4c36-ad56-865d8f7e0a6b" containerName="nova-cell0-conductor-conductor" Jan 30 13:26:17 crc kubenswrapper[5039]: I0130 13:26:17.009030 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="13d7ec53-b996-4c36-ad56-865d8f7e0a6b" containerName="nova-cell0-conductor-conductor" Jan 30 13:26:17 crc kubenswrapper[5039]: I0130 13:26:17.009841 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 13:26:17 crc kubenswrapper[5039]: I0130 13:26:17.013267 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-zd7bd" Jan 30 13:26:17 crc kubenswrapper[5039]: I0130 13:26:17.013430 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 30 13:26:17 crc kubenswrapper[5039]: I0130 13:26:17.027860 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 13:26:17 crc kubenswrapper[5039]: I0130 13:26:17.151910 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f7023ce-3b22-4301-8535-b51dae5ffc85-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"4f7023ce-3b22-4301-8535-b51dae5ffc85\") " pod="openstack/nova-cell0-conductor-0" Jan 30 13:26:17 crc kubenswrapper[5039]: I0130 13:26:17.152098 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjn8h\" (UniqueName: \"kubernetes.io/projected/4f7023ce-3b22-4301-8535-b51dae5ffc85-kube-api-access-tjn8h\") pod \"nova-cell0-conductor-0\" (UID: \"4f7023ce-3b22-4301-8535-b51dae5ffc85\") " pod="openstack/nova-cell0-conductor-0" Jan 30 13:26:17 crc kubenswrapper[5039]: I0130 13:26:17.152157 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f7023ce-3b22-4301-8535-b51dae5ffc85-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"4f7023ce-3b22-4301-8535-b51dae5ffc85\") " pod="openstack/nova-cell0-conductor-0" Jan 30 13:26:17 crc kubenswrapper[5039]: I0130 13:26:17.253246 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjn8h\" (UniqueName: \"kubernetes.io/projected/4f7023ce-3b22-4301-8535-b51dae5ffc85-kube-api-access-tjn8h\") pod \"nova-cell0-conductor-0\" (UID: \"4f7023ce-3b22-4301-8535-b51dae5ffc85\") " pod="openstack/nova-cell0-conductor-0" Jan 30 13:26:17 crc kubenswrapper[5039]: I0130 13:26:17.253503 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f7023ce-3b22-4301-8535-b51dae5ffc85-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"4f7023ce-3b22-4301-8535-b51dae5ffc85\") " pod="openstack/nova-cell0-conductor-0" Jan 30 13:26:17 crc kubenswrapper[5039]: I0130 13:26:17.253545 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f7023ce-3b22-4301-8535-b51dae5ffc85-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"4f7023ce-3b22-4301-8535-b51dae5ffc85\") " pod="openstack/nova-cell0-conductor-0" Jan 30 13:26:17 crc kubenswrapper[5039]: I0130 13:26:17.257924 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f7023ce-3b22-4301-8535-b51dae5ffc85-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"4f7023ce-3b22-4301-8535-b51dae5ffc85\") " pod="openstack/nova-cell0-conductor-0" Jan 30 13:26:17 crc kubenswrapper[5039]: I0130 13:26:17.258043 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f7023ce-3b22-4301-8535-b51dae5ffc85-config-data\") pod \"nova-cell0-conductor-0\" 
(UID: \"4f7023ce-3b22-4301-8535-b51dae5ffc85\") " pod="openstack/nova-cell0-conductor-0" Jan 30 13:26:17 crc kubenswrapper[5039]: I0130 13:26:17.270801 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjn8h\" (UniqueName: \"kubernetes.io/projected/4f7023ce-3b22-4301-8535-b51dae5ffc85-kube-api-access-tjn8h\") pod \"nova-cell0-conductor-0\" (UID: \"4f7023ce-3b22-4301-8535-b51dae5ffc85\") " pod="openstack/nova-cell0-conductor-0" Jan 30 13:26:17 crc kubenswrapper[5039]: I0130 13:26:17.386688 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 13:26:17 crc kubenswrapper[5039]: I0130 13:26:17.659890 5039 generic.go:334] "Generic (PLEG): container finished" podID="057686b7-2fdb-4f7d-a405-356cf4e7dbe2" containerID="92aaf4f93277b2da42563ef5dfc916d9ba5a86b464b3211c107c90d6d1033735" exitCode=0 Jan 30 13:26:17 crc kubenswrapper[5039]: I0130 13:26:17.659970 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"057686b7-2fdb-4f7d-a405-356cf4e7dbe2","Type":"ContainerDied","Data":"92aaf4f93277b2da42563ef5dfc916d9ba5a86b464b3211c107c90d6d1033735"} Jan 30 13:26:17 crc kubenswrapper[5039]: I0130 13:26:17.839354 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 13:26:18 crc kubenswrapper[5039]: I0130 13:26:18.103482 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13d7ec53-b996-4c36-ad56-865d8f7e0a6b" path="/var/lib/kubelet/pods/13d7ec53-b996-4c36-ad56-865d8f7e0a6b/volumes" Jan 30 13:26:18 crc kubenswrapper[5039]: I0130 13:26:18.678384 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"4f7023ce-3b22-4301-8535-b51dae5ffc85","Type":"ContainerStarted","Data":"15bfff3ce4374ea438fd8412513de2bef71681376d184c1777dc610cbcab758f"} Jan 30 13:26:18 crc kubenswrapper[5039]: I0130 13:26:18.678423 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"4f7023ce-3b22-4301-8535-b51dae5ffc85","Type":"ContainerStarted","Data":"08f3f892fdfbe83404807e07d0016928a585bfd6e498bd026ee61f33f77be0f0"} Jan 30 13:26:18 crc kubenswrapper[5039]: I0130 13:26:18.678517 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 30 13:26:18 crc kubenswrapper[5039]: I0130 13:26:18.680147 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"057686b7-2fdb-4f7d-a405-356cf4e7dbe2","Type":"ContainerDied","Data":"f63d319105720a8bed2689453cf0bf36d88b13790d884167d0f6ac468db8a6b3"} Jan 30 13:26:18 crc kubenswrapper[5039]: I0130 13:26:18.680166 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f63d319105720a8bed2689453cf0bf36d88b13790d884167d0f6ac468db8a6b3" Jan 30 13:26:18 crc kubenswrapper[5039]: I0130 13:26:18.699257 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.699241312 podStartE2EDuration="2.699241312s" podCreationTimestamp="2026-01-30 13:26:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:26:18.692444453 +0000 UTC m=+1343.353125720" watchObservedRunningTime="2026-01-30 13:26:18.699241312 +0000 UTC m=+1343.359922529" Jan 30 13:26:18 crc kubenswrapper[5039]: I0130 13:26:18.744033 5039 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 13:26:18 crc kubenswrapper[5039]: I0130 13:26:18.889742 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-scripts\") pod \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\" (UID: \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\") " Jan 30 13:26:18 crc kubenswrapper[5039]: I0130 13:26:18.889799 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-log-httpd\") pod \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\" (UID: \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\") " Jan 30 13:26:18 crc kubenswrapper[5039]: I0130 13:26:18.889836 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-sg-core-conf-yaml\") pod \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\" (UID: \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\") " Jan 30 13:26:18 crc kubenswrapper[5039]: I0130 13:26:18.889861 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njhgd\" (UniqueName: \"kubernetes.io/projected/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-kube-api-access-njhgd\") pod \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\" (UID: \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\") " Jan 30 13:26:18 crc kubenswrapper[5039]: I0130 13:26:18.889881 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-config-data\") pod \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\" (UID: \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\") " Jan 30 13:26:18 crc kubenswrapper[5039]: I0130 13:26:18.889957 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-combined-ca-bundle\") pod \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\" (UID: \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\") " Jan 30 13:26:18 crc kubenswrapper[5039]: I0130 13:26:18.890034 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-run-httpd\") pod \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\" (UID: \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\") " Jan 30 13:26:18 crc kubenswrapper[5039]: I0130 13:26:18.890812 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "057686b7-2fdb-4f7d-a405-356cf4e7dbe2" (UID: "057686b7-2fdb-4f7d-a405-356cf4e7dbe2"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:26:18 crc kubenswrapper[5039]: I0130 13:26:18.893087 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "057686b7-2fdb-4f7d-a405-356cf4e7dbe2" (UID: "057686b7-2fdb-4f7d-a405-356cf4e7dbe2"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:26:18 crc kubenswrapper[5039]: I0130 13:26:18.895926 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-scripts" (OuterVolumeSpecName: "scripts") pod "057686b7-2fdb-4f7d-a405-356cf4e7dbe2" (UID: "057686b7-2fdb-4f7d-a405-356cf4e7dbe2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:26:18 crc kubenswrapper[5039]: I0130 13:26:18.896549 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-kube-api-access-njhgd" (OuterVolumeSpecName: "kube-api-access-njhgd") pod "057686b7-2fdb-4f7d-a405-356cf4e7dbe2" (UID: "057686b7-2fdb-4f7d-a405-356cf4e7dbe2"). InnerVolumeSpecName "kube-api-access-njhgd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:26:18 crc kubenswrapper[5039]: I0130 13:26:18.915747 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "057686b7-2fdb-4f7d-a405-356cf4e7dbe2" (UID: "057686b7-2fdb-4f7d-a405-356cf4e7dbe2"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:26:18 crc kubenswrapper[5039]: I0130 13:26:18.976108 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "057686b7-2fdb-4f7d-a405-356cf4e7dbe2" (UID: "057686b7-2fdb-4f7d-a405-356cf4e7dbe2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:26:18 crc kubenswrapper[5039]: I0130 13:26:18.992379 5039 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:18 crc kubenswrapper[5039]: I0130 13:26:18.992418 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:18 crc kubenswrapper[5039]: I0130 13:26:18.992429 5039 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:18 crc kubenswrapper[5039]: I0130 13:26:18.992439 5039 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:18 crc kubenswrapper[5039]: I0130 13:26:18.992451 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njhgd\" (UniqueName: \"kubernetes.io/projected/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-kube-api-access-njhgd\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:18 crc kubenswrapper[5039]: I0130 13:26:18.992464 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:18 crc kubenswrapper[5039]: I0130 13:26:18.995273 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-config-data" (OuterVolumeSpecName: "config-data") pod "057686b7-2fdb-4f7d-a405-356cf4e7dbe2" (UID: "057686b7-2fdb-4f7d-a405-356cf4e7dbe2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:26:19 crc kubenswrapper[5039]: I0130 13:26:19.094996 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:19 crc kubenswrapper[5039]: I0130 13:26:19.690205 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 13:26:19 crc kubenswrapper[5039]: I0130 13:26:19.733147 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:26:19 crc kubenswrapper[5039]: I0130 13:26:19.746919 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:26:19 crc kubenswrapper[5039]: I0130 13:26:19.769112 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:26:19 crc kubenswrapper[5039]: E0130 13:26:19.769495 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="057686b7-2fdb-4f7d-a405-356cf4e7dbe2" containerName="sg-core" Jan 30 13:26:19 crc kubenswrapper[5039]: I0130 13:26:19.769511 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="057686b7-2fdb-4f7d-a405-356cf4e7dbe2" containerName="sg-core" Jan 30 13:26:19 crc kubenswrapper[5039]: E0130 13:26:19.769535 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="057686b7-2fdb-4f7d-a405-356cf4e7dbe2" containerName="proxy-httpd" Jan 30 13:26:19 crc kubenswrapper[5039]: I0130 13:26:19.769543 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="057686b7-2fdb-4f7d-a405-356cf4e7dbe2" containerName="proxy-httpd" Jan 30 13:26:19 crc kubenswrapper[5039]: E0130 13:26:19.769557 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="057686b7-2fdb-4f7d-a405-356cf4e7dbe2" containerName="ceilometer-notification-agent" Jan 30 13:26:19 crc kubenswrapper[5039]: I0130 13:26:19.769566 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="057686b7-2fdb-4f7d-a405-356cf4e7dbe2" containerName="ceilometer-notification-agent" Jan 30 13:26:19 crc kubenswrapper[5039]: E0130 13:26:19.769587 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="057686b7-2fdb-4f7d-a405-356cf4e7dbe2" containerName="ceilometer-central-agent" Jan 30 13:26:19 crc kubenswrapper[5039]: I0130 13:26:19.769594 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="057686b7-2fdb-4f7d-a405-356cf4e7dbe2" containerName="ceilometer-central-agent" Jan 30 13:26:19 crc kubenswrapper[5039]: I0130 13:26:19.769757 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="057686b7-2fdb-4f7d-a405-356cf4e7dbe2" containerName="sg-core" Jan 30 13:26:19 crc kubenswrapper[5039]: I0130 13:26:19.769771 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="057686b7-2fdb-4f7d-a405-356cf4e7dbe2" containerName="ceilometer-central-agent" Jan 30 13:26:19 crc kubenswrapper[5039]: I0130 13:26:19.769783 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="057686b7-2fdb-4f7d-a405-356cf4e7dbe2" containerName="proxy-httpd" Jan 30 13:26:19 crc kubenswrapper[5039]: I0130 13:26:19.769795 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="057686b7-2fdb-4f7d-a405-356cf4e7dbe2" 
containerName="ceilometer-notification-agent" Jan 30 13:26:19 crc kubenswrapper[5039]: I0130 13:26:19.787371 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 13:26:19 crc kubenswrapper[5039]: I0130 13:26:19.792561 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 13:26:19 crc kubenswrapper[5039]: I0130 13:26:19.792800 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 13:26:19 crc kubenswrapper[5039]: I0130 13:26:19.799186 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:26:19 crc kubenswrapper[5039]: I0130 13:26:19.911094 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34fa3bab-3684-4d07-baa6-e0cc08076a98-scripts\") pod \"ceilometer-0\" (UID: \"34fa3bab-3684-4d07-baa6-e0cc08076a98\") " pod="openstack/ceilometer-0" Jan 30 13:26:19 crc kubenswrapper[5039]: I0130 13:26:19.911338 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv5dl\" (UniqueName: \"kubernetes.io/projected/34fa3bab-3684-4d07-baa6-e0cc08076a98-kube-api-access-mv5dl\") pod \"ceilometer-0\" (UID: \"34fa3bab-3684-4d07-baa6-e0cc08076a98\") " pod="openstack/ceilometer-0" Jan 30 13:26:19 crc kubenswrapper[5039]: I0130 13:26:19.911389 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34fa3bab-3684-4d07-baa6-e0cc08076a98-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"34fa3bab-3684-4d07-baa6-e0cc08076a98\") " pod="openstack/ceilometer-0" Jan 30 13:26:19 crc kubenswrapper[5039]: I0130 13:26:19.911412 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34fa3bab-3684-4d07-baa6-e0cc08076a98-config-data\") pod \"ceilometer-0\" (UID: \"34fa3bab-3684-4d07-baa6-e0cc08076a98\") " pod="openstack/ceilometer-0" Jan 30 13:26:19 crc kubenswrapper[5039]: I0130 13:26:19.911432 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34fa3bab-3684-4d07-baa6-e0cc08076a98-run-httpd\") pod \"ceilometer-0\" (UID: \"34fa3bab-3684-4d07-baa6-e0cc08076a98\") " pod="openstack/ceilometer-0" Jan 30 13:26:19 crc kubenswrapper[5039]: I0130 13:26:19.911510 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34fa3bab-3684-4d07-baa6-e0cc08076a98-log-httpd\") pod \"ceilometer-0\" (UID: \"34fa3bab-3684-4d07-baa6-e0cc08076a98\") " pod="openstack/ceilometer-0" Jan 30 13:26:19 crc kubenswrapper[5039]: I0130 13:26:19.911538 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/34fa3bab-3684-4d07-baa6-e0cc08076a98-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"34fa3bab-3684-4d07-baa6-e0cc08076a98\") " pod="openstack/ceilometer-0" Jan 30 13:26:20 crc kubenswrapper[5039]: I0130 13:26:20.013109 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/34fa3bab-3684-4d07-baa6-e0cc08076a98-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"34fa3bab-3684-4d07-baa6-e0cc08076a98\") " pod="openstack/ceilometer-0" Jan 30 13:26:20 crc kubenswrapper[5039]: I0130 13:26:20.013203 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34fa3bab-3684-4d07-baa6-e0cc08076a98-scripts\") pod \"ceilometer-0\" (UID: \"34fa3bab-3684-4d07-baa6-e0cc08076a98\") " pod="openstack/ceilometer-0" Jan 30 13:26:20 crc kubenswrapper[5039]: I0130 13:26:20.013226 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mv5dl\" (UniqueName: \"kubernetes.io/projected/34fa3bab-3684-4d07-baa6-e0cc08076a98-kube-api-access-mv5dl\") pod \"ceilometer-0\" (UID: \"34fa3bab-3684-4d07-baa6-e0cc08076a98\") " pod="openstack/ceilometer-0" Jan 30 13:26:20 crc kubenswrapper[5039]: I0130 13:26:20.013273 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34fa3bab-3684-4d07-baa6-e0cc08076a98-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"34fa3bab-3684-4d07-baa6-e0cc08076a98\") " pod="openstack/ceilometer-0" Jan 30 13:26:20 crc kubenswrapper[5039]: I0130 13:26:20.013294 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34fa3bab-3684-4d07-baa6-e0cc08076a98-config-data\") pod \"ceilometer-0\" (UID: \"34fa3bab-3684-4d07-baa6-e0cc08076a98\") " pod="openstack/ceilometer-0" Jan 30 13:26:20 crc kubenswrapper[5039]: I0130 13:26:20.013313 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34fa3bab-3684-4d07-baa6-e0cc08076a98-run-httpd\") pod \"ceilometer-0\" (UID: \"34fa3bab-3684-4d07-baa6-e0cc08076a98\") " pod="openstack/ceilometer-0" Jan 30 13:26:20 crc kubenswrapper[5039]: I0130 13:26:20.013366 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34fa3bab-3684-4d07-baa6-e0cc08076a98-log-httpd\") pod \"ceilometer-0\" (UID: \"34fa3bab-3684-4d07-baa6-e0cc08076a98\") " pod="openstack/ceilometer-0" Jan 30 13:26:20 crc kubenswrapper[5039]: I0130 13:26:20.013817 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34fa3bab-3684-4d07-baa6-e0cc08076a98-log-httpd\") pod \"ceilometer-0\" (UID: \"34fa3bab-3684-4d07-baa6-e0cc08076a98\") " pod="openstack/ceilometer-0" Jan 30 13:26:20 crc kubenswrapper[5039]: I0130 13:26:20.014968 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34fa3bab-3684-4d07-baa6-e0cc08076a98-run-httpd\") pod \"ceilometer-0\" (UID: \"34fa3bab-3684-4d07-baa6-e0cc08076a98\") " pod="openstack/ceilometer-0" Jan 30 13:26:20 crc kubenswrapper[5039]: I0130 13:26:20.018802 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34fa3bab-3684-4d07-baa6-e0cc08076a98-scripts\") pod \"ceilometer-0\" (UID: \"34fa3bab-3684-4d07-baa6-e0cc08076a98\") " pod="openstack/ceilometer-0" Jan 30 13:26:20 crc kubenswrapper[5039]: I0130 13:26:20.019725 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34fa3bab-3684-4d07-baa6-e0cc08076a98-config-data\") pod 
\"ceilometer-0\" (UID: \"34fa3bab-3684-4d07-baa6-e0cc08076a98\") " pod="openstack/ceilometer-0" Jan 30 13:26:20 crc kubenswrapper[5039]: I0130 13:26:20.020190 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/34fa3bab-3684-4d07-baa6-e0cc08076a98-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"34fa3bab-3684-4d07-baa6-e0cc08076a98\") " pod="openstack/ceilometer-0" Jan 30 13:26:20 crc kubenswrapper[5039]: I0130 13:26:20.020302 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34fa3bab-3684-4d07-baa6-e0cc08076a98-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"34fa3bab-3684-4d07-baa6-e0cc08076a98\") " pod="openstack/ceilometer-0" Jan 30 13:26:20 crc kubenswrapper[5039]: I0130 13:26:20.041312 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mv5dl\" (UniqueName: \"kubernetes.io/projected/34fa3bab-3684-4d07-baa6-e0cc08076a98-kube-api-access-mv5dl\") pod \"ceilometer-0\" (UID: \"34fa3bab-3684-4d07-baa6-e0cc08076a98\") " pod="openstack/ceilometer-0" Jan 30 13:26:20 crc kubenswrapper[5039]: I0130 13:26:20.148649 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 13:26:20 crc kubenswrapper[5039]: I0130 13:26:20.160270 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="057686b7-2fdb-4f7d-a405-356cf4e7dbe2" path="/var/lib/kubelet/pods/057686b7-2fdb-4f7d-a405-356cf4e7dbe2/volumes" Jan 30 13:26:20 crc kubenswrapper[5039]: I0130 13:26:20.743141 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:26:20 crc kubenswrapper[5039]: W0130 13:26:20.748725 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34fa3bab_3684_4d07_baa6_e0cc08076a98.slice/crio-c5608a175f505815a2ab340eadd3197344e75db3f167422c35ca45199aec6ff9 WatchSource:0}: Error finding container c5608a175f505815a2ab340eadd3197344e75db3f167422c35ca45199aec6ff9: Status 404 returned error can't find the container with id c5608a175f505815a2ab340eadd3197344e75db3f167422c35ca45199aec6ff9 Jan 30 13:26:21 crc kubenswrapper[5039]: I0130 13:26:21.708860 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34fa3bab-3684-4d07-baa6-e0cc08076a98","Type":"ContainerStarted","Data":"1e5c732e8d08bbee1ea6327524267bc70c8d674d14515b09f9be2689e10c21a5"} Jan 30 13:26:21 crc kubenswrapper[5039]: I0130 13:26:21.709174 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34fa3bab-3684-4d07-baa6-e0cc08076a98","Type":"ContainerStarted","Data":"c5608a175f505815a2ab340eadd3197344e75db3f167422c35ca45199aec6ff9"} Jan 30 13:26:22 crc kubenswrapper[5039]: I0130 13:26:22.718455 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34fa3bab-3684-4d07-baa6-e0cc08076a98","Type":"ContainerStarted","Data":"977d2f70bb6f420686fabf5a3459d380488e7d7862629eb7b8e2cf9be5d8fc7a"} Jan 30 13:26:22 crc kubenswrapper[5039]: I0130 13:26:22.718976 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34fa3bab-3684-4d07-baa6-e0cc08076a98","Type":"ContainerStarted","Data":"601632f98430b79c28f3a8f59f87c665536c16e145f5137e701f01c285cfe114"} Jan 30 13:26:25 crc kubenswrapper[5039]: I0130 13:26:25.771649 5039 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34fa3bab-3684-4d07-baa6-e0cc08076a98","Type":"ContainerStarted","Data":"bf2f431c7988d0741d2048b481c9dc9aaefc4232d146cd624839d1f9d3809026"} Jan 30 13:26:25 crc kubenswrapper[5039]: I0130 13:26:25.772173 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 13:26:25 crc kubenswrapper[5039]: I0130 13:26:25.797988 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.468213212 podStartE2EDuration="6.797971804s" podCreationTimestamp="2026-01-30 13:26:19 +0000 UTC" firstStartedPulling="2026-01-30 13:26:20.752078277 +0000 UTC m=+1345.412759504" lastFinishedPulling="2026-01-30 13:26:25.081836859 +0000 UTC m=+1349.742518096" observedRunningTime="2026-01-30 13:26:25.796439574 +0000 UTC m=+1350.457120821" watchObservedRunningTime="2026-01-30 13:26:25.797971804 +0000 UTC m=+1350.458653051" Jan 30 13:26:27 crc kubenswrapper[5039]: I0130 13:26:27.438839 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 30 13:26:27 crc kubenswrapper[5039]: I0130 13:26:27.916282 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-x4sxn"] Jan 30 13:26:27 crc kubenswrapper[5039]: I0130 13:26:27.917836 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-x4sxn" Jan 30 13:26:27 crc kubenswrapper[5039]: I0130 13:26:27.921116 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 30 13:26:27 crc kubenswrapper[5039]: I0130 13:26:27.921157 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 30 13:26:27 crc kubenswrapper[5039]: I0130 13:26:27.928399 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-x4sxn"] Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.018304 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60e67b31-eb88-4ca5-a4b8-960fe900d68a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-x4sxn\" (UID: \"60e67b31-eb88-4ca5-a4b8-960fe900d68a\") " pod="openstack/nova-cell0-cell-mapping-x4sxn" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.018383 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnhw5\" (UniqueName: \"kubernetes.io/projected/60e67b31-eb88-4ca5-a4b8-960fe900d68a-kube-api-access-lnhw5\") pod \"nova-cell0-cell-mapping-x4sxn\" (UID: \"60e67b31-eb88-4ca5-a4b8-960fe900d68a\") " pod="openstack/nova-cell0-cell-mapping-x4sxn" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.018412 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60e67b31-eb88-4ca5-a4b8-960fe900d68a-config-data\") pod \"nova-cell0-cell-mapping-x4sxn\" (UID: \"60e67b31-eb88-4ca5-a4b8-960fe900d68a\") " pod="openstack/nova-cell0-cell-mapping-x4sxn" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.018482 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60e67b31-eb88-4ca5-a4b8-960fe900d68a-scripts\") pod \"nova-cell0-cell-mapping-x4sxn\" 
(UID: \"60e67b31-eb88-4ca5-a4b8-960fe900d68a\") " pod="openstack/nova-cell0-cell-mapping-x4sxn" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.104861 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.106767 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.108916 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.116865 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.118327 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.119755 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60e67b31-eb88-4ca5-a4b8-960fe900d68a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-x4sxn\" (UID: \"60e67b31-eb88-4ca5-a4b8-960fe900d68a\") " pod="openstack/nova-cell0-cell-mapping-x4sxn" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.119824 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnhw5\" (UniqueName: \"kubernetes.io/projected/60e67b31-eb88-4ca5-a4b8-960fe900d68a-kube-api-access-lnhw5\") pod \"nova-cell0-cell-mapping-x4sxn\" (UID: \"60e67b31-eb88-4ca5-a4b8-960fe900d68a\") " pod="openstack/nova-cell0-cell-mapping-x4sxn" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.119851 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60e67b31-eb88-4ca5-a4b8-960fe900d68a-config-data\") pod \"nova-cell0-cell-mapping-x4sxn\" (UID: \"60e67b31-eb88-4ca5-a4b8-960fe900d68a\") " pod="openstack/nova-cell0-cell-mapping-x4sxn" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.119909 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60e67b31-eb88-4ca5-a4b8-960fe900d68a-scripts\") pod \"nova-cell0-cell-mapping-x4sxn\" (UID: \"60e67b31-eb88-4ca5-a4b8-960fe900d68a\") " pod="openstack/nova-cell0-cell-mapping-x4sxn" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.121746 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.129346 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60e67b31-eb88-4ca5-a4b8-960fe900d68a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-x4sxn\" (UID: \"60e67b31-eb88-4ca5-a4b8-960fe900d68a\") " pod="openstack/nova-cell0-cell-mapping-x4sxn" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.129571 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.131854 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60e67b31-eb88-4ca5-a4b8-960fe900d68a-config-data\") pod \"nova-cell0-cell-mapping-x4sxn\" (UID: \"60e67b31-eb88-4ca5-a4b8-960fe900d68a\") " pod="openstack/nova-cell0-cell-mapping-x4sxn" Jan 30 13:26:28 crc 
kubenswrapper[5039]: I0130 13:26:28.133530 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60e67b31-eb88-4ca5-a4b8-960fe900d68a-scripts\") pod \"nova-cell0-cell-mapping-x4sxn\" (UID: \"60e67b31-eb88-4ca5-a4b8-960fe900d68a\") " pod="openstack/nova-cell0-cell-mapping-x4sxn" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.176252 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.186848 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnhw5\" (UniqueName: \"kubernetes.io/projected/60e67b31-eb88-4ca5-a4b8-960fe900d68a-kube-api-access-lnhw5\") pod \"nova-cell0-cell-mapping-x4sxn\" (UID: \"60e67b31-eb88-4ca5-a4b8-960fe900d68a\") " pod="openstack/nova-cell0-cell-mapping-x4sxn" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.221455 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zw5f2\" (UniqueName: \"kubernetes.io/projected/09d17bda-c976-4bfb-96cc-24ae462b0e72-kube-api-access-zw5f2\") pod \"nova-api-0\" (UID: \"09d17bda-c976-4bfb-96cc-24ae462b0e72\") " pod="openstack/nova-api-0" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.221508 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a48b8a3-8b16-40e1-ac55-42da14c30bd0-config-data\") pod \"nova-metadata-0\" (UID: \"2a48b8a3-8b16-40e1-ac55-42da14c30bd0\") " pod="openstack/nova-metadata-0" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.221530 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb4g9\" (UniqueName: \"kubernetes.io/projected/2a48b8a3-8b16-40e1-ac55-42da14c30bd0-kube-api-access-mb4g9\") pod \"nova-metadata-0\" (UID: \"2a48b8a3-8b16-40e1-ac55-42da14c30bd0\") " pod="openstack/nova-metadata-0" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.221548 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09d17bda-c976-4bfb-96cc-24ae462b0e72-config-data\") pod \"nova-api-0\" (UID: \"09d17bda-c976-4bfb-96cc-24ae462b0e72\") " pod="openstack/nova-api-0" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.221564 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09d17bda-c976-4bfb-96cc-24ae462b0e72-logs\") pod \"nova-api-0\" (UID: \"09d17bda-c976-4bfb-96cc-24ae462b0e72\") " pod="openstack/nova-api-0" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.221616 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a48b8a3-8b16-40e1-ac55-42da14c30bd0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2a48b8a3-8b16-40e1-ac55-42da14c30bd0\") " pod="openstack/nova-metadata-0" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.221643 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a48b8a3-8b16-40e1-ac55-42da14c30bd0-logs\") pod \"nova-metadata-0\" (UID: \"2a48b8a3-8b16-40e1-ac55-42da14c30bd0\") " pod="openstack/nova-metadata-0" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 
13:26:28.221659 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09d17bda-c976-4bfb-96cc-24ae462b0e72-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"09d17bda-c976-4bfb-96cc-24ae462b0e72\") " pod="openstack/nova-api-0" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.235680 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-x4sxn" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.276971 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.278158 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.293401 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.326239 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zw5f2\" (UniqueName: \"kubernetes.io/projected/09d17bda-c976-4bfb-96cc-24ae462b0e72-kube-api-access-zw5f2\") pod \"nova-api-0\" (UID: \"09d17bda-c976-4bfb-96cc-24ae462b0e72\") " pod="openstack/nova-api-0" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.326316 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a48b8a3-8b16-40e1-ac55-42da14c30bd0-config-data\") pod \"nova-metadata-0\" (UID: \"2a48b8a3-8b16-40e1-ac55-42da14c30bd0\") " pod="openstack/nova-metadata-0" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.326349 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb4g9\" (UniqueName: \"kubernetes.io/projected/2a48b8a3-8b16-40e1-ac55-42da14c30bd0-kube-api-access-mb4g9\") pod \"nova-metadata-0\" (UID: \"2a48b8a3-8b16-40e1-ac55-42da14c30bd0\") " pod="openstack/nova-metadata-0" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.326371 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09d17bda-c976-4bfb-96cc-24ae462b0e72-config-data\") pod \"nova-api-0\" (UID: \"09d17bda-c976-4bfb-96cc-24ae462b0e72\") " pod="openstack/nova-api-0" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.326394 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09d17bda-c976-4bfb-96cc-24ae462b0e72-logs\") pod \"nova-api-0\" (UID: \"09d17bda-c976-4bfb-96cc-24ae462b0e72\") " pod="openstack/nova-api-0" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.326455 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a48b8a3-8b16-40e1-ac55-42da14c30bd0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2a48b8a3-8b16-40e1-ac55-42da14c30bd0\") " pod="openstack/nova-metadata-0" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.326493 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a48b8a3-8b16-40e1-ac55-42da14c30bd0-logs\") pod \"nova-metadata-0\" (UID: \"2a48b8a3-8b16-40e1-ac55-42da14c30bd0\") " pod="openstack/nova-metadata-0" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 
13:26:28.326517 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09d17bda-c976-4bfb-96cc-24ae462b0e72-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"09d17bda-c976-4bfb-96cc-24ae462b0e72\") " pod="openstack/nova-api-0"
Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.334163 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09d17bda-c976-4bfb-96cc-24ae462b0e72-logs\") pod \"nova-api-0\" (UID: \"09d17bda-c976-4bfb-96cc-24ae462b0e72\") " pod="openstack/nova-api-0"
Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.334403 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a48b8a3-8b16-40e1-ac55-42da14c30bd0-config-data\") pod \"nova-metadata-0\" (UID: \"2a48b8a3-8b16-40e1-ac55-42da14c30bd0\") " pod="openstack/nova-metadata-0"
Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.334675 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a48b8a3-8b16-40e1-ac55-42da14c30bd0-logs\") pod \"nova-metadata-0\" (UID: \"2a48b8a3-8b16-40e1-ac55-42da14c30bd0\") " pod="openstack/nova-metadata-0"
Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.349135 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.349201 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-k666b"]
Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.351124 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-k666b"
Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.357366 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-k666b"]
Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.394263 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09d17bda-c976-4bfb-96cc-24ae462b0e72-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"09d17bda-c976-4bfb-96cc-24ae462b0e72\") " pod="openstack/nova-api-0"
Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.396477 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09d17bda-c976-4bfb-96cc-24ae462b0e72-config-data\") pod \"nova-api-0\" (UID: \"09d17bda-c976-4bfb-96cc-24ae462b0e72\") " pod="openstack/nova-api-0"
Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.398773 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a48b8a3-8b16-40e1-ac55-42da14c30bd0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2a48b8a3-8b16-40e1-ac55-42da14c30bd0\") " pod="openstack/nova-metadata-0"
Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.408183 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zw5f2\" (UniqueName: \"kubernetes.io/projected/09d17bda-c976-4bfb-96cc-24ae462b0e72-kube-api-access-zw5f2\") pod \"nova-api-0\" (UID: \"09d17bda-c976-4bfb-96cc-24ae462b0e72\") " pod="openstack/nova-api-0"
Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.420710 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb4g9\" (UniqueName: \"kubernetes.io/projected/2a48b8a3-8b16-40e1-ac55-42da14c30bd0-kube-api-access-mb4g9\") pod \"nova-metadata-0\" (UID: \"2a48b8a3-8b16-40e1-ac55-42da14c30bd0\") " pod="openstack/nova-metadata-0"
Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.428394 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pckpc\" (UniqueName: \"kubernetes.io/projected/022559da-3027-4afc-ac6d-545384ef449f-kube-api-access-pckpc\") pod \"nova-scheduler-0\" (UID: \"022559da-3027-4afc-ac6d-545384ef449f\") " pod="openstack/nova-scheduler-0"
Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.436050 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qj8qp\" (UniqueName: \"kubernetes.io/projected/64ef9901-545b-40a6-84b0-cb1547ff069e-kube-api-access-qj8qp\") pod \"dnsmasq-dns-bccf8f775-k666b\" (UID: \"64ef9901-545b-40a6-84b0-cb1547ff069e\") " pod="openstack/dnsmasq-dns-bccf8f775-k666b"
Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.456542 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/022559da-3027-4afc-ac6d-545384ef449f-config-data\") pod \"nova-scheduler-0\" (UID: \"022559da-3027-4afc-ac6d-545384ef449f\") " pod="openstack/nova-scheduler-0"
Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.476529 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/64ef9901-545b-40a6-84b0-cb1547ff069e-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-k666b\" (UID: \"64ef9901-545b-40a6-84b0-cb1547ff069e\") " pod="openstack/dnsmasq-dns-bccf8f775-k666b"
Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.476674 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64ef9901-545b-40a6-84b0-cb1547ff069e-dns-svc\") pod \"dnsmasq-dns-bccf8f775-k666b\" (UID: \"64ef9901-545b-40a6-84b0-cb1547ff069e\") " pod="openstack/dnsmasq-dns-bccf8f775-k666b"
Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.476734 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64ef9901-545b-40a6-84b0-cb1547ff069e-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-k666b\" (UID: \"64ef9901-545b-40a6-84b0-cb1547ff069e\") " pod="openstack/dnsmasq-dns-bccf8f775-k666b"
Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.476987 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64ef9901-545b-40a6-84b0-cb1547ff069e-config\") pod \"dnsmasq-dns-bccf8f775-k666b\" (UID: \"64ef9901-545b-40a6-84b0-cb1547ff069e\") " pod="openstack/dnsmasq-dns-bccf8f775-k666b"
Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.477052 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/022559da-3027-4afc-ac6d-545384ef449f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"022559da-3027-4afc-ac6d-545384ef449f\") " pod="openstack/nova-scheduler-0"
Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.477112 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/64ef9901-545b-40a6-84b0-cb1547ff069e-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-k666b\" (UID: \"64ef9901-545b-40a6-84b0-cb1547ff069e\") " pod="openstack/dnsmasq-dns-bccf8f775-k666b"
Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.516247 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.518505 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.534525 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.539031 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.578752 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/022559da-3027-4afc-ac6d-545384ef449f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"022559da-3027-4afc-ac6d-545384ef449f\") " pod="openstack/nova-scheduler-0"
Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.578816 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/646b9fca-b2a5-414b-9b06-3a78ad1df6b0-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"646b9fca-b2a5-414b-9b06-3a78ad1df6b0\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.578843 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/64ef9901-545b-40a6-84b0-cb1547ff069e-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-k666b\" (UID: \"64ef9901-545b-40a6-84b0-cb1547ff069e\") " pod="openstack/dnsmasq-dns-bccf8f775-k666b"
Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.578862 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pckpc\" (UniqueName: \"kubernetes.io/projected/022559da-3027-4afc-ac6d-545384ef449f-kube-api-access-pckpc\") pod \"nova-scheduler-0\" (UID: \"022559da-3027-4afc-ac6d-545384ef449f\") " pod="openstack/nova-scheduler-0"
Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.578882 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qj8qp\" (UniqueName: \"kubernetes.io/projected/64ef9901-545b-40a6-84b0-cb1547ff069e-kube-api-access-qj8qp\") pod \"dnsmasq-dns-bccf8f775-k666b\" (UID: \"64ef9901-545b-40a6-84b0-cb1547ff069e\") " pod="openstack/dnsmasq-dns-bccf8f775-k666b"
Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.578904 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/022559da-3027-4afc-ac6d-545384ef449f-config-data\") pod \"nova-scheduler-0\" (UID: \"022559da-3027-4afc-ac6d-545384ef449f\") " pod="openstack/nova-scheduler-0"
Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.578943 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/64ef9901-545b-40a6-84b0-cb1547ff069e-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-k666b\" (UID: \"64ef9901-545b-40a6-84b0-cb1547ff069e\") " pod="openstack/dnsmasq-dns-bccf8f775-k666b"
Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.578971 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/646b9fca-b2a5-414b-9b06-3a78ad1df6b0-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"646b9fca-b2a5-414b-9b06-3a78ad1df6b0\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.578990 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64ef9901-545b-40a6-84b0-cb1547ff069e-dns-svc\") pod \"dnsmasq-dns-bccf8f775-k666b\" (UID: \"64ef9901-545b-40a6-84b0-cb1547ff069e\") " pod="openstack/dnsmasq-dns-bccf8f775-k666b"
Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.579028 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64ef9901-545b-40a6-84b0-cb1547ff069e-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-k666b\" (UID: \"64ef9901-545b-40a6-84b0-cb1547ff069e\") " pod="openstack/dnsmasq-dns-bccf8f775-k666b"
Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.579086 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dlr6\" (UniqueName: \"kubernetes.io/projected/646b9fca-b2a5-414b-9b06-3a78ad1df6b0-kube-api-access-8dlr6\") pod \"nova-cell1-novncproxy-0\" (UID: \"646b9fca-b2a5-414b-9b06-3a78ad1df6b0\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.579109 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64ef9901-545b-40a6-84b0-cb1547ff069e-config\") pod \"dnsmasq-dns-bccf8f775-k666b\" (UID: \"64ef9901-545b-40a6-84b0-cb1547ff069e\") " pod="openstack/dnsmasq-dns-bccf8f775-k666b"
Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.579997 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64ef9901-545b-40a6-84b0-cb1547ff069e-config\") pod \"dnsmasq-dns-bccf8f775-k666b\" (UID: \"64ef9901-545b-40a6-84b0-cb1547ff069e\") " pod="openstack/dnsmasq-dns-bccf8f775-k666b"
Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.580538 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/64ef9901-545b-40a6-84b0-cb1547ff069e-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-k666b\" (UID: \"64ef9901-545b-40a6-84b0-cb1547ff069e\") " pod="openstack/dnsmasq-dns-bccf8f775-k666b"
Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.580796 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64ef9901-545b-40a6-84b0-cb1547ff069e-dns-svc\") pod \"dnsmasq-dns-bccf8f775-k666b\" (UID: \"64ef9901-545b-40a6-84b0-cb1547ff069e\") " pod="openstack/dnsmasq-dns-bccf8f775-k666b"
Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.581915 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/64ef9901-545b-40a6-84b0-cb1547ff069e-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-k666b\" (UID: \"64ef9901-545b-40a6-84b0-cb1547ff069e\") " pod="openstack/dnsmasq-dns-bccf8f775-k666b"
Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.583422 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.599101 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64ef9901-545b-40a6-84b0-cb1547ff069e-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-k666b\" (UID: \"64ef9901-545b-40a6-84b0-cb1547ff069e\") " pod="openstack/dnsmasq-dns-bccf8f775-k666b"
Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.599803 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pckpc\" (UniqueName: \"kubernetes.io/projected/022559da-3027-4afc-ac6d-545384ef449f-kube-api-access-pckpc\") pod \"nova-scheduler-0\" (UID: \"022559da-3027-4afc-ac6d-545384ef449f\") " pod="openstack/nova-scheduler-0"
Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.599877 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/022559da-3027-4afc-ac6d-545384ef449f-config-data\") pod \"nova-scheduler-0\" (UID: \"022559da-3027-4afc-ac6d-545384ef449f\") " pod="openstack/nova-scheduler-0"
Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.615670 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qj8qp\" (UniqueName: \"kubernetes.io/projected/64ef9901-545b-40a6-84b0-cb1547ff069e-kube-api-access-qj8qp\") pod \"dnsmasq-dns-bccf8f775-k666b\" (UID: \"64ef9901-545b-40a6-84b0-cb1547ff069e\") " pod="openstack/dnsmasq-dns-bccf8f775-k666b"
Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.630881 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/022559da-3027-4afc-ac6d-545384ef449f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"022559da-3027-4afc-ac6d-545384ef449f\") " pod="openstack/nova-scheduler-0"
Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.680583 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/646b9fca-b2a5-414b-9b06-3a78ad1df6b0-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"646b9fca-b2a5-414b-9b06-3a78ad1df6b0\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.680679 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dlr6\" (UniqueName: \"kubernetes.io/projected/646b9fca-b2a5-414b-9b06-3a78ad1df6b0-kube-api-access-8dlr6\") pod \"nova-cell1-novncproxy-0\" (UID: \"646b9fca-b2a5-414b-9b06-3a78ad1df6b0\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.680715 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/646b9fca-b2a5-414b-9b06-3a78ad1df6b0-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"646b9fca-b2a5-414b-9b06-3a78ad1df6b0\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.686304 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/646b9fca-b2a5-414b-9b06-3a78ad1df6b0-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"646b9fca-b2a5-414b-9b06-3a78ad1df6b0\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.686833 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/646b9fca-b2a5-414b-9b06-3a78ad1df6b0-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"646b9fca-b2a5-414b-9b06-3a78ad1df6b0\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.691839 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-x4sxn"]
Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.697658 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dlr6\" (UniqueName: \"kubernetes.io/projected/646b9fca-b2a5-414b-9b06-3a78ad1df6b0-kube-api-access-8dlr6\") pod \"nova-cell1-novncproxy-0\" (UID: \"646b9fca-b2a5-414b-9b06-3a78ad1df6b0\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.721740 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.795804 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.807448 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-x4sxn" event={"ID":"60e67b31-eb88-4ca5-a4b8-960fe900d68a","Type":"ContainerStarted","Data":"97b2ac6fc59321b06d4495fa3b5a4e9326b491e50db00310ebde01b4dddd90c7"}
Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.815844 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-k666b"
Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.864356 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 30 13:26:29 crc kubenswrapper[5039]: I0130 13:26:29.135987 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 30 13:26:29 crc kubenswrapper[5039]: W0130 13:26:29.317612 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a48b8a3_8b16_40e1_ac55_42da14c30bd0.slice/crio-6d02825afd469ee8347e54b66fa93304a52cbca5507cccf703a5d4fa98bd24be WatchSource:0}: Error finding container 6d02825afd469ee8347e54b66fa93304a52cbca5507cccf703a5d4fa98bd24be: Status 404 returned error can't find the container with id 6d02825afd469ee8347e54b66fa93304a52cbca5507cccf703a5d4fa98bd24be
Jan 30 13:26:29 crc kubenswrapper[5039]: I0130 13:26:29.324067 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 13:26:29 crc kubenswrapper[5039]: I0130 13:26:29.525300 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zctpf"]
Jan 30 13:26:29 crc kubenswrapper[5039]: I0130 13:26:29.527390 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-zctpf"
Jan 30 13:26:29 crc kubenswrapper[5039]: I0130 13:26:29.532547 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts"
Jan 30 13:26:29 crc kubenswrapper[5039]: I0130 13:26:29.533248 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Jan 30 13:26:29 crc kubenswrapper[5039]: I0130 13:26:29.541156 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 30 13:26:29 crc kubenswrapper[5039]: I0130 13:26:29.557177 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zctpf"]
Jan 30 13:26:29 crc kubenswrapper[5039]: I0130 13:26:29.606366 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b33729af-9ada-4dd3-bc99-4444fbe1b3d8-scripts\") pod \"nova-cell1-conductor-db-sync-zctpf\" (UID: \"b33729af-9ada-4dd3-bc99-4444fbe1b3d8\") " pod="openstack/nova-cell1-conductor-db-sync-zctpf"
Jan 30 13:26:29 crc kubenswrapper[5039]: I0130 13:26:29.606444 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b33729af-9ada-4dd3-bc99-4444fbe1b3d8-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-zctpf\" (UID: \"b33729af-9ada-4dd3-bc99-4444fbe1b3d8\") " pod="openstack/nova-cell1-conductor-db-sync-zctpf"
Jan 30 13:26:29 crc kubenswrapper[5039]: I0130 13:26:29.606583 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gp6ml\" (UniqueName: \"kubernetes.io/projected/b33729af-9ada-4dd3-bc99-4444fbe1b3d8-kube-api-access-gp6ml\") pod \"nova-cell1-conductor-db-sync-zctpf\" (UID: \"b33729af-9ada-4dd3-bc99-4444fbe1b3d8\") " pod="openstack/nova-cell1-conductor-db-sync-zctpf"
Jan 30 13:26:29 crc kubenswrapper[5039]: I0130 13:26:29.606615 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b33729af-9ada-4dd3-bc99-4444fbe1b3d8-config-data\") pod \"nova-cell1-conductor-db-sync-zctpf\" (UID: \"b33729af-9ada-4dd3-bc99-4444fbe1b3d8\") " pod="openstack/nova-cell1-conductor-db-sync-zctpf"
Jan 30 13:26:29 crc kubenswrapper[5039]: I0130 13:26:29.650717 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-k666b"]
Jan 30 13:26:29 crc kubenswrapper[5039]: I0130 13:26:29.708127 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gp6ml\" (UniqueName: \"kubernetes.io/projected/b33729af-9ada-4dd3-bc99-4444fbe1b3d8-kube-api-access-gp6ml\") pod \"nova-cell1-conductor-db-sync-zctpf\" (UID: \"b33729af-9ada-4dd3-bc99-4444fbe1b3d8\") " pod="openstack/nova-cell1-conductor-db-sync-zctpf"
Jan 30 13:26:29 crc kubenswrapper[5039]: I0130 13:26:29.708926 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b33729af-9ada-4dd3-bc99-4444fbe1b3d8-config-data\") pod \"nova-cell1-conductor-db-sync-zctpf\" (UID: \"b33729af-9ada-4dd3-bc99-4444fbe1b3d8\") " pod="openstack/nova-cell1-conductor-db-sync-zctpf"
Jan 30 13:26:29 crc kubenswrapper[5039]: I0130 13:26:29.713505 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b33729af-9ada-4dd3-bc99-4444fbe1b3d8-scripts\") pod \"nova-cell1-conductor-db-sync-zctpf\" (UID: \"b33729af-9ada-4dd3-bc99-4444fbe1b3d8\") " pod="openstack/nova-cell1-conductor-db-sync-zctpf"
Jan 30 13:26:29 crc kubenswrapper[5039]: I0130 13:26:29.713988 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b33729af-9ada-4dd3-bc99-4444fbe1b3d8-config-data\") pod \"nova-cell1-conductor-db-sync-zctpf\" (UID: \"b33729af-9ada-4dd3-bc99-4444fbe1b3d8\") " pod="openstack/nova-cell1-conductor-db-sync-zctpf"
Jan 30 13:26:29 crc kubenswrapper[5039]: I0130 13:26:29.714055 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b33729af-9ada-4dd3-bc99-4444fbe1b3d8-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-zctpf\" (UID: \"b33729af-9ada-4dd3-bc99-4444fbe1b3d8\") " pod="openstack/nova-cell1-conductor-db-sync-zctpf"
Jan 30 13:26:29 crc kubenswrapper[5039]: I0130 13:26:29.718095 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b33729af-9ada-4dd3-bc99-4444fbe1b3d8-scripts\") pod \"nova-cell1-conductor-db-sync-zctpf\" (UID: \"b33729af-9ada-4dd3-bc99-4444fbe1b3d8\") " pod="openstack/nova-cell1-conductor-db-sync-zctpf"
Jan 30 13:26:29 crc kubenswrapper[5039]: I0130 13:26:29.718658 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b33729af-9ada-4dd3-bc99-4444fbe1b3d8-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-zctpf\" (UID: \"b33729af-9ada-4dd3-bc99-4444fbe1b3d8\") " pod="openstack/nova-cell1-conductor-db-sync-zctpf"
Jan 30 13:26:29 crc kubenswrapper[5039]: I0130 13:26:29.733335 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gp6ml\" (UniqueName: \"kubernetes.io/projected/b33729af-9ada-4dd3-bc99-4444fbe1b3d8-kube-api-access-gp6ml\") pod \"nova-cell1-conductor-db-sync-zctpf\" (UID: \"b33729af-9ada-4dd3-bc99-4444fbe1b3d8\") " pod="openstack/nova-cell1-conductor-db-sync-zctpf"
Jan 30 13:26:29 crc kubenswrapper[5039]: I0130 13:26:29.803263 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 30 13:26:29 crc kubenswrapper[5039]: I0130 13:26:29.829272 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"022559da-3027-4afc-ac6d-545384ef449f","Type":"ContainerStarted","Data":"3ff4ccd8aaa697d5a1f8ebe9b67db4e13a645b644142dcd95f3ce3860b9a6f4c"}
Jan 30 13:26:29 crc kubenswrapper[5039]: I0130 13:26:29.830702 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2a48b8a3-8b16-40e1-ac55-42da14c30bd0","Type":"ContainerStarted","Data":"6d02825afd469ee8347e54b66fa93304a52cbca5507cccf703a5d4fa98bd24be"}
Jan 30 13:26:29 crc kubenswrapper[5039]: I0130 13:26:29.833481 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"09d17bda-c976-4bfb-96cc-24ae462b0e72","Type":"ContainerStarted","Data":"7f560ccfb5a760b5efc927b2cc96714a9642354fca2eb632be3627c3a05002d0"}
Jan 30 13:26:29 crc kubenswrapper[5039]: I0130 13:26:29.840678 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-x4sxn" event={"ID":"60e67b31-eb88-4ca5-a4b8-960fe900d68a","Type":"ContainerStarted","Data":"94a155d981c1474d4a0a50be2ec35401038cfd5f89687c48f78fc343aff89762"}
Jan 30 13:26:29 crc kubenswrapper[5039]: I0130 13:26:29.843718 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-k666b" event={"ID":"64ef9901-545b-40a6-84b0-cb1547ff069e","Type":"ContainerStarted","Data":"e377439dbc21dc2a1a80acc7def57d1cdb0245ec6918d6164a209411bf3828b9"}
Jan 30 13:26:29 crc kubenswrapper[5039]: I0130 13:26:29.859352 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-x4sxn" podStartSLOduration=2.859334837 podStartE2EDuration="2.859334837s" podCreationTimestamp="2026-01-30 13:26:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:26:29.85715576 +0000 UTC m=+1354.517836987" watchObservedRunningTime="2026-01-30 13:26:29.859334837 +0000 UTC m=+1354.520016064"
Jan 30 13:26:29 crc kubenswrapper[5039]: I0130 13:26:29.898086 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-zctpf"
Jan 30 13:26:30 crc kubenswrapper[5039]: I0130 13:26:30.376055 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zctpf"]
Jan 30 13:26:30 crc kubenswrapper[5039]: I0130 13:26:30.853827 5039 generic.go:334] "Generic (PLEG): container finished" podID="64ef9901-545b-40a6-84b0-cb1547ff069e" containerID="ae7ea10b829a9af7f7f69c44e63ee9b9ee20f9425809bc876355c34cfde2a954" exitCode=0
Jan 30 13:26:30 crc kubenswrapper[5039]: I0130 13:26:30.853914 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-k666b" event={"ID":"64ef9901-545b-40a6-84b0-cb1547ff069e","Type":"ContainerDied","Data":"ae7ea10b829a9af7f7f69c44e63ee9b9ee20f9425809bc876355c34cfde2a954"}
Jan 30 13:26:30 crc kubenswrapper[5039]: I0130 13:26:30.855764 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-zctpf" event={"ID":"b33729af-9ada-4dd3-bc99-4444fbe1b3d8","Type":"ContainerStarted","Data":"17dde7db2a1360af253727f865958748605ced2871e97eebeb0912f8c0cdd9b2"}
Jan 30 13:26:30 crc kubenswrapper[5039]: I0130 13:26:30.856847 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"646b9fca-b2a5-414b-9b06-3a78ad1df6b0","Type":"ContainerStarted","Data":"b436fdfc1099bd27ec4332adf57351d857bb70111f10d9522a0889ec544a5587"}
Jan 30 13:26:31 crc kubenswrapper[5039]: I0130 13:26:31.869625 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-k666b" event={"ID":"64ef9901-545b-40a6-84b0-cb1547ff069e","Type":"ContainerStarted","Data":"9dfd40654744902aafb2b0aa17d9dd91d3b3f7d7d7db7c8f87c4098ed34e0ada"}
Jan 30 13:26:31 crc kubenswrapper[5039]: I0130 13:26:31.869969 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bccf8f775-k666b"
Jan 30 13:26:31 crc kubenswrapper[5039]: I0130 13:26:31.872668 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-zctpf" event={"ID":"b33729af-9ada-4dd3-bc99-4444fbe1b3d8","Type":"ContainerStarted","Data":"f66f7f5299440f08b3d668413b72729d868b25170fd7cb89241fcca36903b724"}
Jan 30 13:26:31 crc kubenswrapper[5039]: I0130 13:26:31.904930 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-bccf8f775-k666b" podStartSLOduration=3.904913081 podStartE2EDuration="3.904913081s" podCreationTimestamp="2026-01-30 13:26:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:26:31.891060016 +0000 UTC m=+1356.551741273" watchObservedRunningTime="2026-01-30 13:26:31.904913081 +0000 UTC m=+1356.565594308"
Jan 30 13:26:31 crc kubenswrapper[5039]: I0130 13:26:31.932954 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-zctpf" podStartSLOduration=2.93293065 podStartE2EDuration="2.93293065s" podCreationTimestamp="2026-01-30 13:26:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:26:31.910794586 +0000 UTC m=+1356.571475823" watchObservedRunningTime="2026-01-30 13:26:31.93293065 +0000 UTC m=+1356.593611897"
Jan 30 13:26:32 crc kubenswrapper[5039]: I0130 13:26:32.377250 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 13:26:32 crc kubenswrapper[5039]: I0130 13:26:32.394484 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 30 13:26:36 crc kubenswrapper[5039]: I0130 13:26:36.928790 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"646b9fca-b2a5-414b-9b06-3a78ad1df6b0","Type":"ContainerStarted","Data":"0e6873ad1a8c11e049ffc8b580686975b0e1e02080e928419e954197d1ca170b"}
Jan 30 13:26:36 crc kubenswrapper[5039]: I0130 13:26:36.929507 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="646b9fca-b2a5-414b-9b06-3a78ad1df6b0" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://0e6873ad1a8c11e049ffc8b580686975b0e1e02080e928419e954197d1ca170b" gracePeriod=30
Jan 30 13:26:36 crc kubenswrapper[5039]: I0130 13:26:36.943433 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"09d17bda-c976-4bfb-96cc-24ae462b0e72","Type":"ContainerStarted","Data":"6295f2835a994cd2f686ebf445cd32bca84216419d7f87f3336d60bfc56aba32"}
Jan 30 13:26:36 crc kubenswrapper[5039]: I0130 13:26:36.945621 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"022559da-3027-4afc-ac6d-545384ef449f","Type":"ContainerStarted","Data":"ed5229a6f54aed6d873d95c99bc18bff498077141fd4581c742fead985f0d8b0"}
Jan 30 13:26:36 crc kubenswrapper[5039]: I0130 13:26:36.957253 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.636214605 podStartE2EDuration="8.957234939s" podCreationTimestamp="2026-01-30 13:26:28 +0000 UTC" firstStartedPulling="2026-01-30 13:26:29.818197682 +0000 UTC m=+1354.478878909" lastFinishedPulling="2026-01-30 13:26:36.139218016 +0000 UTC m=+1360.799899243" observedRunningTime="2026-01-30 13:26:36.955238766 +0000 UTC m=+1361.615920003" watchObservedRunningTime="2026-01-30 13:26:36.957234939 +0000 UTC m=+1361.617916166"
Jan 30 13:26:36 crc kubenswrapper[5039]: I0130 13:26:36.970337 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2a48b8a3-8b16-40e1-ac55-42da14c30bd0","Type":"ContainerStarted","Data":"f9f954e6f0855ce7cfd848d175f6be7c5a9e33348c0a72f53258a753a7e182b5"}
Jan 30 13:26:36 crc kubenswrapper[5039]: I0130 13:26:36.981570 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.391347977 podStartE2EDuration="8.98154343s" podCreationTimestamp="2026-01-30 13:26:28 +0000 UTC" firstStartedPulling="2026-01-30 13:26:29.549123656 +0000 UTC m=+1354.209804883" lastFinishedPulling="2026-01-30 13:26:36.139319109 +0000 UTC m=+1360.800000336" observedRunningTime="2026-01-30 13:26:36.970891529 +0000 UTC m=+1361.631572766" watchObservedRunningTime="2026-01-30 13:26:36.98154343 +0000 UTC m=+1361.642224667"
Jan 30 13:26:37 crc kubenswrapper[5039]: I0130 13:26:37.982877 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"09d17bda-c976-4bfb-96cc-24ae462b0e72","Type":"ContainerStarted","Data":"6419ca9dc95faccd4b98980ad75dbe23c4ab71bb6855f5556b00b68413b2b501"}
Jan 30 13:26:37 crc kubenswrapper[5039]: I0130 13:26:37.986745 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2a48b8a3-8b16-40e1-ac55-42da14c30bd0","Type":"ContainerStarted","Data":"7e6d0b5185c138956c8bcd151228b9f147d1e8be4234a04224ebf678418949cb"}
Jan 30 13:26:37 crc kubenswrapper[5039]: I0130 13:26:37.987203 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2a48b8a3-8b16-40e1-ac55-42da14c30bd0" containerName="nova-metadata-log" containerID="cri-o://f9f954e6f0855ce7cfd848d175f6be7c5a9e33348c0a72f53258a753a7e182b5" gracePeriod=30
Jan 30 13:26:37 crc kubenswrapper[5039]: I0130 13:26:37.987250 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2a48b8a3-8b16-40e1-ac55-42da14c30bd0" containerName="nova-metadata-metadata" containerID="cri-o://7e6d0b5185c138956c8bcd151228b9f147d1e8be4234a04224ebf678418949cb" gracePeriod=30
Jan 30 13:26:38 crc kubenswrapper[5039]: I0130 13:26:38.008466 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.102368678 podStartE2EDuration="10.008436381s" podCreationTimestamp="2026-01-30 13:26:28 +0000 UTC" firstStartedPulling="2026-01-30 13:26:29.233168314 +0000 UTC m=+1353.893849541" lastFinishedPulling="2026-01-30 13:26:36.139236027 +0000 UTC m=+1360.799917244" observedRunningTime="2026-01-30 13:26:38.006593082 +0000 UTC m=+1362.667274339" watchObservedRunningTime="2026-01-30 13:26:38.008436381 +0000 UTC m=+1362.669117628"
Jan 30 13:26:38 crc kubenswrapper[5039]: I0130 13:26:38.584548 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 30 13:26:38 crc kubenswrapper[5039]: I0130 13:26:38.584900 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 30 13:26:38 crc kubenswrapper[5039]: I0130 13:26:38.722301 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 30 13:26:38 crc kubenswrapper[5039]: I0130 13:26:38.722346 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 30 13:26:38 crc kubenswrapper[5039]: I0130 13:26:38.796912 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Jan 30 13:26:38 crc kubenswrapper[5039]: I0130 13:26:38.797160 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Jan 30 13:26:38 crc kubenswrapper[5039]: I0130 13:26:38.818144 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-bccf8f775-k666b"
Jan 30 13:26:38 crc kubenswrapper[5039]: I0130 13:26:38.831580 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Jan 30 13:26:38 crc kubenswrapper[5039]: I0130 13:26:38.845818 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.028905881 podStartE2EDuration="10.845801552s" podCreationTimestamp="2026-01-30 13:26:28 +0000 UTC" firstStartedPulling="2026-01-30 13:26:29.325695584 +0000 UTC m=+1353.986376811" lastFinishedPulling="2026-01-30 13:26:36.142591255 +0000 UTC m=+1360.803272482" observedRunningTime="2026-01-30 13:26:38.039174591 +0000 UTC m=+1362.699855808" watchObservedRunningTime="2026-01-30 13:26:38.845801552 +0000 UTC m=+1363.506482779"
Jan 30 13:26:38 crc kubenswrapper[5039]: I0130 13:26:38.868213 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Jan 30 13:26:38 crc kubenswrapper[5039]: I0130 13:26:38.908233 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-9cwmz"]
Jan 30 13:26:38 crc kubenswrapper[5039]: I0130 13:26:38.908504 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-9cwmz" podUID="3c796c5f-b2e9-4a42-af9c-14b03c99d213" containerName="dnsmasq-dns" containerID="cri-o://c3b580fe185414431912b163050e32f0ae4fa5e89bf828ec6117465fafa71189" gracePeriod=10
Jan 30 13:26:38 crc kubenswrapper[5039]: I0130 13:26:38.996672 5039 generic.go:334] "Generic (PLEG): container finished" podID="2a48b8a3-8b16-40e1-ac55-42da14c30bd0" containerID="f9f954e6f0855ce7cfd848d175f6be7c5a9e33348c0a72f53258a753a7e182b5" exitCode=143
Jan 30 13:26:38 crc kubenswrapper[5039]: I0130 13:26:38.996939 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2a48b8a3-8b16-40e1-ac55-42da14c30bd0","Type":"ContainerDied","Data":"f9f954e6f0855ce7cfd848d175f6be7c5a9e33348c0a72f53258a753a7e182b5"}
Jan 30 13:26:39 crc kubenswrapper[5039]: I0130 13:26:39.039536 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Jan 30 13:26:39 crc kubenswrapper[5039]: I0130 13:26:39.627243 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="09d17bda-c976-4bfb-96cc-24ae462b0e72" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.185:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 13:26:39 crc kubenswrapper[5039]: I0130 13:26:39.669203 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="09d17bda-c976-4bfb-96cc-24ae462b0e72" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.185:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 13:26:39 crc kubenswrapper[5039]: I0130 13:26:39.974675 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.006417 5039 generic.go:334] "Generic (PLEG): container finished" podID="2a48b8a3-8b16-40e1-ac55-42da14c30bd0" containerID="7e6d0b5185c138956c8bcd151228b9f147d1e8be4234a04224ebf678418949cb" exitCode=0
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.006483 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2a48b8a3-8b16-40e1-ac55-42da14c30bd0","Type":"ContainerDied","Data":"7e6d0b5185c138956c8bcd151228b9f147d1e8be4234a04224ebf678418949cb"}
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.006509 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2a48b8a3-8b16-40e1-ac55-42da14c30bd0","Type":"ContainerDied","Data":"6d02825afd469ee8347e54b66fa93304a52cbca5507cccf703a5d4fa98bd24be"}
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.006525 5039 scope.go:117] "RemoveContainer" containerID="7e6d0b5185c138956c8bcd151228b9f147d1e8be4234a04224ebf678418949cb"
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.006630 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.008895 5039 generic.go:334] "Generic (PLEG): container finished" podID="60e67b31-eb88-4ca5-a4b8-960fe900d68a" containerID="94a155d981c1474d4a0a50be2ec35401038cfd5f89687c48f78fc343aff89762" exitCode=0
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.008956 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-x4sxn" event={"ID":"60e67b31-eb88-4ca5-a4b8-960fe900d68a","Type":"ContainerDied","Data":"94a155d981c1474d4a0a50be2ec35401038cfd5f89687c48f78fc343aff89762"}
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.025442 5039 generic.go:334] "Generic (PLEG): container finished" podID="3c796c5f-b2e9-4a42-af9c-14b03c99d213" containerID="c3b580fe185414431912b163050e32f0ae4fa5e89bf828ec6117465fafa71189" exitCode=0
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.026355 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-9cwmz" event={"ID":"3c796c5f-b2e9-4a42-af9c-14b03c99d213","Type":"ContainerDied","Data":"c3b580fe185414431912b163050e32f0ae4fa5e89bf828ec6117465fafa71189"}
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.057910 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mb4g9\" (UniqueName: \"kubernetes.io/projected/2a48b8a3-8b16-40e1-ac55-42da14c30bd0-kube-api-access-mb4g9\") pod \"2a48b8a3-8b16-40e1-ac55-42da14c30bd0\" (UID: \"2a48b8a3-8b16-40e1-ac55-42da14c30bd0\") "
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.058068 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a48b8a3-8b16-40e1-ac55-42da14c30bd0-combined-ca-bundle\") pod \"2a48b8a3-8b16-40e1-ac55-42da14c30bd0\" (UID: \"2a48b8a3-8b16-40e1-ac55-42da14c30bd0\") "
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.058232 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a48b8a3-8b16-40e1-ac55-42da14c30bd0-logs\") pod \"2a48b8a3-8b16-40e1-ac55-42da14c30bd0\" (UID: \"2a48b8a3-8b16-40e1-ac55-42da14c30bd0\") "
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.058274 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a48b8a3-8b16-40e1-ac55-42da14c30bd0-config-data\") pod \"2a48b8a3-8b16-40e1-ac55-42da14c30bd0\" (UID: \"2a48b8a3-8b16-40e1-ac55-42da14c30bd0\") "
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.059250 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a48b8a3-8b16-40e1-ac55-42da14c30bd0-logs" (OuterVolumeSpecName: "logs") pod "2a48b8a3-8b16-40e1-ac55-42da14c30bd0" (UID: "2a48b8a3-8b16-40e1-ac55-42da14c30bd0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.064409 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a48b8a3-8b16-40e1-ac55-42da14c30bd0-kube-api-access-mb4g9" (OuterVolumeSpecName: "kube-api-access-mb4g9") pod "2a48b8a3-8b16-40e1-ac55-42da14c30bd0" (UID: "2a48b8a3-8b16-40e1-ac55-42da14c30bd0"). InnerVolumeSpecName "kube-api-access-mb4g9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.085729 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a48b8a3-8b16-40e1-ac55-42da14c30bd0-config-data" (OuterVolumeSpecName: "config-data") pod "2a48b8a3-8b16-40e1-ac55-42da14c30bd0" (UID: "2a48b8a3-8b16-40e1-ac55-42da14c30bd0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.094216 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a48b8a3-8b16-40e1-ac55-42da14c30bd0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2a48b8a3-8b16-40e1-ac55-42da14c30bd0" (UID: "2a48b8a3-8b16-40e1-ac55-42da14c30bd0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.162420 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a48b8a3-8b16-40e1-ac55-42da14c30bd0-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.162449 5039 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a48b8a3-8b16-40e1-ac55-42da14c30bd0-logs\") on node \"crc\" DevicePath \"\""
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.162459 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a48b8a3-8b16-40e1-ac55-42da14c30bd0-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.162467 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mb4g9\" (UniqueName: \"kubernetes.io/projected/2a48b8a3-8b16-40e1-ac55-42da14c30bd0-kube-api-access-mb4g9\") on node \"crc\" DevicePath \"\""
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.164718 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-9cwmz"
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.177304 5039 scope.go:117] "RemoveContainer" containerID="f9f954e6f0855ce7cfd848d175f6be7c5a9e33348c0a72f53258a753a7e182b5"
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.208669 5039 scope.go:117] "RemoveContainer" containerID="7e6d0b5185c138956c8bcd151228b9f147d1e8be4234a04224ebf678418949cb"
Jan 30 13:26:40 crc kubenswrapper[5039]: E0130 13:26:40.209172 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e6d0b5185c138956c8bcd151228b9f147d1e8be4234a04224ebf678418949cb\": container with ID starting with 7e6d0b5185c138956c8bcd151228b9f147d1e8be4234a04224ebf678418949cb not found: ID does not exist" containerID="7e6d0b5185c138956c8bcd151228b9f147d1e8be4234a04224ebf678418949cb"
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.209217 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e6d0b5185c138956c8bcd151228b9f147d1e8be4234a04224ebf678418949cb"} err="failed to get container status \"7e6d0b5185c138956c8bcd151228b9f147d1e8be4234a04224ebf678418949cb\": rpc error: code = NotFound desc = could not find container \"7e6d0b5185c138956c8bcd151228b9f147d1e8be4234a04224ebf678418949cb\": container with ID starting with 7e6d0b5185c138956c8bcd151228b9f147d1e8be4234a04224ebf678418949cb not found: ID does not exist"
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.209245 5039 scope.go:117] "RemoveContainer" containerID="f9f954e6f0855ce7cfd848d175f6be7c5a9e33348c0a72f53258a753a7e182b5"
Jan 30 13:26:40 crc kubenswrapper[5039]: E0130 13:26:40.213478 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9f954e6f0855ce7cfd848d175f6be7c5a9e33348c0a72f53258a753a7e182b5\": container with ID starting with f9f954e6f0855ce7cfd848d175f6be7c5a9e33348c0a72f53258a753a7e182b5 not found: ID does not exist" containerID="f9f954e6f0855ce7cfd848d175f6be7c5a9e33348c0a72f53258a753a7e182b5"
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.213507 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9f954e6f0855ce7cfd848d175f6be7c5a9e33348c0a72f53258a753a7e182b5"} err="failed to get container status \"f9f954e6f0855ce7cfd848d175f6be7c5a9e33348c0a72f53258a753a7e182b5\": rpc error: code = NotFound desc = could not find container \"f9f954e6f0855ce7cfd848d175f6be7c5a9e33348c0a72f53258a753a7e182b5\": container with ID starting with f9f954e6f0855ce7cfd848d175f6be7c5a9e33348c0a72f53258a753a7e182b5 not found: ID does not exist"
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.263831 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c796c5f-b2e9-4a42-af9c-14b03c99d213-config\") pod \"3c796c5f-b2e9-4a42-af9c-14b03c99d213\" (UID: \"3c796c5f-b2e9-4a42-af9c-14b03c99d213\") "
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.263906 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c796c5f-b2e9-4a42-af9c-14b03c99d213-dns-svc\") pod \"3c796c5f-b2e9-4a42-af9c-14b03c99d213\" (UID: \"3c796c5f-b2e9-4a42-af9c-14b03c99d213\") "
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.263925 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c796c5f-b2e9-4a42-af9c-14b03c99d213-ovsdbserver-nb\") pod \"3c796c5f-b2e9-4a42-af9c-14b03c99d213\" (UID: \"3c796c5f-b2e9-4a42-af9c-14b03c99d213\") "
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.263951 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzwc4\" (UniqueName: \"kubernetes.io/projected/3c796c5f-b2e9-4a42-af9c-14b03c99d213-kube-api-access-gzwc4\") pod \"3c796c5f-b2e9-4a42-af9c-14b03c99d213\" (UID: \"3c796c5f-b2e9-4a42-af9c-14b03c99d213\") "
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.263985 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3c796c5f-b2e9-4a42-af9c-14b03c99d213-dns-swift-storage-0\") pod \"3c796c5f-b2e9-4a42-af9c-14b03c99d213\" (UID: \"3c796c5f-b2e9-4a42-af9c-14b03c99d213\") "
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.264126 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c796c5f-b2e9-4a42-af9c-14b03c99d213-ovsdbserver-sb\") pod \"3c796c5f-b2e9-4a42-af9c-14b03c99d213\" (UID: \"3c796c5f-b2e9-4a42-af9c-14b03c99d213\") "
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.270202 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c796c5f-b2e9-4a42-af9c-14b03c99d213-kube-api-access-gzwc4" (OuterVolumeSpecName: "kube-api-access-gzwc4") pod "3c796c5f-b2e9-4a42-af9c-14b03c99d213" (UID: "3c796c5f-b2e9-4a42-af9c-14b03c99d213"). InnerVolumeSpecName "kube-api-access-gzwc4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.314538 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c796c5f-b2e9-4a42-af9c-14b03c99d213-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3c796c5f-b2e9-4a42-af9c-14b03c99d213" (UID: "3c796c5f-b2e9-4a42-af9c-14b03c99d213"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.327836 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c796c5f-b2e9-4a42-af9c-14b03c99d213-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3c796c5f-b2e9-4a42-af9c-14b03c99d213" (UID: "3c796c5f-b2e9-4a42-af9c-14b03c99d213"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.334313 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c796c5f-b2e9-4a42-af9c-14b03c99d213-config" (OuterVolumeSpecName: "config") pod "3c796c5f-b2e9-4a42-af9c-14b03c99d213" (UID: "3c796c5f-b2e9-4a42-af9c-14b03c99d213"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.340666 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.352522 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c796c5f-b2e9-4a42-af9c-14b03c99d213-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3c796c5f-b2e9-4a42-af9c-14b03c99d213" (UID: "3c796c5f-b2e9-4a42-af9c-14b03c99d213"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.356582 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c796c5f-b2e9-4a42-af9c-14b03c99d213-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3c796c5f-b2e9-4a42-af9c-14b03c99d213" (UID: "3c796c5f-b2e9-4a42-af9c-14b03c99d213"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.361992 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.365917 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c796c5f-b2e9-4a42-af9c-14b03c99d213-config\") on node \"crc\" DevicePath \"\""
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.365950 5039 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c796c5f-b2e9-4a42-af9c-14b03c99d213-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.365960 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c796c5f-b2e9-4a42-af9c-14b03c99d213-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.365970 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzwc4\" (UniqueName: \"kubernetes.io/projected/3c796c5f-b2e9-4a42-af9c-14b03c99d213-kube-api-access-gzwc4\") on node \"crc\" DevicePath \"\""
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.365979 5039 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3c796c5f-b2e9-4a42-af9c-14b03c99d213-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.365990 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c796c5f-b2e9-4a42-af9c-14b03c99d213-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.380082 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 13:26:40 crc kubenswrapper[5039]: E0130 13:26:40.380608 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c796c5f-b2e9-4a42-af9c-14b03c99d213" containerName="dnsmasq-dns"
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.380634 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c796c5f-b2e9-4a42-af9c-14b03c99d213" containerName="dnsmasq-dns"
Jan 30 13:26:40 crc kubenswrapper[5039]: E0130 13:26:40.380666 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a48b8a3-8b16-40e1-ac55-42da14c30bd0" containerName="nova-metadata-log"
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.380676 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a48b8a3-8b16-40e1-ac55-42da14c30bd0" containerName="nova-metadata-log"
Jan 30 13:26:40 crc kubenswrapper[5039]: E0130 13:26:40.380702 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a48b8a3-8b16-40e1-ac55-42da14c30bd0" containerName="nova-metadata-metadata"
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.380710 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a48b8a3-8b16-40e1-ac55-42da14c30bd0" containerName="nova-metadata-metadata"
Jan 30 13:26:40 crc kubenswrapper[5039]: E0130 13:26:40.380722 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c796c5f-b2e9-4a42-af9c-14b03c99d213" containerName="init"
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.380731 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c796c5f-b2e9-4a42-af9c-14b03c99d213" containerName="init"
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.380957 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c796c5f-b2e9-4a42-af9c-14b03c99d213" containerName="dnsmasq-dns"
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.380990 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a48b8a3-8b16-40e1-ac55-42da14c30bd0" containerName="nova-metadata-log"
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.381006 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a48b8a3-8b16-40e1-ac55-42da14c30bd0" containerName="nova-metadata-metadata"
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.382260 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.389735 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.389828 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.398236 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.467383 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be0ead48-6db3-49aa-9748-c6acb8b64848-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"be0ead48-6db3-49aa-9748-c6acb8b64848\") " pod="openstack/nova-metadata-0"
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.467437 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be0ead48-6db3-49aa-9748-c6acb8b64848-config-data\") pod \"nova-metadata-0\" (UID: \"be0ead48-6db3-49aa-9748-c6acb8b64848\") " pod="openstack/nova-metadata-0"
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.467469 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be0ead48-6db3-49aa-9748-c6acb8b64848-logs\") pod \"nova-metadata-0\" (UID: \"be0ead48-6db3-49aa-9748-c6acb8b64848\") " pod="openstack/nova-metadata-0"
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.467933 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wmqb\" (UniqueName: \"kubernetes.io/projected/be0ead48-6db3-49aa-9748-c6acb8b64848-kube-api-access-6wmqb\") pod \"nova-metadata-0\" (UID: \"be0ead48-6db3-49aa-9748-c6acb8b64848\") " pod="openstack/nova-metadata-0"
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.468054 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/be0ead48-6db3-49aa-9748-c6acb8b64848-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"be0ead48-6db3-49aa-9748-c6acb8b64848\") " pod="openstack/nova-metadata-0"
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.570256 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wmqb\" (UniqueName: \"kubernetes.io/projected/be0ead48-6db3-49aa-9748-c6acb8b64848-kube-api-access-6wmqb\") pod \"nova-metadata-0\" (UID: \"be0ead48-6db3-49aa-9748-c6acb8b64848\") " pod="openstack/nova-metadata-0"
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.570338 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/be0ead48-6db3-49aa-9748-c6acb8b64848-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"be0ead48-6db3-49aa-9748-c6acb8b64848\") " pod="openstack/nova-metadata-0"
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.570420 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be0ead48-6db3-49aa-9748-c6acb8b64848-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"be0ead48-6db3-49aa-9748-c6acb8b64848\") " pod="openstack/nova-metadata-0"
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.570466 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be0ead48-6db3-49aa-9748-c6acb8b64848-config-data\") pod \"nova-metadata-0\" (UID: \"be0ead48-6db3-49aa-9748-c6acb8b64848\") " pod="openstack/nova-metadata-0"
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.570498 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be0ead48-6db3-49aa-9748-c6acb8b64848-logs\") pod \"nova-metadata-0\" (UID: \"be0ead48-6db3-49aa-9748-c6acb8b64848\") " pod="openstack/nova-metadata-0"
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.571051 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be0ead48-6db3-49aa-9748-c6acb8b64848-logs\") pod \"nova-metadata-0\" (UID: \"be0ead48-6db3-49aa-9748-c6acb8b64848\") " pod="openstack/nova-metadata-0"
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.574761 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be0ead48-6db3-49aa-9748-c6acb8b64848-config-data\") pod \"nova-metadata-0\" (UID: \"be0ead48-6db3-49aa-9748-c6acb8b64848\") " pod="openstack/nova-metadata-0"
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.575882 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/be0ead48-6db3-49aa-9748-c6acb8b64848-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"be0ead48-6db3-49aa-9748-c6acb8b64848\") " pod="openstack/nova-metadata-0"
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.576208 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be0ead48-6db3-49aa-9748-c6acb8b64848-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"be0ead48-6db3-49aa-9748-c6acb8b64848\") " pod="openstack/nova-metadata-0"
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.593272 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wmqb\" (UniqueName: \"kubernetes.io/projected/be0ead48-6db3-49aa-9748-c6acb8b64848-kube-api-access-6wmqb\") pod \"nova-metadata-0\" (UID: \"be0ead48-6db3-49aa-9748-c6acb8b64848\") " pod="openstack/nova-metadata-0"
Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.710030 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 30 13:26:41 crc kubenswrapper[5039]: I0130 13:26:41.065933 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-9cwmz"
Jan 30 13:26:41 crc kubenswrapper[5039]: I0130 13:26:41.065923 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-9cwmz" event={"ID":"3c796c5f-b2e9-4a42-af9c-14b03c99d213","Type":"ContainerDied","Data":"672a2bc9b2cbef8c4f5f9d5d720d9b3706452c9186a4c6982657beea9e0a0cbb"}
Jan 30 13:26:41 crc kubenswrapper[5039]: I0130 13:26:41.067111 5039 scope.go:117] "RemoveContainer" containerID="c3b580fe185414431912b163050e32f0ae4fa5e89bf828ec6117465fafa71189"
Jan 30 13:26:41 crc kubenswrapper[5039]: I0130 13:26:41.123406 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-9cwmz"]
Jan 30 13:26:41 crc kubenswrapper[5039]: I0130 13:26:41.145219 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-9cwmz"]
Jan 30 13:26:41 crc kubenswrapper[5039]: I0130 13:26:41.183259 5039 scope.go:117] "RemoveContainer" containerID="7eb66e170ea619f45e1f95db5174583200d625fcd2a905531b8ebbc60d5d441b"
Jan 30 13:26:41 crc kubenswrapper[5039]: I0130 13:26:41.265549 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 13:26:41 crc kubenswrapper[5039]: W0130 13:26:41.478782 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe0ead48_6db3_49aa_9748_c6acb8b64848.slice/crio-cb87595987baf41683166681e5b0636bbe8ae3a9ee824b3689176bf8578b2cbf WatchSource:0}: Error finding container cb87595987baf41683166681e5b0636bbe8ae3a9ee824b3689176bf8578b2cbf: Status 404 returned error can't find the container with id cb87595987baf41683166681e5b0636bbe8ae3a9ee824b3689176bf8578b2cbf
Jan 30 13:26:41 crc kubenswrapper[5039]: I0130 13:26:41.635520 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-x4sxn" Jan 30 13:26:41 crc kubenswrapper[5039]: I0130 13:26:41.692794 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60e67b31-eb88-4ca5-a4b8-960fe900d68a-config-data\") pod \"60e67b31-eb88-4ca5-a4b8-960fe900d68a\" (UID: \"60e67b31-eb88-4ca5-a4b8-960fe900d68a\") " Jan 30 13:26:41 crc kubenswrapper[5039]: I0130 13:26:41.692956 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60e67b31-eb88-4ca5-a4b8-960fe900d68a-combined-ca-bundle\") pod \"60e67b31-eb88-4ca5-a4b8-960fe900d68a\" (UID: \"60e67b31-eb88-4ca5-a4b8-960fe900d68a\") " Jan 30 13:26:41 crc kubenswrapper[5039]: I0130 13:26:41.693061 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60e67b31-eb88-4ca5-a4b8-960fe900d68a-scripts\") pod \"60e67b31-eb88-4ca5-a4b8-960fe900d68a\" (UID: \"60e67b31-eb88-4ca5-a4b8-960fe900d68a\") " Jan 30 13:26:41 crc kubenswrapper[5039]: I0130 13:26:41.693109 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnhw5\" (UniqueName: \"kubernetes.io/projected/60e67b31-eb88-4ca5-a4b8-960fe900d68a-kube-api-access-lnhw5\") pod \"60e67b31-eb88-4ca5-a4b8-960fe900d68a\" (UID: \"60e67b31-eb88-4ca5-a4b8-960fe900d68a\") " Jan 30 13:26:41 crc kubenswrapper[5039]: I0130 13:26:41.703256 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60e67b31-eb88-4ca5-a4b8-960fe900d68a-kube-api-access-lnhw5" (OuterVolumeSpecName: "kube-api-access-lnhw5") pod "60e67b31-eb88-4ca5-a4b8-960fe900d68a" (UID: "60e67b31-eb88-4ca5-a4b8-960fe900d68a"). InnerVolumeSpecName "kube-api-access-lnhw5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:26:41 crc kubenswrapper[5039]: I0130 13:26:41.703997 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60e67b31-eb88-4ca5-a4b8-960fe900d68a-scripts" (OuterVolumeSpecName: "scripts") pod "60e67b31-eb88-4ca5-a4b8-960fe900d68a" (UID: "60e67b31-eb88-4ca5-a4b8-960fe900d68a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:26:41 crc kubenswrapper[5039]: I0130 13:26:41.738216 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60e67b31-eb88-4ca5-a4b8-960fe900d68a-config-data" (OuterVolumeSpecName: "config-data") pod "60e67b31-eb88-4ca5-a4b8-960fe900d68a" (UID: "60e67b31-eb88-4ca5-a4b8-960fe900d68a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:26:41 crc kubenswrapper[5039]: I0130 13:26:41.756776 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60e67b31-eb88-4ca5-a4b8-960fe900d68a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "60e67b31-eb88-4ca5-a4b8-960fe900d68a" (UID: "60e67b31-eb88-4ca5-a4b8-960fe900d68a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:26:41 crc kubenswrapper[5039]: I0130 13:26:41.796046 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60e67b31-eb88-4ca5-a4b8-960fe900d68a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:41 crc kubenswrapper[5039]: I0130 13:26:41.796394 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60e67b31-eb88-4ca5-a4b8-960fe900d68a-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:41 crc kubenswrapper[5039]: I0130 13:26:41.796483 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnhw5\" (UniqueName: \"kubernetes.io/projected/60e67b31-eb88-4ca5-a4b8-960fe900d68a-kube-api-access-lnhw5\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:41 crc kubenswrapper[5039]: I0130 13:26:41.796589 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60e67b31-eb88-4ca5-a4b8-960fe900d68a-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:42 crc kubenswrapper[5039]: I0130 13:26:42.079376 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"be0ead48-6db3-49aa-9748-c6acb8b64848","Type":"ContainerStarted","Data":"da4de257c369ddb63d6cb3406edc3fd62cc7909bc2dfb3656b27fab34fbc7095"} Jan 30 13:26:42 crc kubenswrapper[5039]: I0130 13:26:42.079438 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"be0ead48-6db3-49aa-9748-c6acb8b64848","Type":"ContainerStarted","Data":"9b54088d7a214e8bdd56581aea33ceab46d47d5d4734ba22ff76c94f24d10064"} Jan 30 13:26:42 crc kubenswrapper[5039]: I0130 13:26:42.079458 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"be0ead48-6db3-49aa-9748-c6acb8b64848","Type":"ContainerStarted","Data":"cb87595987baf41683166681e5b0636bbe8ae3a9ee824b3689176bf8578b2cbf"} Jan 30 13:26:42 crc kubenswrapper[5039]: I0130 13:26:42.083003 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-x4sxn" event={"ID":"60e67b31-eb88-4ca5-a4b8-960fe900d68a","Type":"ContainerDied","Data":"97b2ac6fc59321b06d4495fa3b5a4e9326b491e50db00310ebde01b4dddd90c7"} Jan 30 13:26:42 crc kubenswrapper[5039]: I0130 13:26:42.083542 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97b2ac6fc59321b06d4495fa3b5a4e9326b491e50db00310ebde01b4dddd90c7" Jan 30 13:26:42 crc kubenswrapper[5039]: I0130 13:26:42.083051 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-x4sxn" Jan 30 13:26:42 crc kubenswrapper[5039]: I0130 13:26:42.114733 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a48b8a3-8b16-40e1-ac55-42da14c30bd0" path="/var/lib/kubelet/pods/2a48b8a3-8b16-40e1-ac55-42da14c30bd0/volumes" Jan 30 13:26:42 crc kubenswrapper[5039]: I0130 13:26:42.120407 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c796c5f-b2e9-4a42-af9c-14b03c99d213" path="/var/lib/kubelet/pods/3c796c5f-b2e9-4a42-af9c-14b03c99d213/volumes" Jan 30 13:26:42 crc kubenswrapper[5039]: I0130 13:26:42.243520 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 13:26:42 crc kubenswrapper[5039]: I0130 13:26:42.243777 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="09d17bda-c976-4bfb-96cc-24ae462b0e72" containerName="nova-api-log" containerID="cri-o://6295f2835a994cd2f686ebf445cd32bca84216419d7f87f3336d60bfc56aba32" gracePeriod=30 Jan 30 13:26:42 crc kubenswrapper[5039]: I0130 13:26:42.243874 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="09d17bda-c976-4bfb-96cc-24ae462b0e72" containerName="nova-api-api" containerID="cri-o://6419ca9dc95faccd4b98980ad75dbe23c4ab71bb6855f5556b00b68413b2b501" gracePeriod=30 Jan 30 13:26:42 crc kubenswrapper[5039]: I0130 13:26:42.274617 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 13:26:42 crc kubenswrapper[5039]: I0130 13:26:42.274878 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="022559da-3027-4afc-ac6d-545384ef449f" containerName="nova-scheduler-scheduler" containerID="cri-o://ed5229a6f54aed6d873d95c99bc18bff498077141fd4581c742fead985f0d8b0" gracePeriod=30 Jan 30 13:26:42 crc kubenswrapper[5039]: I0130 13:26:42.287166 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 13:26:43 crc kubenswrapper[5039]: I0130 13:26:43.095688 5039 generic.go:334] "Generic (PLEG): container finished" podID="09d17bda-c976-4bfb-96cc-24ae462b0e72" containerID="6295f2835a994cd2f686ebf445cd32bca84216419d7f87f3336d60bfc56aba32" exitCode=143 Jan 30 13:26:43 crc kubenswrapper[5039]: I0130 13:26:43.095747 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"09d17bda-c976-4bfb-96cc-24ae462b0e72","Type":"ContainerDied","Data":"6295f2835a994cd2f686ebf445cd32bca84216419d7f87f3336d60bfc56aba32"} Jan 30 13:26:43 crc kubenswrapper[5039]: I0130 13:26:43.123132 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.12311249 podStartE2EDuration="3.12311249s" podCreationTimestamp="2026-01-30 13:26:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:26:43.115213302 +0000 UTC m=+1367.775894549" watchObservedRunningTime="2026-01-30 13:26:43.12311249 +0000 UTC m=+1367.783793727" Jan 30 13:26:43 crc kubenswrapper[5039]: I0130 13:26:43.574701 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 13:26:43 crc kubenswrapper[5039]: I0130 13:26:43.640060 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/022559da-3027-4afc-ac6d-545384ef449f-config-data\") pod \"022559da-3027-4afc-ac6d-545384ef449f\" (UID: \"022559da-3027-4afc-ac6d-545384ef449f\") " Jan 30 13:26:43 crc kubenswrapper[5039]: I0130 13:26:43.640118 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/022559da-3027-4afc-ac6d-545384ef449f-combined-ca-bundle\") pod \"022559da-3027-4afc-ac6d-545384ef449f\" (UID: \"022559da-3027-4afc-ac6d-545384ef449f\") " Jan 30 13:26:43 crc kubenswrapper[5039]: I0130 13:26:43.640174 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pckpc\" (UniqueName: \"kubernetes.io/projected/022559da-3027-4afc-ac6d-545384ef449f-kube-api-access-pckpc\") pod \"022559da-3027-4afc-ac6d-545384ef449f\" (UID: \"022559da-3027-4afc-ac6d-545384ef449f\") " Jan 30 13:26:43 crc kubenswrapper[5039]: I0130 13:26:43.645911 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/022559da-3027-4afc-ac6d-545384ef449f-kube-api-access-pckpc" (OuterVolumeSpecName: "kube-api-access-pckpc") pod "022559da-3027-4afc-ac6d-545384ef449f" (UID: "022559da-3027-4afc-ac6d-545384ef449f"). InnerVolumeSpecName "kube-api-access-pckpc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:26:43 crc kubenswrapper[5039]: I0130 13:26:43.671304 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/022559da-3027-4afc-ac6d-545384ef449f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "022559da-3027-4afc-ac6d-545384ef449f" (UID: "022559da-3027-4afc-ac6d-545384ef449f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:26:43 crc kubenswrapper[5039]: I0130 13:26:43.671762 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/022559da-3027-4afc-ac6d-545384ef449f-config-data" (OuterVolumeSpecName: "config-data") pod "022559da-3027-4afc-ac6d-545384ef449f" (UID: "022559da-3027-4afc-ac6d-545384ef449f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:26:43 crc kubenswrapper[5039]: I0130 13:26:43.742510 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/022559da-3027-4afc-ac6d-545384ef449f-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:43 crc kubenswrapper[5039]: I0130 13:26:43.742558 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/022559da-3027-4afc-ac6d-545384ef449f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:43 crc kubenswrapper[5039]: I0130 13:26:43.742573 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pckpc\" (UniqueName: \"kubernetes.io/projected/022559da-3027-4afc-ac6d-545384ef449f-kube-api-access-pckpc\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.106412 5039 generic.go:334] "Generic (PLEG): container finished" podID="022559da-3027-4afc-ac6d-545384ef449f" containerID="ed5229a6f54aed6d873d95c99bc18bff498077141fd4581c742fead985f0d8b0" exitCode=0 Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.106471 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.106509 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"022559da-3027-4afc-ac6d-545384ef449f","Type":"ContainerDied","Data":"ed5229a6f54aed6d873d95c99bc18bff498077141fd4581c742fead985f0d8b0"} Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.106547 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"022559da-3027-4afc-ac6d-545384ef449f","Type":"ContainerDied","Data":"3ff4ccd8aaa697d5a1f8ebe9b67db4e13a645b644142dcd95f3ce3860b9a6f4c"} Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.106564 5039 scope.go:117] "RemoveContainer" containerID="ed5229a6f54aed6d873d95c99bc18bff498077141fd4581c742fead985f0d8b0" Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.107099 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="be0ead48-6db3-49aa-9748-c6acb8b64848" containerName="nova-metadata-log" containerID="cri-o://9b54088d7a214e8bdd56581aea33ceab46d47d5d4734ba22ff76c94f24d10064" gracePeriod=30 Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.107275 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="be0ead48-6db3-49aa-9748-c6acb8b64848" containerName="nova-metadata-metadata" containerID="cri-o://da4de257c369ddb63d6cb3406edc3fd62cc7909bc2dfb3656b27fab34fbc7095" gracePeriod=30 Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.130805 5039 scope.go:117] "RemoveContainer" containerID="ed5229a6f54aed6d873d95c99bc18bff498077141fd4581c742fead985f0d8b0" Jan 30 13:26:44 crc kubenswrapper[5039]: E0130 13:26:44.131556 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed5229a6f54aed6d873d95c99bc18bff498077141fd4581c742fead985f0d8b0\": container with ID starting with ed5229a6f54aed6d873d95c99bc18bff498077141fd4581c742fead985f0d8b0 not found: ID does not exist" containerID="ed5229a6f54aed6d873d95c99bc18bff498077141fd4581c742fead985f0d8b0" Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.131612 5039 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"ed5229a6f54aed6d873d95c99bc18bff498077141fd4581c742fead985f0d8b0"} err="failed to get container status \"ed5229a6f54aed6d873d95c99bc18bff498077141fd4581c742fead985f0d8b0\": rpc error: code = NotFound desc = could not find container \"ed5229a6f54aed6d873d95c99bc18bff498077141fd4581c742fead985f0d8b0\": container with ID starting with ed5229a6f54aed6d873d95c99bc18bff498077141fd4581c742fead985f0d8b0 not found: ID does not exist" Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.162133 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.171587 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.183068 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 13:26:44 crc kubenswrapper[5039]: E0130 13:26:44.183574 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="022559da-3027-4afc-ac6d-545384ef449f" containerName="nova-scheduler-scheduler" Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.183595 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="022559da-3027-4afc-ac6d-545384ef449f" containerName="nova-scheduler-scheduler" Jan 30 13:26:44 crc kubenswrapper[5039]: E0130 13:26:44.183619 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60e67b31-eb88-4ca5-a4b8-960fe900d68a" containerName="nova-manage" Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.183626 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="60e67b31-eb88-4ca5-a4b8-960fe900d68a" containerName="nova-manage" Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.183800 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="022559da-3027-4afc-ac6d-545384ef449f" containerName="nova-scheduler-scheduler" Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.183826 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="60e67b31-eb88-4ca5-a4b8-960fe900d68a" containerName="nova-manage" Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.184426 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.189303 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.196323 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.250105 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2m2d\" (UniqueName: \"kubernetes.io/projected/9b2c4ea7-fb7f-401c-84c3-13cb59dec51d-kube-api-access-x2m2d\") pod \"nova-scheduler-0\" (UID: \"9b2c4ea7-fb7f-401c-84c3-13cb59dec51d\") " pod="openstack/nova-scheduler-0" Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.250162 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b2c4ea7-fb7f-401c-84c3-13cb59dec51d-config-data\") pod \"nova-scheduler-0\" (UID: \"9b2c4ea7-fb7f-401c-84c3-13cb59dec51d\") " pod="openstack/nova-scheduler-0" Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.250354 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b2c4ea7-fb7f-401c-84c3-13cb59dec51d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9b2c4ea7-fb7f-401c-84c3-13cb59dec51d\") " pod="openstack/nova-scheduler-0" Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.352164 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2m2d\" (UniqueName: \"kubernetes.io/projected/9b2c4ea7-fb7f-401c-84c3-13cb59dec51d-kube-api-access-x2m2d\") pod \"nova-scheduler-0\" (UID: \"9b2c4ea7-fb7f-401c-84c3-13cb59dec51d\") " pod="openstack/nova-scheduler-0" Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.352227 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b2c4ea7-fb7f-401c-84c3-13cb59dec51d-config-data\") pod \"nova-scheduler-0\" (UID: \"9b2c4ea7-fb7f-401c-84c3-13cb59dec51d\") " pod="openstack/nova-scheduler-0" Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.352369 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b2c4ea7-fb7f-401c-84c3-13cb59dec51d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9b2c4ea7-fb7f-401c-84c3-13cb59dec51d\") " pod="openstack/nova-scheduler-0" Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.358221 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b2c4ea7-fb7f-401c-84c3-13cb59dec51d-config-data\") pod \"nova-scheduler-0\" (UID: \"9b2c4ea7-fb7f-401c-84c3-13cb59dec51d\") " pod="openstack/nova-scheduler-0" Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.358280 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b2c4ea7-fb7f-401c-84c3-13cb59dec51d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9b2c4ea7-fb7f-401c-84c3-13cb59dec51d\") " pod="openstack/nova-scheduler-0" Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.374651 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2m2d\" (UniqueName: 
\"kubernetes.io/projected/9b2c4ea7-fb7f-401c-84c3-13cb59dec51d-kube-api-access-x2m2d\") pod \"nova-scheduler-0\" (UID: \"9b2c4ea7-fb7f-401c-84c3-13cb59dec51d\") " pod="openstack/nova-scheduler-0" Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.507265 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.683720 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.759935 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be0ead48-6db3-49aa-9748-c6acb8b64848-combined-ca-bundle\") pod \"be0ead48-6db3-49aa-9748-c6acb8b64848\" (UID: \"be0ead48-6db3-49aa-9748-c6acb8b64848\") " Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.760055 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be0ead48-6db3-49aa-9748-c6acb8b64848-logs\") pod \"be0ead48-6db3-49aa-9748-c6acb8b64848\" (UID: \"be0ead48-6db3-49aa-9748-c6acb8b64848\") " Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.760075 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wmqb\" (UniqueName: \"kubernetes.io/projected/be0ead48-6db3-49aa-9748-c6acb8b64848-kube-api-access-6wmqb\") pod \"be0ead48-6db3-49aa-9748-c6acb8b64848\" (UID: \"be0ead48-6db3-49aa-9748-c6acb8b64848\") " Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.760172 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be0ead48-6db3-49aa-9748-c6acb8b64848-config-data\") pod \"be0ead48-6db3-49aa-9748-c6acb8b64848\" (UID: \"be0ead48-6db3-49aa-9748-c6acb8b64848\") " Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.760202 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/be0ead48-6db3-49aa-9748-c6acb8b64848-nova-metadata-tls-certs\") pod \"be0ead48-6db3-49aa-9748-c6acb8b64848\" (UID: \"be0ead48-6db3-49aa-9748-c6acb8b64848\") " Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.761451 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be0ead48-6db3-49aa-9748-c6acb8b64848-logs" (OuterVolumeSpecName: "logs") pod "be0ead48-6db3-49aa-9748-c6acb8b64848" (UID: "be0ead48-6db3-49aa-9748-c6acb8b64848"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.775253 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be0ead48-6db3-49aa-9748-c6acb8b64848-kube-api-access-6wmqb" (OuterVolumeSpecName: "kube-api-access-6wmqb") pod "be0ead48-6db3-49aa-9748-c6acb8b64848" (UID: "be0ead48-6db3-49aa-9748-c6acb8b64848"). InnerVolumeSpecName "kube-api-access-6wmqb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.848025 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be0ead48-6db3-49aa-9748-c6acb8b64848-config-data" (OuterVolumeSpecName: "config-data") pod "be0ead48-6db3-49aa-9748-c6acb8b64848" (UID: "be0ead48-6db3-49aa-9748-c6acb8b64848"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.862468 5039 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be0ead48-6db3-49aa-9748-c6acb8b64848-logs\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.862498 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wmqb\" (UniqueName: \"kubernetes.io/projected/be0ead48-6db3-49aa-9748-c6acb8b64848-kube-api-access-6wmqb\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.862508 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be0ead48-6db3-49aa-9748-c6acb8b64848-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.862628 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be0ead48-6db3-49aa-9748-c6acb8b64848-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "be0ead48-6db3-49aa-9748-c6acb8b64848" (UID: "be0ead48-6db3-49aa-9748-c6acb8b64848"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.923173 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be0ead48-6db3-49aa-9748-c6acb8b64848-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "be0ead48-6db3-49aa-9748-c6acb8b64848" (UID: "be0ead48-6db3-49aa-9748-c6acb8b64848"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.944682 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6578955fd5-9cwmz" podUID="3c796c5f-b2e9-4a42-af9c-14b03c99d213" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.160:5353: i/o timeout" Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.965368 5039 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/be0ead48-6db3-49aa-9748-c6acb8b64848-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.965404 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be0ead48-6db3-49aa-9748-c6acb8b64848-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.999026 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 13:26:44 crc kubenswrapper[5039]: W0130 13:26:44.999809 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b2c4ea7_fb7f_401c_84c3_13cb59dec51d.slice/crio-5bad18c08604d0cf37787a3aa7f2ddf3673f454632c9a7a6807f97e2ba876c44 WatchSource:0}: Error finding container 5bad18c08604d0cf37787a3aa7f2ddf3673f454632c9a7a6807f97e2ba876c44: Status 404 returned error can't find the container with id 5bad18c08604d0cf37787a3aa7f2ddf3673f454632c9a7a6807f97e2ba876c44 Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.117945 5039 generic.go:334] "Generic (PLEG): container finished" podID="be0ead48-6db3-49aa-9748-c6acb8b64848" 
containerID="da4de257c369ddb63d6cb3406edc3fd62cc7909bc2dfb3656b27fab34fbc7095" exitCode=0 Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.117983 5039 generic.go:334] "Generic (PLEG): container finished" podID="be0ead48-6db3-49aa-9748-c6acb8b64848" containerID="9b54088d7a214e8bdd56581aea33ceab46d47d5d4734ba22ff76c94f24d10064" exitCode=143 Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.117993 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"be0ead48-6db3-49aa-9748-c6acb8b64848","Type":"ContainerDied","Data":"da4de257c369ddb63d6cb3406edc3fd62cc7909bc2dfb3656b27fab34fbc7095"} Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.118048 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"be0ead48-6db3-49aa-9748-c6acb8b64848","Type":"ContainerDied","Data":"9b54088d7a214e8bdd56581aea33ceab46d47d5d4734ba22ff76c94f24d10064"} Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.118005 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.118059 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"be0ead48-6db3-49aa-9748-c6acb8b64848","Type":"ContainerDied","Data":"cb87595987baf41683166681e5b0636bbe8ae3a9ee824b3689176bf8578b2cbf"} Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.118063 5039 scope.go:117] "RemoveContainer" containerID="da4de257c369ddb63d6cb3406edc3fd62cc7909bc2dfb3656b27fab34fbc7095" Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.119289 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9b2c4ea7-fb7f-401c-84c3-13cb59dec51d","Type":"ContainerStarted","Data":"5bad18c08604d0cf37787a3aa7f2ddf3673f454632c9a7a6807f97e2ba876c44"} Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.136903 5039 scope.go:117] "RemoveContainer" containerID="9b54088d7a214e8bdd56581aea33ceab46d47d5d4734ba22ff76c94f24d10064" Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.159183 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.159681 5039 scope.go:117] "RemoveContainer" containerID="da4de257c369ddb63d6cb3406edc3fd62cc7909bc2dfb3656b27fab34fbc7095" Jan 30 13:26:45 crc kubenswrapper[5039]: E0130 13:26:45.160165 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da4de257c369ddb63d6cb3406edc3fd62cc7909bc2dfb3656b27fab34fbc7095\": container with ID starting with da4de257c369ddb63d6cb3406edc3fd62cc7909bc2dfb3656b27fab34fbc7095 not found: ID does not exist" containerID="da4de257c369ddb63d6cb3406edc3fd62cc7909bc2dfb3656b27fab34fbc7095" Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.160200 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da4de257c369ddb63d6cb3406edc3fd62cc7909bc2dfb3656b27fab34fbc7095"} err="failed to get container status \"da4de257c369ddb63d6cb3406edc3fd62cc7909bc2dfb3656b27fab34fbc7095\": rpc error: code = NotFound desc = could not find container \"da4de257c369ddb63d6cb3406edc3fd62cc7909bc2dfb3656b27fab34fbc7095\": container with ID starting with da4de257c369ddb63d6cb3406edc3fd62cc7909bc2dfb3656b27fab34fbc7095 not found: ID does not exist" Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.160223 5039 scope.go:117] 
"RemoveContainer" containerID="9b54088d7a214e8bdd56581aea33ceab46d47d5d4734ba22ff76c94f24d10064" Jan 30 13:26:45 crc kubenswrapper[5039]: E0130 13:26:45.160562 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b54088d7a214e8bdd56581aea33ceab46d47d5d4734ba22ff76c94f24d10064\": container with ID starting with 9b54088d7a214e8bdd56581aea33ceab46d47d5d4734ba22ff76c94f24d10064 not found: ID does not exist" containerID="9b54088d7a214e8bdd56581aea33ceab46d47d5d4734ba22ff76c94f24d10064" Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.160595 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b54088d7a214e8bdd56581aea33ceab46d47d5d4734ba22ff76c94f24d10064"} err="failed to get container status \"9b54088d7a214e8bdd56581aea33ceab46d47d5d4734ba22ff76c94f24d10064\": rpc error: code = NotFound desc = could not find container \"9b54088d7a214e8bdd56581aea33ceab46d47d5d4734ba22ff76c94f24d10064\": container with ID starting with 9b54088d7a214e8bdd56581aea33ceab46d47d5d4734ba22ff76c94f24d10064 not found: ID does not exist" Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.160615 5039 scope.go:117] "RemoveContainer" containerID="da4de257c369ddb63d6cb3406edc3fd62cc7909bc2dfb3656b27fab34fbc7095" Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.160932 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da4de257c369ddb63d6cb3406edc3fd62cc7909bc2dfb3656b27fab34fbc7095"} err="failed to get container status \"da4de257c369ddb63d6cb3406edc3fd62cc7909bc2dfb3656b27fab34fbc7095\": rpc error: code = NotFound desc = could not find container \"da4de257c369ddb63d6cb3406edc3fd62cc7909bc2dfb3656b27fab34fbc7095\": container with ID starting with da4de257c369ddb63d6cb3406edc3fd62cc7909bc2dfb3656b27fab34fbc7095 not found: ID does not exist" Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.160953 5039 scope.go:117] "RemoveContainer" containerID="9b54088d7a214e8bdd56581aea33ceab46d47d5d4734ba22ff76c94f24d10064" Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.161162 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b54088d7a214e8bdd56581aea33ceab46d47d5d4734ba22ff76c94f24d10064"} err="failed to get container status \"9b54088d7a214e8bdd56581aea33ceab46d47d5d4734ba22ff76c94f24d10064\": rpc error: code = NotFound desc = could not find container \"9b54088d7a214e8bdd56581aea33ceab46d47d5d4734ba22ff76c94f24d10064\": container with ID starting with 9b54088d7a214e8bdd56581aea33ceab46d47d5d4734ba22ff76c94f24d10064 not found: ID does not exist" Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.171844 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.187054 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 13:26:45 crc kubenswrapper[5039]: E0130 13:26:45.187603 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be0ead48-6db3-49aa-9748-c6acb8b64848" containerName="nova-metadata-metadata" Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.187623 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="be0ead48-6db3-49aa-9748-c6acb8b64848" containerName="nova-metadata-metadata" Jan 30 13:26:45 crc kubenswrapper[5039]: E0130 13:26:45.187641 5039 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="be0ead48-6db3-49aa-9748-c6acb8b64848" containerName="nova-metadata-log" Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.187651 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="be0ead48-6db3-49aa-9748-c6acb8b64848" containerName="nova-metadata-log" Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.187856 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="be0ead48-6db3-49aa-9748-c6acb8b64848" containerName="nova-metadata-metadata" Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.187884 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="be0ead48-6db3-49aa-9748-c6acb8b64848" containerName="nova-metadata-log" Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.189243 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.192306 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.192398 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.198495 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.270609 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fb54f17-1620-4d7f-9fef-b9be9740a158-config-data\") pod \"nova-metadata-0\" (UID: \"4fb54f17-1620-4d7f-9fef-b9be9740a158\") " pod="openstack/nova-metadata-0" Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.270808 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qf8f\" (UniqueName: \"kubernetes.io/projected/4fb54f17-1620-4d7f-9fef-b9be9740a158-kube-api-access-9qf8f\") pod \"nova-metadata-0\" (UID: \"4fb54f17-1620-4d7f-9fef-b9be9740a158\") " pod="openstack/nova-metadata-0" Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.270904 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fb54f17-1620-4d7f-9fef-b9be9740a158-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4fb54f17-1620-4d7f-9fef-b9be9740a158\") " pod="openstack/nova-metadata-0" Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.270981 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fb54f17-1620-4d7f-9fef-b9be9740a158-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4fb54f17-1620-4d7f-9fef-b9be9740a158\") " pod="openstack/nova-metadata-0" Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.271086 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4fb54f17-1620-4d7f-9fef-b9be9740a158-logs\") pod \"nova-metadata-0\" (UID: \"4fb54f17-1620-4d7f-9fef-b9be9740a158\") " pod="openstack/nova-metadata-0" Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.373294 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fb54f17-1620-4d7f-9fef-b9be9740a158-nova-metadata-tls-certs\") pod 
\"nova-metadata-0\" (UID: \"4fb54f17-1620-4d7f-9fef-b9be9740a158\") " pod="openstack/nova-metadata-0" Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.373402 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4fb54f17-1620-4d7f-9fef-b9be9740a158-logs\") pod \"nova-metadata-0\" (UID: \"4fb54f17-1620-4d7f-9fef-b9be9740a158\") " pod="openstack/nova-metadata-0" Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.373493 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fb54f17-1620-4d7f-9fef-b9be9740a158-config-data\") pod \"nova-metadata-0\" (UID: \"4fb54f17-1620-4d7f-9fef-b9be9740a158\") " pod="openstack/nova-metadata-0" Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.373694 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qf8f\" (UniqueName: \"kubernetes.io/projected/4fb54f17-1620-4d7f-9fef-b9be9740a158-kube-api-access-9qf8f\") pod \"nova-metadata-0\" (UID: \"4fb54f17-1620-4d7f-9fef-b9be9740a158\") " pod="openstack/nova-metadata-0" Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.373773 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fb54f17-1620-4d7f-9fef-b9be9740a158-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4fb54f17-1620-4d7f-9fef-b9be9740a158\") " pod="openstack/nova-metadata-0" Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.374930 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4fb54f17-1620-4d7f-9fef-b9be9740a158-logs\") pod \"nova-metadata-0\" (UID: \"4fb54f17-1620-4d7f-9fef-b9be9740a158\") " pod="openstack/nova-metadata-0" Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.379635 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fb54f17-1620-4d7f-9fef-b9be9740a158-config-data\") pod \"nova-metadata-0\" (UID: \"4fb54f17-1620-4d7f-9fef-b9be9740a158\") " pod="openstack/nova-metadata-0" Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.379691 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fb54f17-1620-4d7f-9fef-b9be9740a158-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4fb54f17-1620-4d7f-9fef-b9be9740a158\") " pod="openstack/nova-metadata-0" Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.385770 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fb54f17-1620-4d7f-9fef-b9be9740a158-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4fb54f17-1620-4d7f-9fef-b9be9740a158\") " pod="openstack/nova-metadata-0" Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.402202 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qf8f\" (UniqueName: \"kubernetes.io/projected/4fb54f17-1620-4d7f-9fef-b9be9740a158-kube-api-access-9qf8f\") pod \"nova-metadata-0\" (UID: \"4fb54f17-1620-4d7f-9fef-b9be9740a158\") " pod="openstack/nova-metadata-0" Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.581307 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 13:26:46 crc kubenswrapper[5039]: I0130 13:26:46.045646 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 13:26:46 crc kubenswrapper[5039]: W0130 13:26:46.046973 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4fb54f17_1620_4d7f_9fef_b9be9740a158.slice/crio-637458d60e7e582c82e872fa121cd55e98b2aafb1cefa0463afbfd7c95ed7443 WatchSource:0}: Error finding container 637458d60e7e582c82e872fa121cd55e98b2aafb1cefa0463afbfd7c95ed7443: Status 404 returned error can't find the container with id 637458d60e7e582c82e872fa121cd55e98b2aafb1cefa0463afbfd7c95ed7443 Jan 30 13:26:46 crc kubenswrapper[5039]: I0130 13:26:46.122219 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="022559da-3027-4afc-ac6d-545384ef449f" path="/var/lib/kubelet/pods/022559da-3027-4afc-ac6d-545384ef449f/volumes" Jan 30 13:26:46 crc kubenswrapper[5039]: I0130 13:26:46.126647 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be0ead48-6db3-49aa-9748-c6acb8b64848" path="/var/lib/kubelet/pods/be0ead48-6db3-49aa-9748-c6acb8b64848/volumes" Jan 30 13:26:46 crc kubenswrapper[5039]: I0130 13:26:46.153966 5039 generic.go:334] "Generic (PLEG): container finished" podID="09d17bda-c976-4bfb-96cc-24ae462b0e72" containerID="6419ca9dc95faccd4b98980ad75dbe23c4ab71bb6855f5556b00b68413b2b501" exitCode=0 Jan 30 13:26:46 crc kubenswrapper[5039]: I0130 13:26:46.154159 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"09d17bda-c976-4bfb-96cc-24ae462b0e72","Type":"ContainerDied","Data":"6419ca9dc95faccd4b98980ad75dbe23c4ab71bb6855f5556b00b68413b2b501"} Jan 30 13:26:46 crc kubenswrapper[5039]: I0130 13:26:46.173785 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9b2c4ea7-fb7f-401c-84c3-13cb59dec51d","Type":"ContainerStarted","Data":"77b11831c8de94ea4f94e9a391a2324170cf612334c1b369e7d207f0b0088e11"} Jan 30 13:26:46 crc kubenswrapper[5039]: I0130 13:26:46.181637 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4fb54f17-1620-4d7f-9fef-b9be9740a158","Type":"ContainerStarted","Data":"637458d60e7e582c82e872fa121cd55e98b2aafb1cefa0463afbfd7c95ed7443"} Jan 30 13:26:46 crc kubenswrapper[5039]: I0130 13:26:46.191726 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.191708233 podStartE2EDuration="2.191708233s" podCreationTimestamp="2026-01-30 13:26:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:26:46.189295069 +0000 UTC m=+1370.849976296" watchObservedRunningTime="2026-01-30 13:26:46.191708233 +0000 UTC m=+1370.852389470" Jan 30 13:26:46 crc kubenswrapper[5039]: I0130 13:26:46.576748 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 13:26:46 crc kubenswrapper[5039]: I0130 13:26:46.595463 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09d17bda-c976-4bfb-96cc-24ae462b0e72-config-data\") pod \"09d17bda-c976-4bfb-96cc-24ae462b0e72\" (UID: \"09d17bda-c976-4bfb-96cc-24ae462b0e72\") " Jan 30 13:26:46 crc kubenswrapper[5039]: I0130 13:26:46.595514 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zw5f2\" (UniqueName: \"kubernetes.io/projected/09d17bda-c976-4bfb-96cc-24ae462b0e72-kube-api-access-zw5f2\") pod \"09d17bda-c976-4bfb-96cc-24ae462b0e72\" (UID: \"09d17bda-c976-4bfb-96cc-24ae462b0e72\") " Jan 30 13:26:46 crc kubenswrapper[5039]: I0130 13:26:46.595626 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09d17bda-c976-4bfb-96cc-24ae462b0e72-logs\") pod \"09d17bda-c976-4bfb-96cc-24ae462b0e72\" (UID: \"09d17bda-c976-4bfb-96cc-24ae462b0e72\") " Jan 30 13:26:46 crc kubenswrapper[5039]: I0130 13:26:46.595700 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09d17bda-c976-4bfb-96cc-24ae462b0e72-combined-ca-bundle\") pod \"09d17bda-c976-4bfb-96cc-24ae462b0e72\" (UID: \"09d17bda-c976-4bfb-96cc-24ae462b0e72\") " Jan 30 13:26:46 crc kubenswrapper[5039]: I0130 13:26:46.597028 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09d17bda-c976-4bfb-96cc-24ae462b0e72-logs" (OuterVolumeSpecName: "logs") pod "09d17bda-c976-4bfb-96cc-24ae462b0e72" (UID: "09d17bda-c976-4bfb-96cc-24ae462b0e72"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:26:46 crc kubenswrapper[5039]: I0130 13:26:46.600938 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09d17bda-c976-4bfb-96cc-24ae462b0e72-kube-api-access-zw5f2" (OuterVolumeSpecName: "kube-api-access-zw5f2") pod "09d17bda-c976-4bfb-96cc-24ae462b0e72" (UID: "09d17bda-c976-4bfb-96cc-24ae462b0e72"). InnerVolumeSpecName "kube-api-access-zw5f2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:26:46 crc kubenswrapper[5039]: I0130 13:26:46.640461 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09d17bda-c976-4bfb-96cc-24ae462b0e72-config-data" (OuterVolumeSpecName: "config-data") pod "09d17bda-c976-4bfb-96cc-24ae462b0e72" (UID: "09d17bda-c976-4bfb-96cc-24ae462b0e72"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:26:46 crc kubenswrapper[5039]: I0130 13:26:46.640721 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09d17bda-c976-4bfb-96cc-24ae462b0e72-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "09d17bda-c976-4bfb-96cc-24ae462b0e72" (UID: "09d17bda-c976-4bfb-96cc-24ae462b0e72"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:26:46 crc kubenswrapper[5039]: I0130 13:26:46.698163 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09d17bda-c976-4bfb-96cc-24ae462b0e72-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:46 crc kubenswrapper[5039]: I0130 13:26:46.698207 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zw5f2\" (UniqueName: \"kubernetes.io/projected/09d17bda-c976-4bfb-96cc-24ae462b0e72-kube-api-access-zw5f2\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:46 crc kubenswrapper[5039]: I0130 13:26:46.698223 5039 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09d17bda-c976-4bfb-96cc-24ae462b0e72-logs\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:46 crc kubenswrapper[5039]: I0130 13:26:46.698235 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09d17bda-c976-4bfb-96cc-24ae462b0e72-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:47 crc kubenswrapper[5039]: I0130 13:26:47.202475 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"09d17bda-c976-4bfb-96cc-24ae462b0e72","Type":"ContainerDied","Data":"7f560ccfb5a760b5efc927b2cc96714a9642354fca2eb632be3627c3a05002d0"} Jan 30 13:26:47 crc kubenswrapper[5039]: I0130 13:26:47.202529 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 13:26:47 crc kubenswrapper[5039]: I0130 13:26:47.202842 5039 scope.go:117] "RemoveContainer" containerID="6419ca9dc95faccd4b98980ad75dbe23c4ab71bb6855f5556b00b68413b2b501" Jan 30 13:26:47 crc kubenswrapper[5039]: I0130 13:26:47.207526 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4fb54f17-1620-4d7f-9fef-b9be9740a158","Type":"ContainerStarted","Data":"8b1254c7577aed1ac86304b54a6036e54aab0ba4ab37c40460806c6c4cf1fa17"} Jan 30 13:26:47 crc kubenswrapper[5039]: I0130 13:26:47.207567 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4fb54f17-1620-4d7f-9fef-b9be9740a158","Type":"ContainerStarted","Data":"bcf95642277344858a3db7b29769be0e17e002718e1562c6dadf74305f21f638"} Jan 30 13:26:47 crc kubenswrapper[5039]: I0130 13:26:47.234911 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.234894263 podStartE2EDuration="2.234894263s" podCreationTimestamp="2026-01-30 13:26:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:26:47.228683699 +0000 UTC m=+1371.889364926" watchObservedRunningTime="2026-01-30 13:26:47.234894263 +0000 UTC m=+1371.895575490" Jan 30 13:26:47 crc kubenswrapper[5039]: I0130 13:26:47.239679 5039 scope.go:117] "RemoveContainer" containerID="6295f2835a994cd2f686ebf445cd32bca84216419d7f87f3336d60bfc56aba32" Jan 30 13:26:47 crc kubenswrapper[5039]: I0130 13:26:47.309204 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 13:26:47 crc kubenswrapper[5039]: I0130 13:26:47.320047 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 13:26:47 crc kubenswrapper[5039]: I0130 13:26:47.330053 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 13:26:47 crc 
kubenswrapper[5039]: E0130 13:26:47.330451 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09d17bda-c976-4bfb-96cc-24ae462b0e72" containerName="nova-api-api" Jan 30 13:26:47 crc kubenswrapper[5039]: I0130 13:26:47.330471 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="09d17bda-c976-4bfb-96cc-24ae462b0e72" containerName="nova-api-api" Jan 30 13:26:47 crc kubenswrapper[5039]: E0130 13:26:47.330503 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09d17bda-c976-4bfb-96cc-24ae462b0e72" containerName="nova-api-log" Jan 30 13:26:47 crc kubenswrapper[5039]: I0130 13:26:47.330511 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="09d17bda-c976-4bfb-96cc-24ae462b0e72" containerName="nova-api-log" Jan 30 13:26:47 crc kubenswrapper[5039]: I0130 13:26:47.330680 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="09d17bda-c976-4bfb-96cc-24ae462b0e72" containerName="nova-api-api" Jan 30 13:26:47 crc kubenswrapper[5039]: I0130 13:26:47.330702 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="09d17bda-c976-4bfb-96cc-24ae462b0e72" containerName="nova-api-log" Jan 30 13:26:47 crc kubenswrapper[5039]: I0130 13:26:47.331816 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 13:26:47 crc kubenswrapper[5039]: I0130 13:26:47.335846 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 13:26:47 crc kubenswrapper[5039]: I0130 13:26:47.344131 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 13:26:47 crc kubenswrapper[5039]: I0130 13:26:47.419647 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af70fa58-fb1f-48bd-8d6c-87a63f461dae-logs\") pod \"nova-api-0\" (UID: \"af70fa58-fb1f-48bd-8d6c-87a63f461dae\") " pod="openstack/nova-api-0" Jan 30 13:26:47 crc kubenswrapper[5039]: I0130 13:26:47.419771 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbrm4\" (UniqueName: \"kubernetes.io/projected/af70fa58-fb1f-48bd-8d6c-87a63f461dae-kube-api-access-jbrm4\") pod \"nova-api-0\" (UID: \"af70fa58-fb1f-48bd-8d6c-87a63f461dae\") " pod="openstack/nova-api-0" Jan 30 13:26:47 crc kubenswrapper[5039]: I0130 13:26:47.420091 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af70fa58-fb1f-48bd-8d6c-87a63f461dae-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"af70fa58-fb1f-48bd-8d6c-87a63f461dae\") " pod="openstack/nova-api-0" Jan 30 13:26:47 crc kubenswrapper[5039]: I0130 13:26:47.420189 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af70fa58-fb1f-48bd-8d6c-87a63f461dae-config-data\") pod \"nova-api-0\" (UID: \"af70fa58-fb1f-48bd-8d6c-87a63f461dae\") " pod="openstack/nova-api-0" Jan 30 13:26:47 crc kubenswrapper[5039]: I0130 13:26:47.521154 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbrm4\" (UniqueName: \"kubernetes.io/projected/af70fa58-fb1f-48bd-8d6c-87a63f461dae-kube-api-access-jbrm4\") pod \"nova-api-0\" (UID: \"af70fa58-fb1f-48bd-8d6c-87a63f461dae\") " pod="openstack/nova-api-0" Jan 30 13:26:47 crc kubenswrapper[5039]: I0130 13:26:47.521225 5039 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af70fa58-fb1f-48bd-8d6c-87a63f461dae-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"af70fa58-fb1f-48bd-8d6c-87a63f461dae\") " pod="openstack/nova-api-0" Jan 30 13:26:47 crc kubenswrapper[5039]: I0130 13:26:47.521247 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af70fa58-fb1f-48bd-8d6c-87a63f461dae-config-data\") pod \"nova-api-0\" (UID: \"af70fa58-fb1f-48bd-8d6c-87a63f461dae\") " pod="openstack/nova-api-0" Jan 30 13:26:47 crc kubenswrapper[5039]: I0130 13:26:47.521298 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af70fa58-fb1f-48bd-8d6c-87a63f461dae-logs\") pod \"nova-api-0\" (UID: \"af70fa58-fb1f-48bd-8d6c-87a63f461dae\") " pod="openstack/nova-api-0" Jan 30 13:26:47 crc kubenswrapper[5039]: I0130 13:26:47.522137 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af70fa58-fb1f-48bd-8d6c-87a63f461dae-logs\") pod \"nova-api-0\" (UID: \"af70fa58-fb1f-48bd-8d6c-87a63f461dae\") " pod="openstack/nova-api-0" Jan 30 13:26:47 crc kubenswrapper[5039]: I0130 13:26:47.526668 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af70fa58-fb1f-48bd-8d6c-87a63f461dae-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"af70fa58-fb1f-48bd-8d6c-87a63f461dae\") " pod="openstack/nova-api-0" Jan 30 13:26:47 crc kubenswrapper[5039]: I0130 13:26:47.533771 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af70fa58-fb1f-48bd-8d6c-87a63f461dae-config-data\") pod \"nova-api-0\" (UID: \"af70fa58-fb1f-48bd-8d6c-87a63f461dae\") " pod="openstack/nova-api-0" Jan 30 13:26:47 crc kubenswrapper[5039]: I0130 13:26:47.539956 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbrm4\" (UniqueName: \"kubernetes.io/projected/af70fa58-fb1f-48bd-8d6c-87a63f461dae-kube-api-access-jbrm4\") pod \"nova-api-0\" (UID: \"af70fa58-fb1f-48bd-8d6c-87a63f461dae\") " pod="openstack/nova-api-0" Jan 30 13:26:47 crc kubenswrapper[5039]: I0130 13:26:47.667090 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 13:26:48 crc kubenswrapper[5039]: I0130 13:26:48.104668 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09d17bda-c976-4bfb-96cc-24ae462b0e72" path="/var/lib/kubelet/pods/09d17bda-c976-4bfb-96cc-24ae462b0e72/volumes" Jan 30 13:26:48 crc kubenswrapper[5039]: I0130 13:26:48.140343 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 13:26:48 crc kubenswrapper[5039]: I0130 13:26:48.223467 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"af70fa58-fb1f-48bd-8d6c-87a63f461dae","Type":"ContainerStarted","Data":"bf1f32b5656cbd0ec0a02e133a8fd538c702e03de684cfb3027704d645025a94"} Jan 30 13:26:49 crc kubenswrapper[5039]: I0130 13:26:49.239735 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"af70fa58-fb1f-48bd-8d6c-87a63f461dae","Type":"ContainerStarted","Data":"f94b1e2d621ba40071f9fc0e8dd4db8eb119899c5f28e51a3c748ef1f6e37f12"} Jan 30 13:26:49 crc kubenswrapper[5039]: I0130 13:26:49.240620 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"af70fa58-fb1f-48bd-8d6c-87a63f461dae","Type":"ContainerStarted","Data":"cfd03a83c32f96acf99ccdcef85b9eb64c2b11a677b30dc70395c2214b7fb355"} Jan 30 13:26:49 crc kubenswrapper[5039]: I0130 13:26:49.282696 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.282657835 podStartE2EDuration="2.282657835s" podCreationTimestamp="2026-01-30 13:26:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:26:49.26505446 +0000 UTC m=+1373.925735788" watchObservedRunningTime="2026-01-30 13:26:49.282657835 +0000 UTC m=+1373.943339112" Jan 30 13:26:49 crc kubenswrapper[5039]: I0130 13:26:49.507546 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 30 13:26:50 crc kubenswrapper[5039]: I0130 13:26:50.153192 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 30 13:26:50 crc kubenswrapper[5039]: I0130 13:26:50.582299 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 13:26:50 crc kubenswrapper[5039]: I0130 13:26:50.582422 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 13:26:54 crc kubenswrapper[5039]: I0130 13:26:54.507991 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 30 13:26:54 crc kubenswrapper[5039]: I0130 13:26:54.539558 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 30 13:26:54 crc kubenswrapper[5039]: I0130 13:26:54.648083 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 13:26:54 crc kubenswrapper[5039]: I0130 13:26:54.648296 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="644a9c77-bad0-41fe-a6ee-8bb5e6580f87" containerName="kube-state-metrics" containerID="cri-o://4d5c9eabd2a148f8cde28a63e272a15c413b9cfe385803d5c9c8871fe5f41730" gracePeriod=30 Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.145234 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.269132 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpzvc\" (UniqueName: \"kubernetes.io/projected/644a9c77-bad0-41fe-a6ee-8bb5e6580f87-kube-api-access-qpzvc\") pod \"644a9c77-bad0-41fe-a6ee-8bb5e6580f87\" (UID: \"644a9c77-bad0-41fe-a6ee-8bb5e6580f87\") " Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.276215 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/644a9c77-bad0-41fe-a6ee-8bb5e6580f87-kube-api-access-qpzvc" (OuterVolumeSpecName: "kube-api-access-qpzvc") pod "644a9c77-bad0-41fe-a6ee-8bb5e6580f87" (UID: "644a9c77-bad0-41fe-a6ee-8bb5e6580f87"). InnerVolumeSpecName "kube-api-access-qpzvc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.303030 5039 generic.go:334] "Generic (PLEG): container finished" podID="644a9c77-bad0-41fe-a6ee-8bb5e6580f87" containerID="4d5c9eabd2a148f8cde28a63e272a15c413b9cfe385803d5c9c8871fe5f41730" exitCode=2 Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.303092 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"644a9c77-bad0-41fe-a6ee-8bb5e6580f87","Type":"ContainerDied","Data":"4d5c9eabd2a148f8cde28a63e272a15c413b9cfe385803d5c9c8871fe5f41730"} Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.303187 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"644a9c77-bad0-41fe-a6ee-8bb5e6580f87","Type":"ContainerDied","Data":"b53ad32cffda3e64e7114afbc8bd65ade81ee83922eb3d85365175d255be376d"} Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.303216 5039 scope.go:117] "RemoveContainer" containerID="4d5c9eabd2a148f8cde28a63e272a15c413b9cfe385803d5c9c8871fe5f41730" Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.303574 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.334865 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.356741 5039 scope.go:117] "RemoveContainer" containerID="4d5c9eabd2a148f8cde28a63e272a15c413b9cfe385803d5c9c8871fe5f41730" Jan 30 13:26:55 crc kubenswrapper[5039]: E0130 13:26:55.357239 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d5c9eabd2a148f8cde28a63e272a15c413b9cfe385803d5c9c8871fe5f41730\": container with ID starting with 4d5c9eabd2a148f8cde28a63e272a15c413b9cfe385803d5c9c8871fe5f41730 not found: ID does not exist" containerID="4d5c9eabd2a148f8cde28a63e272a15c413b9cfe385803d5c9c8871fe5f41730" Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.357278 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d5c9eabd2a148f8cde28a63e272a15c413b9cfe385803d5c9c8871fe5f41730"} err="failed to get container status \"4d5c9eabd2a148f8cde28a63e272a15c413b9cfe385803d5c9c8871fe5f41730\": rpc error: code = NotFound desc = could not find container \"4d5c9eabd2a148f8cde28a63e272a15c413b9cfe385803d5c9c8871fe5f41730\": container with ID starting with 4d5c9eabd2a148f8cde28a63e272a15c413b9cfe385803d5c9c8871fe5f41730 not found: ID does not exist" Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.357360 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.370490 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.372057 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpzvc\" (UniqueName: \"kubernetes.io/projected/644a9c77-bad0-41fe-a6ee-8bb5e6580f87-kube-api-access-qpzvc\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.382208 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 13:26:55 crc kubenswrapper[5039]: E0130 13:26:55.382738 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="644a9c77-bad0-41fe-a6ee-8bb5e6580f87" containerName="kube-state-metrics" Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.382759 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="644a9c77-bad0-41fe-a6ee-8bb5e6580f87" containerName="kube-state-metrics" Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.382981 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="644a9c77-bad0-41fe-a6ee-8bb5e6580f87" containerName="kube-state-metrics" Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.384342 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.387466 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.391398 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.419679 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.575631 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/f4f0006e-6034-4c12-a12e-f2d7767a77cb-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"f4f0006e-6034-4c12-a12e-f2d7767a77cb\") " pod="openstack/kube-state-metrics-0" Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.575692 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9fhv\" (UniqueName: \"kubernetes.io/projected/f4f0006e-6034-4c12-a12e-f2d7767a77cb-kube-api-access-m9fhv\") pod \"kube-state-metrics-0\" (UID: \"f4f0006e-6034-4c12-a12e-f2d7767a77cb\") " pod="openstack/kube-state-metrics-0" Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.575727 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4f0006e-6034-4c12-a12e-f2d7767a77cb-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"f4f0006e-6034-4c12-a12e-f2d7767a77cb\") " pod="openstack/kube-state-metrics-0" Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.575773 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4f0006e-6034-4c12-a12e-f2d7767a77cb-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"f4f0006e-6034-4c12-a12e-f2d7767a77cb\") " pod="openstack/kube-state-metrics-0" Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.581686 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.581746 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.677634 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/f4f0006e-6034-4c12-a12e-f2d7767a77cb-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"f4f0006e-6034-4c12-a12e-f2d7767a77cb\") " pod="openstack/kube-state-metrics-0" Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.677707 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9fhv\" (UniqueName: \"kubernetes.io/projected/f4f0006e-6034-4c12-a12e-f2d7767a77cb-kube-api-access-m9fhv\") pod \"kube-state-metrics-0\" (UID: \"f4f0006e-6034-4c12-a12e-f2d7767a77cb\") " pod="openstack/kube-state-metrics-0" Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.677744 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f4f0006e-6034-4c12-a12e-f2d7767a77cb-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"f4f0006e-6034-4c12-a12e-f2d7767a77cb\") " pod="openstack/kube-state-metrics-0" Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.677796 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4f0006e-6034-4c12-a12e-f2d7767a77cb-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"f4f0006e-6034-4c12-a12e-f2d7767a77cb\") " pod="openstack/kube-state-metrics-0" Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.682841 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4f0006e-6034-4c12-a12e-f2d7767a77cb-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"f4f0006e-6034-4c12-a12e-f2d7767a77cb\") " pod="openstack/kube-state-metrics-0" Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.684456 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/f4f0006e-6034-4c12-a12e-f2d7767a77cb-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"f4f0006e-6034-4c12-a12e-f2d7767a77cb\") " pod="openstack/kube-state-metrics-0" Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.685485 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4f0006e-6034-4c12-a12e-f2d7767a77cb-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"f4f0006e-6034-4c12-a12e-f2d7767a77cb\") " pod="openstack/kube-state-metrics-0" Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.696659 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9fhv\" (UniqueName: \"kubernetes.io/projected/f4f0006e-6034-4c12-a12e-f2d7767a77cb-kube-api-access-m9fhv\") pod \"kube-state-metrics-0\" (UID: \"f4f0006e-6034-4c12-a12e-f2d7767a77cb\") " pod="openstack/kube-state-metrics-0" Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.710764 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 13:26:56 crc kubenswrapper[5039]: I0130 13:26:56.111727 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="644a9c77-bad0-41fe-a6ee-8bb5e6580f87" path="/var/lib/kubelet/pods/644a9c77-bad0-41fe-a6ee-8bb5e6580f87/volumes" Jan 30 13:26:56 crc kubenswrapper[5039]: I0130 13:26:56.294823 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 13:26:56 crc kubenswrapper[5039]: I0130 13:26:56.328764 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f4f0006e-6034-4c12-a12e-f2d7767a77cb","Type":"ContainerStarted","Data":"e989d2b5a1fe11041f174a1b51fc6d351241adc3941972f823b605ba10c1de32"} Jan 30 13:26:56 crc kubenswrapper[5039]: I0130 13:26:56.487467 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:26:56 crc kubenswrapper[5039]: I0130 13:26:56.487788 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="34fa3bab-3684-4d07-baa6-e0cc08076a98" containerName="ceilometer-central-agent" containerID="cri-o://1e5c732e8d08bbee1ea6327524267bc70c8d674d14515b09f9be2689e10c21a5" gracePeriod=30 Jan 30 13:26:56 crc kubenswrapper[5039]: I0130 13:26:56.487904 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="34fa3bab-3684-4d07-baa6-e0cc08076a98" containerName="proxy-httpd" containerID="cri-o://bf2f431c7988d0741d2048b481c9dc9aaefc4232d146cd624839d1f9d3809026" gracePeriod=30 Jan 30 13:26:56 crc kubenswrapper[5039]: I0130 13:26:56.487937 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="34fa3bab-3684-4d07-baa6-e0cc08076a98" containerName="sg-core" containerID="cri-o://977d2f70bb6f420686fabf5a3459d380488e7d7862629eb7b8e2cf9be5d8fc7a" gracePeriod=30 Jan 30 13:26:56 crc kubenswrapper[5039]: I0130 13:26:56.487963 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="34fa3bab-3684-4d07-baa6-e0cc08076a98" containerName="ceilometer-notification-agent" containerID="cri-o://601632f98430b79c28f3a8f59f87c665536c16e145f5137e701f01c285cfe114" gracePeriod=30 Jan 30 13:26:56 crc kubenswrapper[5039]: I0130 13:26:56.595212 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="4fb54f17-1620-4d7f-9fef-b9be9740a158" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.192:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 13:26:56 crc kubenswrapper[5039]: I0130 13:26:56.595219 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="4fb54f17-1620-4d7f-9fef-b9be9740a158" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.192:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 13:26:57 crc kubenswrapper[5039]: I0130 13:26:57.349300 5039 generic.go:334] "Generic (PLEG): container finished" podID="34fa3bab-3684-4d07-baa6-e0cc08076a98" containerID="bf2f431c7988d0741d2048b481c9dc9aaefc4232d146cd624839d1f9d3809026" exitCode=0 Jan 30 13:26:57 crc kubenswrapper[5039]: I0130 13:26:57.349560 5039 generic.go:334] "Generic (PLEG): container finished" podID="34fa3bab-3684-4d07-baa6-e0cc08076a98" 
containerID="977d2f70bb6f420686fabf5a3459d380488e7d7862629eb7b8e2cf9be5d8fc7a" exitCode=2 Jan 30 13:26:57 crc kubenswrapper[5039]: I0130 13:26:57.349568 5039 generic.go:334] "Generic (PLEG): container finished" podID="34fa3bab-3684-4d07-baa6-e0cc08076a98" containerID="1e5c732e8d08bbee1ea6327524267bc70c8d674d14515b09f9be2689e10c21a5" exitCode=0 Jan 30 13:26:57 crc kubenswrapper[5039]: I0130 13:26:57.349376 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34fa3bab-3684-4d07-baa6-e0cc08076a98","Type":"ContainerDied","Data":"bf2f431c7988d0741d2048b481c9dc9aaefc4232d146cd624839d1f9d3809026"} Jan 30 13:26:57 crc kubenswrapper[5039]: I0130 13:26:57.349628 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34fa3bab-3684-4d07-baa6-e0cc08076a98","Type":"ContainerDied","Data":"977d2f70bb6f420686fabf5a3459d380488e7d7862629eb7b8e2cf9be5d8fc7a"} Jan 30 13:26:57 crc kubenswrapper[5039]: I0130 13:26:57.349642 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34fa3bab-3684-4d07-baa6-e0cc08076a98","Type":"ContainerDied","Data":"1e5c732e8d08bbee1ea6327524267bc70c8d674d14515b09f9be2689e10c21a5"} Jan 30 13:26:57 crc kubenswrapper[5039]: I0130 13:26:57.351669 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f4f0006e-6034-4c12-a12e-f2d7767a77cb","Type":"ContainerStarted","Data":"cb976258e7161169831d5d8b357475bdf359afceac9694de1a48d3c8091e19de"} Jan 30 13:26:57 crc kubenswrapper[5039]: I0130 13:26:57.352656 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 30 13:26:57 crc kubenswrapper[5039]: I0130 13:26:57.353777 5039 generic.go:334] "Generic (PLEG): container finished" podID="b33729af-9ada-4dd3-bc99-4444fbe1b3d8" containerID="f66f7f5299440f08b3d668413b72729d868b25170fd7cb89241fcca36903b724" exitCode=0 Jan 30 13:26:57 crc kubenswrapper[5039]: I0130 13:26:57.353800 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-zctpf" event={"ID":"b33729af-9ada-4dd3-bc99-4444fbe1b3d8","Type":"ContainerDied","Data":"f66f7f5299440f08b3d668413b72729d868b25170fd7cb89241fcca36903b724"} Jan 30 13:26:57 crc kubenswrapper[5039]: I0130 13:26:57.375370 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.018545209 podStartE2EDuration="2.375354549s" podCreationTimestamp="2026-01-30 13:26:55 +0000 UTC" firstStartedPulling="2026-01-30 13:26:56.318517779 +0000 UTC m=+1380.979199006" lastFinishedPulling="2026-01-30 13:26:56.675327119 +0000 UTC m=+1381.336008346" observedRunningTime="2026-01-30 13:26:57.370971584 +0000 UTC m=+1382.031652811" watchObservedRunningTime="2026-01-30 13:26:57.375354549 +0000 UTC m=+1382.036035776" Jan 30 13:26:57 crc kubenswrapper[5039]: I0130 13:26:57.667222 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 13:26:57 crc kubenswrapper[5039]: I0130 13:26:57.667271 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 13:26:58 crc kubenswrapper[5039]: I0130 13:26:58.752228 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="af70fa58-fb1f-48bd-8d6c-87a63f461dae" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.193:8774/\": context deadline exceeded 
(Client.Timeout exceeded while awaiting headers)" Jan 30 13:26:58 crc kubenswrapper[5039]: I0130 13:26:58.752239 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="af70fa58-fb1f-48bd-8d6c-87a63f461dae" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.193:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 13:26:58 crc kubenswrapper[5039]: I0130 13:26:58.764818 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-zctpf" Jan 30 13:26:58 crc kubenswrapper[5039]: I0130 13:26:58.953326 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b33729af-9ada-4dd3-bc99-4444fbe1b3d8-combined-ca-bundle\") pod \"b33729af-9ada-4dd3-bc99-4444fbe1b3d8\" (UID: \"b33729af-9ada-4dd3-bc99-4444fbe1b3d8\") " Jan 30 13:26:58 crc kubenswrapper[5039]: I0130 13:26:58.953463 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b33729af-9ada-4dd3-bc99-4444fbe1b3d8-config-data\") pod \"b33729af-9ada-4dd3-bc99-4444fbe1b3d8\" (UID: \"b33729af-9ada-4dd3-bc99-4444fbe1b3d8\") " Jan 30 13:26:58 crc kubenswrapper[5039]: I0130 13:26:58.953579 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b33729af-9ada-4dd3-bc99-4444fbe1b3d8-scripts\") pod \"b33729af-9ada-4dd3-bc99-4444fbe1b3d8\" (UID: \"b33729af-9ada-4dd3-bc99-4444fbe1b3d8\") " Jan 30 13:26:58 crc kubenswrapper[5039]: I0130 13:26:58.953653 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gp6ml\" (UniqueName: \"kubernetes.io/projected/b33729af-9ada-4dd3-bc99-4444fbe1b3d8-kube-api-access-gp6ml\") pod \"b33729af-9ada-4dd3-bc99-4444fbe1b3d8\" (UID: \"b33729af-9ada-4dd3-bc99-4444fbe1b3d8\") " Jan 30 13:26:58 crc kubenswrapper[5039]: I0130 13:26:58.971182 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b33729af-9ada-4dd3-bc99-4444fbe1b3d8-scripts" (OuterVolumeSpecName: "scripts") pod "b33729af-9ada-4dd3-bc99-4444fbe1b3d8" (UID: "b33729af-9ada-4dd3-bc99-4444fbe1b3d8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:26:58 crc kubenswrapper[5039]: I0130 13:26:58.972227 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b33729af-9ada-4dd3-bc99-4444fbe1b3d8-kube-api-access-gp6ml" (OuterVolumeSpecName: "kube-api-access-gp6ml") pod "b33729af-9ada-4dd3-bc99-4444fbe1b3d8" (UID: "b33729af-9ada-4dd3-bc99-4444fbe1b3d8"). InnerVolumeSpecName "kube-api-access-gp6ml". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:26:58 crc kubenswrapper[5039]: I0130 13:26:58.984788 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b33729af-9ada-4dd3-bc99-4444fbe1b3d8-config-data" (OuterVolumeSpecName: "config-data") pod "b33729af-9ada-4dd3-bc99-4444fbe1b3d8" (UID: "b33729af-9ada-4dd3-bc99-4444fbe1b3d8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:26:58 crc kubenswrapper[5039]: I0130 13:26:58.993963 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b33729af-9ada-4dd3-bc99-4444fbe1b3d8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b33729af-9ada-4dd3-bc99-4444fbe1b3d8" (UID: "b33729af-9ada-4dd3-bc99-4444fbe1b3d8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:26:59 crc kubenswrapper[5039]: I0130 13:26:59.055404 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b33729af-9ada-4dd3-bc99-4444fbe1b3d8-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:59 crc kubenswrapper[5039]: I0130 13:26:59.055442 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gp6ml\" (UniqueName: \"kubernetes.io/projected/b33729af-9ada-4dd3-bc99-4444fbe1b3d8-kube-api-access-gp6ml\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:59 crc kubenswrapper[5039]: I0130 13:26:59.055454 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b33729af-9ada-4dd3-bc99-4444fbe1b3d8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:59 crc kubenswrapper[5039]: I0130 13:26:59.055464 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b33729af-9ada-4dd3-bc99-4444fbe1b3d8-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:59 crc kubenswrapper[5039]: I0130 13:26:59.373519 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-zctpf" Jan 30 13:26:59 crc kubenswrapper[5039]: I0130 13:26:59.373550 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-zctpf" event={"ID":"b33729af-9ada-4dd3-bc99-4444fbe1b3d8","Type":"ContainerDied","Data":"17dde7db2a1360af253727f865958748605ced2871e97eebeb0912f8c0cdd9b2"} Jan 30 13:26:59 crc kubenswrapper[5039]: I0130 13:26:59.374532 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17dde7db2a1360af253727f865958748605ced2871e97eebeb0912f8c0cdd9b2" Jan 30 13:26:59 crc kubenswrapper[5039]: I0130 13:26:59.477445 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 13:26:59 crc kubenswrapper[5039]: E0130 13:26:59.477929 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b33729af-9ada-4dd3-bc99-4444fbe1b3d8" containerName="nova-cell1-conductor-db-sync" Jan 30 13:26:59 crc kubenswrapper[5039]: I0130 13:26:59.477959 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="b33729af-9ada-4dd3-bc99-4444fbe1b3d8" containerName="nova-cell1-conductor-db-sync" Jan 30 13:26:59 crc kubenswrapper[5039]: I0130 13:26:59.478214 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="b33729af-9ada-4dd3-bc99-4444fbe1b3d8" containerName="nova-cell1-conductor-db-sync" Jan 30 13:26:59 crc kubenswrapper[5039]: I0130 13:26:59.478969 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 30 13:26:59 crc kubenswrapper[5039]: I0130 13:26:59.481091 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 30 13:26:59 crc kubenswrapper[5039]: I0130 13:26:59.497574 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 13:26:59 crc kubenswrapper[5039]: I0130 13:26:59.665631 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56kwr\" (UniqueName: \"kubernetes.io/projected/798d080c-2565-4410-9cda-220d1154b8de-kube-api-access-56kwr\") pod \"nova-cell1-conductor-0\" (UID: \"798d080c-2565-4410-9cda-220d1154b8de\") " pod="openstack/nova-cell1-conductor-0" Jan 30 13:26:59 crc kubenswrapper[5039]: I0130 13:26:59.665704 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/798d080c-2565-4410-9cda-220d1154b8de-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"798d080c-2565-4410-9cda-220d1154b8de\") " pod="openstack/nova-cell1-conductor-0" Jan 30 13:26:59 crc kubenswrapper[5039]: I0130 13:26:59.665797 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/798d080c-2565-4410-9cda-220d1154b8de-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"798d080c-2565-4410-9cda-220d1154b8de\") " pod="openstack/nova-cell1-conductor-0" Jan 30 13:26:59 crc kubenswrapper[5039]: I0130 13:26:59.767392 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/798d080c-2565-4410-9cda-220d1154b8de-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"798d080c-2565-4410-9cda-220d1154b8de\") " pod="openstack/nova-cell1-conductor-0" Jan 30 13:26:59 crc kubenswrapper[5039]: I0130 13:26:59.767523 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56kwr\" (UniqueName: \"kubernetes.io/projected/798d080c-2565-4410-9cda-220d1154b8de-kube-api-access-56kwr\") pod \"nova-cell1-conductor-0\" (UID: \"798d080c-2565-4410-9cda-220d1154b8de\") " pod="openstack/nova-cell1-conductor-0" Jan 30 13:26:59 crc kubenswrapper[5039]: I0130 13:26:59.767575 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/798d080c-2565-4410-9cda-220d1154b8de-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"798d080c-2565-4410-9cda-220d1154b8de\") " pod="openstack/nova-cell1-conductor-0" Jan 30 13:26:59 crc kubenswrapper[5039]: I0130 13:26:59.771454 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/798d080c-2565-4410-9cda-220d1154b8de-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"798d080c-2565-4410-9cda-220d1154b8de\") " pod="openstack/nova-cell1-conductor-0" Jan 30 13:26:59 crc kubenswrapper[5039]: I0130 13:26:59.771970 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/798d080c-2565-4410-9cda-220d1154b8de-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"798d080c-2565-4410-9cda-220d1154b8de\") " pod="openstack/nova-cell1-conductor-0" Jan 30 13:26:59 crc kubenswrapper[5039]: I0130 13:26:59.783508 5039 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56kwr\" (UniqueName: \"kubernetes.io/projected/798d080c-2565-4410-9cda-220d1154b8de-kube-api-access-56kwr\") pod \"nova-cell1-conductor-0\" (UID: \"798d080c-2565-4410-9cda-220d1154b8de\") " pod="openstack/nova-cell1-conductor-0" Jan 30 13:26:59 crc kubenswrapper[5039]: I0130 13:26:59.797241 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 30 13:27:00 crc kubenswrapper[5039]: I0130 13:27:00.305445 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 13:27:00 crc kubenswrapper[5039]: W0130 13:27:00.324868 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod798d080c_2565_4410_9cda_220d1154b8de.slice/crio-ac9c3b6b37674fedf8c8b15295048d619c8397558ab99d295146f52f94e72e27 WatchSource:0}: Error finding container ac9c3b6b37674fedf8c8b15295048d619c8397558ab99d295146f52f94e72e27: Status 404 returned error can't find the container with id ac9c3b6b37674fedf8c8b15295048d619c8397558ab99d295146f52f94e72e27 Jan 30 13:27:00 crc kubenswrapper[5039]: I0130 13:27:00.453542 5039 generic.go:334] "Generic (PLEG): container finished" podID="34fa3bab-3684-4d07-baa6-e0cc08076a98" containerID="601632f98430b79c28f3a8f59f87c665536c16e145f5137e701f01c285cfe114" exitCode=0 Jan 30 13:27:00 crc kubenswrapper[5039]: I0130 13:27:00.453670 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34fa3bab-3684-4d07-baa6-e0cc08076a98","Type":"ContainerDied","Data":"601632f98430b79c28f3a8f59f87c665536c16e145f5137e701f01c285cfe114"} Jan 30 13:27:00 crc kubenswrapper[5039]: I0130 13:27:00.456313 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"798d080c-2565-4410-9cda-220d1154b8de","Type":"ContainerStarted","Data":"ac9c3b6b37674fedf8c8b15295048d619c8397558ab99d295146f52f94e72e27"} Jan 30 13:27:00 crc kubenswrapper[5039]: I0130 13:27:00.619343 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 13:27:00 crc kubenswrapper[5039]: I0130 13:27:00.785690 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34fa3bab-3684-4d07-baa6-e0cc08076a98-combined-ca-bundle\") pod \"34fa3bab-3684-4d07-baa6-e0cc08076a98\" (UID: \"34fa3bab-3684-4d07-baa6-e0cc08076a98\") " Jan 30 13:27:00 crc kubenswrapper[5039]: I0130 13:27:00.785779 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34fa3bab-3684-4d07-baa6-e0cc08076a98-scripts\") pod \"34fa3bab-3684-4d07-baa6-e0cc08076a98\" (UID: \"34fa3bab-3684-4d07-baa6-e0cc08076a98\") " Jan 30 13:27:00 crc kubenswrapper[5039]: I0130 13:27:00.785879 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34fa3bab-3684-4d07-baa6-e0cc08076a98-run-httpd\") pod \"34fa3bab-3684-4d07-baa6-e0cc08076a98\" (UID: \"34fa3bab-3684-4d07-baa6-e0cc08076a98\") " Jan 30 13:27:00 crc kubenswrapper[5039]: I0130 13:27:00.785963 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34fa3bab-3684-4d07-baa6-e0cc08076a98-config-data\") pod \"34fa3bab-3684-4d07-baa6-e0cc08076a98\" (UID: \"34fa3bab-3684-4d07-baa6-e0cc08076a98\") " Jan 30 13:27:00 crc kubenswrapper[5039]: I0130 13:27:00.786037 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mv5dl\" (UniqueName: \"kubernetes.io/projected/34fa3bab-3684-4d07-baa6-e0cc08076a98-kube-api-access-mv5dl\") pod \"34fa3bab-3684-4d07-baa6-e0cc08076a98\" (UID: \"34fa3bab-3684-4d07-baa6-e0cc08076a98\") " Jan 30 13:27:00 crc kubenswrapper[5039]: I0130 13:27:00.786107 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/34fa3bab-3684-4d07-baa6-e0cc08076a98-sg-core-conf-yaml\") pod \"34fa3bab-3684-4d07-baa6-e0cc08076a98\" (UID: \"34fa3bab-3684-4d07-baa6-e0cc08076a98\") " Jan 30 13:27:00 crc kubenswrapper[5039]: I0130 13:27:00.786122 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34fa3bab-3684-4d07-baa6-e0cc08076a98-log-httpd\") pod \"34fa3bab-3684-4d07-baa6-e0cc08076a98\" (UID: \"34fa3bab-3684-4d07-baa6-e0cc08076a98\") " Jan 30 13:27:00 crc kubenswrapper[5039]: I0130 13:27:00.786556 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34fa3bab-3684-4d07-baa6-e0cc08076a98-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "34fa3bab-3684-4d07-baa6-e0cc08076a98" (UID: "34fa3bab-3684-4d07-baa6-e0cc08076a98"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:27:00 crc kubenswrapper[5039]: I0130 13:27:00.786796 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34fa3bab-3684-4d07-baa6-e0cc08076a98-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "34fa3bab-3684-4d07-baa6-e0cc08076a98" (UID: "34fa3bab-3684-4d07-baa6-e0cc08076a98"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:27:00 crc kubenswrapper[5039]: I0130 13:27:00.790265 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34fa3bab-3684-4d07-baa6-e0cc08076a98-scripts" (OuterVolumeSpecName: "scripts") pod "34fa3bab-3684-4d07-baa6-e0cc08076a98" (UID: "34fa3bab-3684-4d07-baa6-e0cc08076a98"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:27:00 crc kubenswrapper[5039]: I0130 13:27:00.791223 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34fa3bab-3684-4d07-baa6-e0cc08076a98-kube-api-access-mv5dl" (OuterVolumeSpecName: "kube-api-access-mv5dl") pod "34fa3bab-3684-4d07-baa6-e0cc08076a98" (UID: "34fa3bab-3684-4d07-baa6-e0cc08076a98"). InnerVolumeSpecName "kube-api-access-mv5dl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:27:00 crc kubenswrapper[5039]: I0130 13:27:00.825128 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34fa3bab-3684-4d07-baa6-e0cc08076a98-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "34fa3bab-3684-4d07-baa6-e0cc08076a98" (UID: "34fa3bab-3684-4d07-baa6-e0cc08076a98"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:27:00 crc kubenswrapper[5039]: I0130 13:27:00.882301 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34fa3bab-3684-4d07-baa6-e0cc08076a98-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "34fa3bab-3684-4d07-baa6-e0cc08076a98" (UID: "34fa3bab-3684-4d07-baa6-e0cc08076a98"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:27:00 crc kubenswrapper[5039]: I0130 13:27:00.888606 5039 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34fa3bab-3684-4d07-baa6-e0cc08076a98-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:00 crc kubenswrapper[5039]: I0130 13:27:00.888640 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mv5dl\" (UniqueName: \"kubernetes.io/projected/34fa3bab-3684-4d07-baa6-e0cc08076a98-kube-api-access-mv5dl\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:00 crc kubenswrapper[5039]: I0130 13:27:00.888651 5039 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/34fa3bab-3684-4d07-baa6-e0cc08076a98-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:00 crc kubenswrapper[5039]: I0130 13:27:00.888662 5039 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34fa3bab-3684-4d07-baa6-e0cc08076a98-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:00 crc kubenswrapper[5039]: I0130 13:27:00.888670 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34fa3bab-3684-4d07-baa6-e0cc08076a98-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:00 crc kubenswrapper[5039]: I0130 13:27:00.888678 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34fa3bab-3684-4d07-baa6-e0cc08076a98-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:00 crc kubenswrapper[5039]: I0130 13:27:00.904203 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/34fa3bab-3684-4d07-baa6-e0cc08076a98-config-data" (OuterVolumeSpecName: "config-data") pod "34fa3bab-3684-4d07-baa6-e0cc08076a98" (UID: "34fa3bab-3684-4d07-baa6-e0cc08076a98"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:27:00 crc kubenswrapper[5039]: I0130 13:27:00.990843 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34fa3bab-3684-4d07-baa6-e0cc08076a98-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.469761 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"798d080c-2565-4410-9cda-220d1154b8de","Type":"ContainerStarted","Data":"c83d874abcdd3095947980187589ffbe8240a795dbfa1c7950d492e49c52b14e"} Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.471295 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.476920 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34fa3bab-3684-4d07-baa6-e0cc08076a98","Type":"ContainerDied","Data":"c5608a175f505815a2ab340eadd3197344e75db3f167422c35ca45199aec6ff9"} Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.477045 5039 scope.go:117] "RemoveContainer" containerID="bf2f431c7988d0741d2048b481c9dc9aaefc4232d146cd624839d1f9d3809026" Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.477263 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.514095 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.514067972 podStartE2EDuration="2.514067972s" podCreationTimestamp="2026-01-30 13:26:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:27:01.497231958 +0000 UTC m=+1386.157913215" watchObservedRunningTime="2026-01-30 13:27:01.514067972 +0000 UTC m=+1386.174749229" Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.536313 5039 scope.go:117] "RemoveContainer" containerID="977d2f70bb6f420686fabf5a3459d380488e7d7862629eb7b8e2cf9be5d8fc7a" Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.560605 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.563405 5039 scope.go:117] "RemoveContainer" containerID="601632f98430b79c28f3a8f59f87c665536c16e145f5137e701f01c285cfe114" Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.588051 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.595429 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:27:01 crc kubenswrapper[5039]: E0130 13:27:01.595920 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34fa3bab-3684-4d07-baa6-e0cc08076a98" containerName="proxy-httpd" Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.595936 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="34fa3bab-3684-4d07-baa6-e0cc08076a98" containerName="proxy-httpd" Jan 30 13:27:01 crc kubenswrapper[5039]: E0130 13:27:01.595956 5039 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="34fa3bab-3684-4d07-baa6-e0cc08076a98" containerName="sg-core" Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.595965 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="34fa3bab-3684-4d07-baa6-e0cc08076a98" containerName="sg-core" Jan 30 13:27:01 crc kubenswrapper[5039]: E0130 13:27:01.596031 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34fa3bab-3684-4d07-baa6-e0cc08076a98" containerName="ceilometer-central-agent" Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.596043 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="34fa3bab-3684-4d07-baa6-e0cc08076a98" containerName="ceilometer-central-agent" Jan 30 13:27:01 crc kubenswrapper[5039]: E0130 13:27:01.596054 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34fa3bab-3684-4d07-baa6-e0cc08076a98" containerName="ceilometer-notification-agent" Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.596064 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="34fa3bab-3684-4d07-baa6-e0cc08076a98" containerName="ceilometer-notification-agent" Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.596303 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="34fa3bab-3684-4d07-baa6-e0cc08076a98" containerName="ceilometer-central-agent" Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.596320 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="34fa3bab-3684-4d07-baa6-e0cc08076a98" containerName="ceilometer-notification-agent" Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.596332 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="34fa3bab-3684-4d07-baa6-e0cc08076a98" containerName="sg-core" Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.596364 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="34fa3bab-3684-4d07-baa6-e0cc08076a98" containerName="proxy-httpd" Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.598713 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.607833 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.619560 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.620107 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.620335 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.640620 5039 scope.go:117] "RemoveContainer" containerID="1e5c732e8d08bbee1ea6327524267bc70c8d674d14515b09f9be2689e10c21a5"
Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.719876 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/778f1624-3c0b-49a5-b123-c7c38af92ba8-run-httpd\") pod \"ceilometer-0\" (UID: \"778f1624-3c0b-49a5-b123-c7c38af92ba8\") " pod="openstack/ceilometer-0"
Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.719948 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/778f1624-3c0b-49a5-b123-c7c38af92ba8-config-data\") pod \"ceilometer-0\" (UID: \"778f1624-3c0b-49a5-b123-c7c38af92ba8\") " pod="openstack/ceilometer-0"
Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.720068 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/778f1624-3c0b-49a5-b123-c7c38af92ba8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"778f1624-3c0b-49a5-b123-c7c38af92ba8\") " pod="openstack/ceilometer-0"
Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.720141 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/778f1624-3c0b-49a5-b123-c7c38af92ba8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"778f1624-3c0b-49a5-b123-c7c38af92ba8\") " pod="openstack/ceilometer-0"
Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.720169 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/778f1624-3c0b-49a5-b123-c7c38af92ba8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"778f1624-3c0b-49a5-b123-c7c38af92ba8\") " pod="openstack/ceilometer-0"
Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.720211 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/778f1624-3c0b-49a5-b123-c7c38af92ba8-log-httpd\") pod \"ceilometer-0\" (UID: \"778f1624-3c0b-49a5-b123-c7c38af92ba8\") " pod="openstack/ceilometer-0"
Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.720285 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7jg4\" (UniqueName: \"kubernetes.io/projected/778f1624-3c0b-49a5-b123-c7c38af92ba8-kube-api-access-v7jg4\") pod \"ceilometer-0\" (UID: \"778f1624-3c0b-49a5-b123-c7c38af92ba8\") " pod="openstack/ceilometer-0"
Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.720363 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/778f1624-3c0b-49a5-b123-c7c38af92ba8-scripts\") pod \"ceilometer-0\" (UID: \"778f1624-3c0b-49a5-b123-c7c38af92ba8\") " pod="openstack/ceilometer-0"
Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.822509 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/778f1624-3c0b-49a5-b123-c7c38af92ba8-log-httpd\") pod \"ceilometer-0\" (UID: \"778f1624-3c0b-49a5-b123-c7c38af92ba8\") " pod="openstack/ceilometer-0"
Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.822595 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7jg4\" (UniqueName: \"kubernetes.io/projected/778f1624-3c0b-49a5-b123-c7c38af92ba8-kube-api-access-v7jg4\") pod \"ceilometer-0\" (UID: \"778f1624-3c0b-49a5-b123-c7c38af92ba8\") " pod="openstack/ceilometer-0"
Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.822678 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/778f1624-3c0b-49a5-b123-c7c38af92ba8-scripts\") pod \"ceilometer-0\" (UID: \"778f1624-3c0b-49a5-b123-c7c38af92ba8\") " pod="openstack/ceilometer-0"
Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.822724 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/778f1624-3c0b-49a5-b123-c7c38af92ba8-run-httpd\") pod \"ceilometer-0\" (UID: \"778f1624-3c0b-49a5-b123-c7c38af92ba8\") " pod="openstack/ceilometer-0"
Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.822779 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/778f1624-3c0b-49a5-b123-c7c38af92ba8-config-data\") pod \"ceilometer-0\" (UID: \"778f1624-3c0b-49a5-b123-c7c38af92ba8\") " pod="openstack/ceilometer-0"
Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.822810 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/778f1624-3c0b-49a5-b123-c7c38af92ba8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"778f1624-3c0b-49a5-b123-c7c38af92ba8\") " pod="openstack/ceilometer-0"
Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.822838 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/778f1624-3c0b-49a5-b123-c7c38af92ba8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"778f1624-3c0b-49a5-b123-c7c38af92ba8\") " pod="openstack/ceilometer-0"
Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.822860 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/778f1624-3c0b-49a5-b123-c7c38af92ba8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"778f1624-3c0b-49a5-b123-c7c38af92ba8\") " pod="openstack/ceilometer-0"
Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.822970 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/778f1624-3c0b-49a5-b123-c7c38af92ba8-log-httpd\") pod \"ceilometer-0\" (UID: \"778f1624-3c0b-49a5-b123-c7c38af92ba8\") " pod="openstack/ceilometer-0"
Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.823221 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/778f1624-3c0b-49a5-b123-c7c38af92ba8-run-httpd\") pod \"ceilometer-0\" (UID: \"778f1624-3c0b-49a5-b123-c7c38af92ba8\") " pod="openstack/ceilometer-0"
Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.829204 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/778f1624-3c0b-49a5-b123-c7c38af92ba8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"778f1624-3c0b-49a5-b123-c7c38af92ba8\") " pod="openstack/ceilometer-0"
Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.830210 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/778f1624-3c0b-49a5-b123-c7c38af92ba8-config-data\") pod \"ceilometer-0\" (UID: \"778f1624-3c0b-49a5-b123-c7c38af92ba8\") " pod="openstack/ceilometer-0"
Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.831087 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/778f1624-3c0b-49a5-b123-c7c38af92ba8-scripts\") pod \"ceilometer-0\" (UID: \"778f1624-3c0b-49a5-b123-c7c38af92ba8\") " pod="openstack/ceilometer-0"
Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.837414 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/778f1624-3c0b-49a5-b123-c7c38af92ba8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"778f1624-3c0b-49a5-b123-c7c38af92ba8\") " pod="openstack/ceilometer-0"
Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.844682 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7jg4\" (UniqueName: \"kubernetes.io/projected/778f1624-3c0b-49a5-b123-c7c38af92ba8-kube-api-access-v7jg4\") pod \"ceilometer-0\" (UID: \"778f1624-3c0b-49a5-b123-c7c38af92ba8\") " pod="openstack/ceilometer-0"
Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.845604 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/778f1624-3c0b-49a5-b123-c7c38af92ba8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"778f1624-3c0b-49a5-b123-c7c38af92ba8\") " pod="openstack/ceilometer-0"
Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.945944 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 13:27:02 crc kubenswrapper[5039]: I0130 13:27:02.105789 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34fa3bab-3684-4d07-baa6-e0cc08076a98" path="/var/lib/kubelet/pods/34fa3bab-3684-4d07-baa6-e0cc08076a98/volumes"
Jan 30 13:27:02 crc kubenswrapper[5039]: I0130 13:27:02.425695 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 13:27:02 crc kubenswrapper[5039]: I0130 13:27:02.487876 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"778f1624-3c0b-49a5-b123-c7c38af92ba8","Type":"ContainerStarted","Data":"6614bbaf0c08cdbd12c87d26109fdd7fc2758ee316f7840dad0ab9d434c19a76"}
Jan 30 13:27:03 crc kubenswrapper[5039]: I0130 13:27:03.518307 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"778f1624-3c0b-49a5-b123-c7c38af92ba8","Type":"ContainerStarted","Data":"30992ee8ba0529a37ed76d95d573663c278c354cb818f9ac7a9d652429d2c938"}
Jan 30 13:27:04 crc kubenswrapper[5039]: I0130 13:27:04.538273 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"778f1624-3c0b-49a5-b123-c7c38af92ba8","Type":"ContainerStarted","Data":"3bbe64e17c9ac733bfbb5e5ec4750c767996c9856177f2e32c767cdc7ae21303"}
Jan 30 13:27:05 crc kubenswrapper[5039]: I0130 13:27:05.549799 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"778f1624-3c0b-49a5-b123-c7c38af92ba8","Type":"ContainerStarted","Data":"7bad623e26a4065c64959b964b234add54b70f92bc310616e472e12129636c83"}
Jan 30 13:27:05 crc kubenswrapper[5039]: I0130 13:27:05.589098 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Jan 30 13:27:05 crc kubenswrapper[5039]: I0130 13:27:05.589494 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Jan 30 13:27:05 crc kubenswrapper[5039]: I0130 13:27:05.597094 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Jan 30 13:27:05 crc kubenswrapper[5039]: I0130 13:27:05.727179 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Jan 30 13:27:06 crc kubenswrapper[5039]: I0130 13:27:06.567340 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.358083 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.433383 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/646b9fca-b2a5-414b-9b06-3a78ad1df6b0-combined-ca-bundle\") pod \"646b9fca-b2a5-414b-9b06-3a78ad1df6b0\" (UID: \"646b9fca-b2a5-414b-9b06-3a78ad1df6b0\") "
Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.433751 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dlr6\" (UniqueName: \"kubernetes.io/projected/646b9fca-b2a5-414b-9b06-3a78ad1df6b0-kube-api-access-8dlr6\") pod \"646b9fca-b2a5-414b-9b06-3a78ad1df6b0\" (UID: \"646b9fca-b2a5-414b-9b06-3a78ad1df6b0\") "
Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.433795 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/646b9fca-b2a5-414b-9b06-3a78ad1df6b0-config-data\") pod \"646b9fca-b2a5-414b-9b06-3a78ad1df6b0\" (UID: \"646b9fca-b2a5-414b-9b06-3a78ad1df6b0\") "
Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.443091 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/646b9fca-b2a5-414b-9b06-3a78ad1df6b0-kube-api-access-8dlr6" (OuterVolumeSpecName: "kube-api-access-8dlr6") pod "646b9fca-b2a5-414b-9b06-3a78ad1df6b0" (UID: "646b9fca-b2a5-414b-9b06-3a78ad1df6b0"). InnerVolumeSpecName "kube-api-access-8dlr6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:27:07 crc kubenswrapper[5039]: E0130 13:27:07.466157 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/646b9fca-b2a5-414b-9b06-3a78ad1df6b0-config-data podName:646b9fca-b2a5-414b-9b06-3a78ad1df6b0 nodeName:}" failed. No retries permitted until 2026-01-30 13:27:07.966131715 +0000 UTC m=+1392.626812942 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/646b9fca-b2a5-414b-9b06-3a78ad1df6b0-config-data") pod "646b9fca-b2a5-414b-9b06-3a78ad1df6b0" (UID: "646b9fca-b2a5-414b-9b06-3a78ad1df6b0") : error deleting /var/lib/kubelet/pods/646b9fca-b2a5-414b-9b06-3a78ad1df6b0/volume-subpaths: remove /var/lib/kubelet/pods/646b9fca-b2a5-414b-9b06-3a78ad1df6b0/volume-subpaths: no such file or directory
Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.468353 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/646b9fca-b2a5-414b-9b06-3a78ad1df6b0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "646b9fca-b2a5-414b-9b06-3a78ad1df6b0" (UID: "646b9fca-b2a5-414b-9b06-3a78ad1df6b0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.535511 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/646b9fca-b2a5-414b-9b06-3a78ad1df6b0-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.535553 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dlr6\" (UniqueName: \"kubernetes.io/projected/646b9fca-b2a5-414b-9b06-3a78ad1df6b0-kube-api-access-8dlr6\") on node \"crc\" DevicePath \"\""
Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.571502 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"778f1624-3c0b-49a5-b123-c7c38af92ba8","Type":"ContainerStarted","Data":"c8a11dd73ab9b04f3ed5e0cf28b6f5d0484388875347b67c833d175590fed0fb"}
Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.572185 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.575266 5039 generic.go:334] "Generic (PLEG): container finished" podID="646b9fca-b2a5-414b-9b06-3a78ad1df6b0" containerID="0e6873ad1a8c11e049ffc8b580686975b0e1e02080e928419e954197d1ca170b" exitCode=137
Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.575302 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.575335 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"646b9fca-b2a5-414b-9b06-3a78ad1df6b0","Type":"ContainerDied","Data":"0e6873ad1a8c11e049ffc8b580686975b0e1e02080e928419e954197d1ca170b"}
Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.575362 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"646b9fca-b2a5-414b-9b06-3a78ad1df6b0","Type":"ContainerDied","Data":"b436fdfc1099bd27ec4332adf57351d857bb70111f10d9522a0889ec544a5587"}
Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.575381 5039 scope.go:117] "RemoveContainer" containerID="0e6873ad1a8c11e049ffc8b580686975b0e1e02080e928419e954197d1ca170b"
Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.599132 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.908922097 podStartE2EDuration="6.599114542s" podCreationTimestamp="2026-01-30 13:27:01 +0000 UTC" firstStartedPulling="2026-01-30 13:27:02.443312538 +0000 UTC m=+1387.103993775" lastFinishedPulling="2026-01-30 13:27:07.133504993 +0000 UTC m=+1391.794186220" observedRunningTime="2026-01-30 13:27:07.593216677 +0000 UTC m=+1392.253897924" watchObservedRunningTime="2026-01-30 13:27:07.599114542 +0000 UTC m=+1392.259795779"
Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.617309 5039 scope.go:117] "RemoveContainer" containerID="0e6873ad1a8c11e049ffc8b580686975b0e1e02080e928419e954197d1ca170b"
Jan 30 13:27:07 crc kubenswrapper[5039]: E0130 13:27:07.618193 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e6873ad1a8c11e049ffc8b580686975b0e1e02080e928419e954197d1ca170b\": container with ID starting with 0e6873ad1a8c11e049ffc8b580686975b0e1e02080e928419e954197d1ca170b not found: ID does not exist" containerID="0e6873ad1a8c11e049ffc8b580686975b0e1e02080e928419e954197d1ca170b"
Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.618250 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e6873ad1a8c11e049ffc8b580686975b0e1e02080e928419e954197d1ca170b"} err="failed to get container status \"0e6873ad1a8c11e049ffc8b580686975b0e1e02080e928419e954197d1ca170b\": rpc error: code = NotFound desc = could not find container \"0e6873ad1a8c11e049ffc8b580686975b0e1e02080e928419e954197d1ca170b\": container with ID starting with 0e6873ad1a8c11e049ffc8b580686975b0e1e02080e928419e954197d1ca170b not found: ID does not exist"
Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.674104 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.674308 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.674632 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.674659 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.679948 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.683316 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.868828 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-t2n6t"]
Jan 30 13:27:07 crc kubenswrapper[5039]: E0130 13:27:07.869455 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="646b9fca-b2a5-414b-9b06-3a78ad1df6b0" containerName="nova-cell1-novncproxy-novncproxy"
Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.869472 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="646b9fca-b2a5-414b-9b06-3a78ad1df6b0" containerName="nova-cell1-novncproxy-novncproxy"
Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.869668 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="646b9fca-b2a5-414b-9b06-3a78ad1df6b0" containerName="nova-cell1-novncproxy-novncproxy"
Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.870530 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-t2n6t"
Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.884997 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-t2n6t"]
Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.955401 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f702130-7802-4f11-96ff-b51a7edf7740-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-t2n6t\" (UID: \"3f702130-7802-4f11-96ff-b51a7edf7740\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-t2n6t"
Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.955461 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3f702130-7802-4f11-96ff-b51a7edf7740-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-t2n6t\" (UID: \"3f702130-7802-4f11-96ff-b51a7edf7740\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-t2n6t"
Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.955515 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjxv7\" (UniqueName: \"kubernetes.io/projected/3f702130-7802-4f11-96ff-b51a7edf7740-kube-api-access-cjxv7\") pod \"dnsmasq-dns-cd5cbd7b9-t2n6t\" (UID: \"3f702130-7802-4f11-96ff-b51a7edf7740\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-t2n6t"
Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.955569 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3f702130-7802-4f11-96ff-b51a7edf7740-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-t2n6t\" (UID: \"3f702130-7802-4f11-96ff-b51a7edf7740\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-t2n6t"
Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.955598 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3f702130-7802-4f11-96ff-b51a7edf7740-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-t2n6t\" (UID: \"3f702130-7802-4f11-96ff-b51a7edf7740\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-t2n6t"
Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.955669 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f702130-7802-4f11-96ff-b51a7edf7740-config\") pod \"dnsmasq-dns-cd5cbd7b9-t2n6t\" (UID: \"3f702130-7802-4f11-96ff-b51a7edf7740\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-t2n6t"
Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.056867 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/646b9fca-b2a5-414b-9b06-3a78ad1df6b0-config-data\") pod \"646b9fca-b2a5-414b-9b06-3a78ad1df6b0\" (UID: \"646b9fca-b2a5-414b-9b06-3a78ad1df6b0\") "
Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.057279 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3f702130-7802-4f11-96ff-b51a7edf7740-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-t2n6t\" (UID: \"3f702130-7802-4f11-96ff-b51a7edf7740\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-t2n6t"
Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.057392 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f702130-7802-4f11-96ff-b51a7edf7740-config\") pod \"dnsmasq-dns-cd5cbd7b9-t2n6t\" (UID: \"3f702130-7802-4f11-96ff-b51a7edf7740\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-t2n6t"
Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.057448 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f702130-7802-4f11-96ff-b51a7edf7740-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-t2n6t\" (UID: \"3f702130-7802-4f11-96ff-b51a7edf7740\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-t2n6t"
Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.057480 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3f702130-7802-4f11-96ff-b51a7edf7740-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-t2n6t\" (UID: \"3f702130-7802-4f11-96ff-b51a7edf7740\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-t2n6t"
Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.057523 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjxv7\" (UniqueName: \"kubernetes.io/projected/3f702130-7802-4f11-96ff-b51a7edf7740-kube-api-access-cjxv7\") pod \"dnsmasq-dns-cd5cbd7b9-t2n6t\" (UID: \"3f702130-7802-4f11-96ff-b51a7edf7740\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-t2n6t"
Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.057570 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3f702130-7802-4f11-96ff-b51a7edf7740-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-t2n6t\" (UID: \"3f702130-7802-4f11-96ff-b51a7edf7740\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-t2n6t"
Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.058769 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3f702130-7802-4f11-96ff-b51a7edf7740-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-t2n6t\" (UID: \"3f702130-7802-4f11-96ff-b51a7edf7740\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-t2n6t"
Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.058887 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3f702130-7802-4f11-96ff-b51a7edf7740-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-t2n6t\" (UID: \"3f702130-7802-4f11-96ff-b51a7edf7740\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-t2n6t"
Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.059232 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3f702130-7802-4f11-96ff-b51a7edf7740-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-t2n6t\" (UID: \"3f702130-7802-4f11-96ff-b51a7edf7740\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-t2n6t"
Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.059640 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f702130-7802-4f11-96ff-b51a7edf7740-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-t2n6t\" (UID: \"3f702130-7802-4f11-96ff-b51a7edf7740\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-t2n6t"
Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.060299 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f702130-7802-4f11-96ff-b51a7edf7740-config\") pod \"dnsmasq-dns-cd5cbd7b9-t2n6t\" (UID: \"3f702130-7802-4f11-96ff-b51a7edf7740\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-t2n6t"
Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.062557 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/646b9fca-b2a5-414b-9b06-3a78ad1df6b0-config-data" (OuterVolumeSpecName: "config-data") pod "646b9fca-b2a5-414b-9b06-3a78ad1df6b0" (UID: "646b9fca-b2a5-414b-9b06-3a78ad1df6b0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.076254 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjxv7\" (UniqueName: \"kubernetes.io/projected/3f702130-7802-4f11-96ff-b51a7edf7740-kube-api-access-cjxv7\") pod \"dnsmasq-dns-cd5cbd7b9-t2n6t\" (UID: \"3f702130-7802-4f11-96ff-b51a7edf7740\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-t2n6t"
Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.159353 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/646b9fca-b2a5-414b-9b06-3a78ad1df6b0-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.207884 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-t2n6t"
Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.214093 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.227805 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.258081 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.259261 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.266796 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc"
Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.266830 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt"
Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.267463 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.272520 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.364998 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8glz\" (UniqueName: \"kubernetes.io/projected/a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22-kube-api-access-x8glz\") pod \"nova-cell1-novncproxy-0\" (UID: \"a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.365098 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.365140 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.365188 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.365211 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.467312 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.467615 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.467744 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8glz\" (UniqueName: \"kubernetes.io/projected/a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22-kube-api-access-x8glz\") pod \"nova-cell1-novncproxy-0\" (UID: \"a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.467817 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.467852 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.473487 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.474766 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.477078 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.489453 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8glz\" (UniqueName: \"kubernetes.io/projected/a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22-kube-api-access-x8glz\") pod \"nova-cell1-novncproxy-0\" (UID: \"a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.493476 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.658478 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 30 13:27:08 crc kubenswrapper[5039]: W0130 13:27:08.812084 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f702130_7802_4f11_96ff_b51a7edf7740.slice/crio-ca9fcabf42f85a05549ab5541a00c51961935735c743bfeed166670f01017028 WatchSource:0}: Error finding container ca9fcabf42f85a05549ab5541a00c51961935735c743bfeed166670f01017028: Status 404 returned error can't find the container with id ca9fcabf42f85a05549ab5541a00c51961935735c743bfeed166670f01017028
Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.815505 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-t2n6t"]
Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.961450 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 30 13:27:09 crc kubenswrapper[5039]: I0130 13:27:09.603271 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22","Type":"ContainerStarted","Data":"e70715356317daab9e16b76bf1e62776721c504096ef71db981c1eb98acb8ef8"}
Jan 30 13:27:09 crc kubenswrapper[5039]: I0130 13:27:09.603511 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22","Type":"ContainerStarted","Data":"c8546343d44020f12aa855ac05ab8a9543bb3d9f88991b1f497d0bbf8b9309dc"}
Jan 30 13:27:09 crc kubenswrapper[5039]: I0130 13:27:09.608140 5039 generic.go:334] "Generic (PLEG): container finished" podID="3f702130-7802-4f11-96ff-b51a7edf7740" containerID="5ff92e6092248fd570ac7f11757434ceaf09f5d1da5a640571b0aff347c54242" exitCode=0
Jan 30 13:27:09 crc kubenswrapper[5039]: I0130 13:27:09.609301 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-t2n6t" event={"ID":"3f702130-7802-4f11-96ff-b51a7edf7740","Type":"ContainerDied","Data":"5ff92e6092248fd570ac7f11757434ceaf09f5d1da5a640571b0aff347c54242"}
Jan 30 13:27:09 crc kubenswrapper[5039]: I0130 13:27:09.609397 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-t2n6t" event={"ID":"3f702130-7802-4f11-96ff-b51a7edf7740","Type":"ContainerStarted","Data":"ca9fcabf42f85a05549ab5541a00c51961935735c743bfeed166670f01017028"}
Jan 30 13:27:09 crc kubenswrapper[5039]: I0130 13:27:09.666185 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=1.666168383 podStartE2EDuration="1.666168383s" podCreationTimestamp="2026-01-30 13:27:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:27:09.631186671 +0000 UTC m=+1394.291867908" watchObservedRunningTime="2026-01-30 13:27:09.666168383 +0000 UTC m=+1394.326849610"
Jan 30 13:27:09 crc kubenswrapper[5039]: I0130 13:27:09.831542 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Jan 30 13:27:10 crc kubenswrapper[5039]: I0130 13:27:10.107614 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="646b9fca-b2a5-414b-9b06-3a78ad1df6b0" path="/var/lib/kubelet/pods/646b9fca-b2a5-414b-9b06-3a78ad1df6b0/volumes"
Jan 30 13:27:10 crc kubenswrapper[5039]: I0130 13:27:10.220643 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 13:27:10 crc kubenswrapper[5039]: I0130 13:27:10.220902 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="778f1624-3c0b-49a5-b123-c7c38af92ba8" containerName="sg-core" containerID="cri-o://7bad623e26a4065c64959b964b234add54b70f92bc310616e472e12129636c83" gracePeriod=30
Jan 30 13:27:10 crc kubenswrapper[5039]: I0130 13:27:10.220902 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="778f1624-3c0b-49a5-b123-c7c38af92ba8" containerName="proxy-httpd" containerID="cri-o://c8a11dd73ab9b04f3ed5e0cf28b6f5d0484388875347b67c833d175590fed0fb" gracePeriod=30
Jan 30 13:27:10 crc kubenswrapper[5039]: I0130 13:27:10.220948 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="778f1624-3c0b-49a5-b123-c7c38af92ba8" containerName="ceilometer-notification-agent" containerID="cri-o://3bbe64e17c9ac733bfbb5e5ec4750c767996c9856177f2e32c767cdc7ae21303" gracePeriod=30
Jan 30 13:27:10 crc kubenswrapper[5039]: I0130 13:27:10.220874 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="778f1624-3c0b-49a5-b123-c7c38af92ba8" containerName="ceilometer-central-agent" containerID="cri-o://30992ee8ba0529a37ed76d95d573663c278c354cb818f9ac7a9d652429d2c938" gracePeriod=30
Jan 30 13:27:10 crc kubenswrapper[5039]: I0130 13:27:10.619857 5039 generic.go:334] "Generic (PLEG): container finished" podID="778f1624-3c0b-49a5-b123-c7c38af92ba8" containerID="c8a11dd73ab9b04f3ed5e0cf28b6f5d0484388875347b67c833d175590fed0fb" exitCode=0
Jan 30 13:27:10 crc kubenswrapper[5039]: I0130 13:27:10.619892 5039 generic.go:334] "Generic (PLEG): container finished" podID="778f1624-3c0b-49a5-b123-c7c38af92ba8" containerID="7bad623e26a4065c64959b964b234add54b70f92bc310616e472e12129636c83" exitCode=2
Jan 30 13:27:10 crc kubenswrapper[5039]: I0130 13:27:10.619899 5039 generic.go:334] "Generic (PLEG): container finished" podID="778f1624-3c0b-49a5-b123-c7c38af92ba8" containerID="3bbe64e17c9ac733bfbb5e5ec4750c767996c9856177f2e32c767cdc7ae21303" exitCode=0
Jan 30 13:27:10 crc kubenswrapper[5039]: I0130 13:27:10.619941 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"778f1624-3c0b-49a5-b123-c7c38af92ba8","Type":"ContainerDied","Data":"c8a11dd73ab9b04f3ed5e0cf28b6f5d0484388875347b67c833d175590fed0fb"}
Jan 30 13:27:10 crc kubenswrapper[5039]: I0130 13:27:10.619972 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"778f1624-3c0b-49a5-b123-c7c38af92ba8","Type":"ContainerDied","Data":"7bad623e26a4065c64959b964b234add54b70f92bc310616e472e12129636c83"}
Jan 30 13:27:10 crc kubenswrapper[5039]: I0130 13:27:10.619985 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"778f1624-3c0b-49a5-b123-c7c38af92ba8","Type":"ContainerDied","Data":"3bbe64e17c9ac733bfbb5e5ec4750c767996c9856177f2e32c767cdc7ae21303"}
Jan 30 13:27:10 crc kubenswrapper[5039]: I0130 13:27:10.625330 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-t2n6t" event={"ID":"3f702130-7802-4f11-96ff-b51a7edf7740","Type":"ContainerStarted","Data":"73992dc376899a4ce7d89189a450ce8eda00367cf2dc729e0d07d2f986e8c831"}
Jan 30 13:27:10 crc kubenswrapper[5039]: I0130 13:27:10.625405 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cd5cbd7b9-t2n6t"
Jan 30 13:27:10 crc kubenswrapper[5039]: I0130 13:27:10.653167 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 30 13:27:10 crc kubenswrapper[5039]: I0130 13:27:10.653353 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="af70fa58-fb1f-48bd-8d6c-87a63f461dae" containerName="nova-api-log" containerID="cri-o://cfd03a83c32f96acf99ccdcef85b9eb64c2b11a677b30dc70395c2214b7fb355" gracePeriod=30
Jan 30 13:27:10 crc kubenswrapper[5039]: I0130 13:27:10.653522 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="af70fa58-fb1f-48bd-8d6c-87a63f461dae" containerName="nova-api-api" containerID="cri-o://f94b1e2d621ba40071f9fc0e8dd4db8eb119899c5f28e51a3c748ef1f6e37f12" gracePeriod=30
Jan 30 13:27:10 crc kubenswrapper[5039]: I0130 13:27:10.664529 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cd5cbd7b9-t2n6t" podStartSLOduration=3.664512651 podStartE2EDuration="3.664512651s" podCreationTimestamp="2026-01-30 13:27:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:27:10.662368094 +0000 UTC m=+1395.323049321" watchObservedRunningTime="2026-01-30 13:27:10.664512651 +0000 UTC m=+1395.325193878"
Jan 30 13:27:10 crc kubenswrapper[5039]: I0130 13:27:10.994978 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.137400 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7jg4\" (UniqueName: \"kubernetes.io/projected/778f1624-3c0b-49a5-b123-c7c38af92ba8-kube-api-access-v7jg4\") pod \"778f1624-3c0b-49a5-b123-c7c38af92ba8\" (UID: \"778f1624-3c0b-49a5-b123-c7c38af92ba8\") "
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.137486 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/778f1624-3c0b-49a5-b123-c7c38af92ba8-ceilometer-tls-certs\") pod \"778f1624-3c0b-49a5-b123-c7c38af92ba8\" (UID: \"778f1624-3c0b-49a5-b123-c7c38af92ba8\") "
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.137531 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/778f1624-3c0b-49a5-b123-c7c38af92ba8-log-httpd\") pod \"778f1624-3c0b-49a5-b123-c7c38af92ba8\" (UID: \"778f1624-3c0b-49a5-b123-c7c38af92ba8\") "
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.137585 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/778f1624-3c0b-49a5-b123-c7c38af92ba8-run-httpd\") pod \"778f1624-3c0b-49a5-b123-c7c38af92ba8\" (UID: \"778f1624-3c0b-49a5-b123-c7c38af92ba8\") "
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.137605 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/778f1624-3c0b-49a5-b123-c7c38af92ba8-config-data\") pod \"778f1624-3c0b-49a5-b123-c7c38af92ba8\" (UID: \"778f1624-3c0b-49a5-b123-c7c38af92ba8\") "
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.137639 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/778f1624-3c0b-49a5-b123-c7c38af92ba8-sg-core-conf-yaml\") pod \"778f1624-3c0b-49a5-b123-c7c38af92ba8\" (UID: \"778f1624-3c0b-49a5-b123-c7c38af92ba8\") "
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.137663 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/778f1624-3c0b-49a5-b123-c7c38af92ba8-combined-ca-bundle\") pod \"778f1624-3c0b-49a5-b123-c7c38af92ba8\" (UID: \"778f1624-3c0b-49a5-b123-c7c38af92ba8\") "
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.137726 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/778f1624-3c0b-49a5-b123-c7c38af92ba8-scripts\") pod \"778f1624-3c0b-49a5-b123-c7c38af92ba8\" (UID: \"778f1624-3c0b-49a5-b123-c7c38af92ba8\") "
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.138250 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/778f1624-3c0b-49a5-b123-c7c38af92ba8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "778f1624-3c0b-49a5-b123-c7c38af92ba8" (UID: "778f1624-3c0b-49a5-b123-c7c38af92ba8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.138589 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/778f1624-3c0b-49a5-b123-c7c38af92ba8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "778f1624-3c0b-49a5-b123-c7c38af92ba8" (UID: "778f1624-3c0b-49a5-b123-c7c38af92ba8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.160221 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/778f1624-3c0b-49a5-b123-c7c38af92ba8-scripts" (OuterVolumeSpecName: "scripts") pod "778f1624-3c0b-49a5-b123-c7c38af92ba8" (UID: "778f1624-3c0b-49a5-b123-c7c38af92ba8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.176240 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/778f1624-3c0b-49a5-b123-c7c38af92ba8-kube-api-access-v7jg4" (OuterVolumeSpecName: "kube-api-access-v7jg4") pod "778f1624-3c0b-49a5-b123-c7c38af92ba8" (UID: "778f1624-3c0b-49a5-b123-c7c38af92ba8"). InnerVolumeSpecName "kube-api-access-v7jg4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.236221 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/778f1624-3c0b-49a5-b123-c7c38af92ba8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "778f1624-3c0b-49a5-b123-c7c38af92ba8" (UID: "778f1624-3c0b-49a5-b123-c7c38af92ba8"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.241236 5039 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/778f1624-3c0b-49a5-b123-c7c38af92ba8-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.241266 5039 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/778f1624-3c0b-49a5-b123-c7c38af92ba8-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.241276 5039 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/778f1624-3c0b-49a5-b123-c7c38af92ba8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.241284 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/778f1624-3c0b-49a5-b123-c7c38af92ba8-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.241292 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7jg4\" (UniqueName: \"kubernetes.io/projected/778f1624-3c0b-49a5-b123-c7c38af92ba8-kube-api-access-v7jg4\") on node \"crc\" DevicePath \"\""
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.321297 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/778f1624-3c0b-49a5-b123-c7c38af92ba8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "778f1624-3c0b-49a5-b123-c7c38af92ba8" (UID: "778f1624-3c0b-49a5-b123-c7c38af92ba8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.330406 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/778f1624-3c0b-49a5-b123-c7c38af92ba8-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "778f1624-3c0b-49a5-b123-c7c38af92ba8" (UID: "778f1624-3c0b-49a5-b123-c7c38af92ba8"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.344160 5039 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/778f1624-3c0b-49a5-b123-c7c38af92ba8-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.344195 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/778f1624-3c0b-49a5-b123-c7c38af92ba8-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.371147 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/778f1624-3c0b-49a5-b123-c7c38af92ba8-config-data" (OuterVolumeSpecName: "config-data") pod "778f1624-3c0b-49a5-b123-c7c38af92ba8" (UID: "778f1624-3c0b-49a5-b123-c7c38af92ba8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.446108 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/778f1624-3c0b-49a5-b123-c7c38af92ba8-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.634520 5039 generic.go:334] "Generic (PLEG): container finished" podID="af70fa58-fb1f-48bd-8d6c-87a63f461dae" containerID="cfd03a83c32f96acf99ccdcef85b9eb64c2b11a677b30dc70395c2214b7fb355" exitCode=143
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.634639 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"af70fa58-fb1f-48bd-8d6c-87a63f461dae","Type":"ContainerDied","Data":"cfd03a83c32f96acf99ccdcef85b9eb64c2b11a677b30dc70395c2214b7fb355"}
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.637819 5039 generic.go:334] "Generic (PLEG): container finished" podID="778f1624-3c0b-49a5-b123-c7c38af92ba8" containerID="30992ee8ba0529a37ed76d95d573663c278c354cb818f9ac7a9d652429d2c938" exitCode=0
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.637902 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.637910 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"778f1624-3c0b-49a5-b123-c7c38af92ba8","Type":"ContainerDied","Data":"30992ee8ba0529a37ed76d95d573663c278c354cb818f9ac7a9d652429d2c938"}
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.637972 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"778f1624-3c0b-49a5-b123-c7c38af92ba8","Type":"ContainerDied","Data":"6614bbaf0c08cdbd12c87d26109fdd7fc2758ee316f7840dad0ab9d434c19a76"}
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.638026 5039 scope.go:117] "RemoveContainer" containerID="c8a11dd73ab9b04f3ed5e0cf28b6f5d0484388875347b67c833d175590fed0fb"
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.669547 5039 scope.go:117] "RemoveContainer" containerID="7bad623e26a4065c64959b964b234add54b70f92bc310616e472e12129636c83"
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.670675 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.687638 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.699696 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 30 13:27:11 crc kubenswrapper[5039]: E0130 13:27:11.700167 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="778f1624-3c0b-49a5-b123-c7c38af92ba8" containerName="ceilometer-notification-agent"
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.700180 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="778f1624-3c0b-49a5-b123-c7c38af92ba8" containerName="ceilometer-notification-agent"
Jan 30 13:27:11 crc kubenswrapper[5039]: E0130 13:27:11.700206 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="778f1624-3c0b-49a5-b123-c7c38af92ba8" containerName="proxy-httpd"
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.700212 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="778f1624-3c0b-49a5-b123-c7c38af92ba8" containerName="proxy-httpd"
Jan 30 13:27:11 crc kubenswrapper[5039]: E0130 13:27:11.700336 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="778f1624-3c0b-49a5-b123-c7c38af92ba8" containerName="sg-core"
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.700345 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="778f1624-3c0b-49a5-b123-c7c38af92ba8" containerName="sg-core"
Jan 30 13:27:11 crc kubenswrapper[5039]: E0130 13:27:11.700363 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="778f1624-3c0b-49a5-b123-c7c38af92ba8" containerName="ceilometer-central-agent"
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.700369 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="778f1624-3c0b-49a5-b123-c7c38af92ba8" containerName="ceilometer-central-agent"
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.700531 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="778f1624-3c0b-49a5-b123-c7c38af92ba8" containerName="ceilometer-central-agent"
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.700539 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="778f1624-3c0b-49a5-b123-c7c38af92ba8" containerName="sg-core"
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.700545 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="778f1624-3c0b-49a5-b123-c7c38af92ba8" containerName="proxy-httpd"
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.700564 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="778f1624-3c0b-49a5-b123-c7c38af92ba8" containerName="ceilometer-notification-agent"
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.702148 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.706070 5039 scope.go:117] "RemoveContainer" containerID="3bbe64e17c9ac733bfbb5e5ec4750c767996c9856177f2e32c767cdc7ae21303"
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.706375 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.706516 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.706623 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.715451 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.769159 5039 scope.go:117] "RemoveContainer" containerID="30992ee8ba0529a37ed76d95d573663c278c354cb818f9ac7a9d652429d2c938"
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.786143 5039 scope.go:117] "RemoveContainer" containerID="c8a11dd73ab9b04f3ed5e0cf28b6f5d0484388875347b67c833d175590fed0fb"
Jan 30 13:27:11 crc kubenswrapper[5039]: E0130 13:27:11.786824 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8a11dd73ab9b04f3ed5e0cf28b6f5d0484388875347b67c833d175590fed0fb\": container with ID starting with c8a11dd73ab9b04f3ed5e0cf28b6f5d0484388875347b67c833d175590fed0fb not found: ID does not exist" containerID="c8a11dd73ab9b04f3ed5e0cf28b6f5d0484388875347b67c833d175590fed0fb"
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.786850 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8a11dd73ab9b04f3ed5e0cf28b6f5d0484388875347b67c833d175590fed0fb"} err="failed to get container status \"c8a11dd73ab9b04f3ed5e0cf28b6f5d0484388875347b67c833d175590fed0fb\": rpc error: code = NotFound desc = could not find container \"c8a11dd73ab9b04f3ed5e0cf28b6f5d0484388875347b67c833d175590fed0fb\": container with ID starting with c8a11dd73ab9b04f3ed5e0cf28b6f5d0484388875347b67c833d175590fed0fb not found: ID does not exist"
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.786871 5039 scope.go:117] "RemoveContainer" containerID="7bad623e26a4065c64959b964b234add54b70f92bc310616e472e12129636c83"
Jan 30 13:27:11 crc kubenswrapper[5039]: E0130 13:27:11.787195 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bad623e26a4065c64959b964b234add54b70f92bc310616e472e12129636c83\": container with ID starting with 7bad623e26a4065c64959b964b234add54b70f92bc310616e472e12129636c83 not found: ID does not exist" containerID="7bad623e26a4065c64959b964b234add54b70f92bc310616e472e12129636c83"
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.787211 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bad623e26a4065c64959b964b234add54b70f92bc310616e472e12129636c83"} err="failed to get container status \"7bad623e26a4065c64959b964b234add54b70f92bc310616e472e12129636c83\": rpc error: code = NotFound desc = could not find container \"7bad623e26a4065c64959b964b234add54b70f92bc310616e472e12129636c83\": container with ID starting with 7bad623e26a4065c64959b964b234add54b70f92bc310616e472e12129636c83 not found: ID does not exist"
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.787223 5039 scope.go:117] "RemoveContainer" containerID="3bbe64e17c9ac733bfbb5e5ec4750c767996c9856177f2e32c767cdc7ae21303"
Jan 30 13:27:11 crc kubenswrapper[5039]: E0130 13:27:11.787523 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bbe64e17c9ac733bfbb5e5ec4750c767996c9856177f2e32c767cdc7ae21303\": container with ID starting with 3bbe64e17c9ac733bfbb5e5ec4750c767996c9856177f2e32c767cdc7ae21303 not found: ID does not exist" containerID="3bbe64e17c9ac733bfbb5e5ec4750c767996c9856177f2e32c767cdc7ae21303"
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.787537 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bbe64e17c9ac733bfbb5e5ec4750c767996c9856177f2e32c767cdc7ae21303"} err="failed to get container status \"3bbe64e17c9ac733bfbb5e5ec4750c767996c9856177f2e32c767cdc7ae21303\": rpc error: code = NotFound desc = could not find container \"3bbe64e17c9ac733bfbb5e5ec4750c767996c9856177f2e32c767cdc7ae21303\": container with ID starting with 3bbe64e17c9ac733bfbb5e5ec4750c767996c9856177f2e32c767cdc7ae21303 not found: ID does not exist"
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.787547 5039 scope.go:117] "RemoveContainer" containerID="30992ee8ba0529a37ed76d95d573663c278c354cb818f9ac7a9d652429d2c938"
Jan 30 13:27:11 crc kubenswrapper[5039]: E0130 13:27:11.787788 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30992ee8ba0529a37ed76d95d573663c278c354cb818f9ac7a9d652429d2c938\": container with ID starting with 30992ee8ba0529a37ed76d95d573663c278c354cb818f9ac7a9d652429d2c938 not found: ID does not exist" containerID="30992ee8ba0529a37ed76d95d573663c278c354cb818f9ac7a9d652429d2c938"
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.787803 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30992ee8ba0529a37ed76d95d573663c278c354cb818f9ac7a9d652429d2c938"} err="failed to get container status \"30992ee8ba0529a37ed76d95d573663c278c354cb818f9ac7a9d652429d2c938\": rpc error: code = NotFound desc = could not find container \"30992ee8ba0529a37ed76d95d573663c278c354cb818f9ac7a9d652429d2c938\": container with ID starting with 30992ee8ba0529a37ed76d95d573663c278c354cb818f9ac7a9d652429d2c938 not found: ID does not exist"
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.852720 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-log-httpd\") pod \"ceilometer-0\" (UID: \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\") " pod="openstack/ceilometer-0"
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.852793 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjtbs\" (UniqueName: \"kubernetes.io/projected/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-kube-api-access-sjtbs\") pod \"ceilometer-0\" (UID: \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\") " pod="openstack/ceilometer-0"
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.852818 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\") " pod="openstack/ceilometer-0"
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.852878 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-run-httpd\") pod \"ceilometer-0\" (UID: \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\") " pod="openstack/ceilometer-0"
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.853049 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-scripts\") pod \"ceilometer-0\" (UID: \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\") " pod="openstack/ceilometer-0"
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.853100 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-config-data\") pod \"ceilometer-0\" (UID: \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\") " pod="openstack/ceilometer-0"
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.853187 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\") " pod="openstack/ceilometer-0"
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.853356 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\") " pod="openstack/ceilometer-0"
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.954998 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-log-httpd\") pod \"ceilometer-0\" (UID: \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\") " pod="openstack/ceilometer-0"
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.955065 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjtbs\" (UniqueName: \"kubernetes.io/projected/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-kube-api-access-sjtbs\") pod \"ceilometer-0\" (UID: \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\") " pod="openstack/ceilometer-0"
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.955090 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\") " pod="openstack/ceilometer-0"
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.955125 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-run-httpd\") pod \"ceilometer-0\" (UID: \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\") " pod="openstack/ceilometer-0"
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.955160 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-scripts\") pod \"ceilometer-0\" (UID: \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\") " pod="openstack/ceilometer-0"
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.955177 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-config-data\") pod \"ceilometer-0\" (UID: \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\") " pod="openstack/ceilometer-0"
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.955204 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\") " pod="openstack/ceilometer-0"
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.955260 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\") " pod="openstack/ceilometer-0"
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.955447 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-log-httpd\") pod \"ceilometer-0\" (UID: \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\") " pod="openstack/ceilometer-0"
Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.955731 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName:
\"kubernetes.io/empty-dir/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-run-httpd\") pod \"ceilometer-0\" (UID: \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\") " pod="openstack/ceilometer-0" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.958944 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\") " pod="openstack/ceilometer-0" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.959596 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\") " pod="openstack/ceilometer-0" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.959774 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\") " pod="openstack/ceilometer-0" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.960526 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-config-data\") pod \"ceilometer-0\" (UID: \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\") " pod="openstack/ceilometer-0" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.964652 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-scripts\") pod \"ceilometer-0\" (UID: \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\") " pod="openstack/ceilometer-0" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.972479 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjtbs\" (UniqueName: \"kubernetes.io/projected/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-kube-api-access-sjtbs\") pod \"ceilometer-0\" (UID: \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\") " pod="openstack/ceilometer-0" Jan 30 13:27:12 crc kubenswrapper[5039]: I0130 13:27:12.066795 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 13:27:12 crc kubenswrapper[5039]: I0130 13:27:12.104886 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="778f1624-3c0b-49a5-b123-c7c38af92ba8" path="/var/lib/kubelet/pods/778f1624-3c0b-49a5-b123-c7c38af92ba8/volumes" Jan 30 13:27:12 crc kubenswrapper[5039]: I0130 13:27:12.375877 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:27:12 crc kubenswrapper[5039]: I0130 13:27:12.533499 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:27:12 crc kubenswrapper[5039]: W0130 13:27:12.540275 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d219304_2fcc_48f8_ba20_b0fbf12a4e84.slice/crio-6361c6322ce2a8e0ecf181762f695b712533101ae03fcea83b4f10678b7c1fbb WatchSource:0}: Error finding container 6361c6322ce2a8e0ecf181762f695b712533101ae03fcea83b4f10678b7c1fbb: Status 404 returned error can't find the container with id 6361c6322ce2a8e0ecf181762f695b712533101ae03fcea83b4f10678b7c1fbb Jan 30 13:27:12 crc kubenswrapper[5039]: I0130 13:27:12.649498 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d219304-2fcc-48f8-ba20-b0fbf12a4e84","Type":"ContainerStarted","Data":"6361c6322ce2a8e0ecf181762f695b712533101ae03fcea83b4f10678b7c1fbb"} Jan 30 13:27:13 crc kubenswrapper[5039]: I0130 13:27:13.659370 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 30 13:27:13 crc kubenswrapper[5039]: I0130 13:27:13.664081 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d219304-2fcc-48f8-ba20-b0fbf12a4e84","Type":"ContainerStarted","Data":"6168424d9fcde1c472d018eb8f664faa70f0212af120804f8142bdaa99fbba6d"} Jan 30 13:27:14 crc kubenswrapper[5039]: I0130 13:27:14.690707 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d219304-2fcc-48f8-ba20-b0fbf12a4e84","Type":"ContainerStarted","Data":"a0ccdeedefdd78338361e7b4e402538eeeef76d1801e2713dd0bf10ef7d5012c"} Jan 30 13:27:14 crc kubenswrapper[5039]: I0130 13:27:14.691055 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d219304-2fcc-48f8-ba20-b0fbf12a4e84","Type":"ContainerStarted","Data":"dc5801ff3dd03c438e222832e361614693da44d3ab80900fecea2421ccf0dcbf"} Jan 30 13:27:15 crc kubenswrapper[5039]: I0130 13:27:15.584483 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 13:27:15 crc kubenswrapper[5039]: I0130 13:27:15.703867 5039 generic.go:334] "Generic (PLEG): container finished" podID="af70fa58-fb1f-48bd-8d6c-87a63f461dae" containerID="f94b1e2d621ba40071f9fc0e8dd4db8eb119899c5f28e51a3c748ef1f6e37f12" exitCode=0 Jan 30 13:27:15 crc kubenswrapper[5039]: I0130 13:27:15.704681 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"af70fa58-fb1f-48bd-8d6c-87a63f461dae","Type":"ContainerDied","Data":"f94b1e2d621ba40071f9fc0e8dd4db8eb119899c5f28e51a3c748ef1f6e37f12"} Jan 30 13:27:15 crc kubenswrapper[5039]: I0130 13:27:15.704808 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"af70fa58-fb1f-48bd-8d6c-87a63f461dae","Type":"ContainerDied","Data":"bf1f32b5656cbd0ec0a02e133a8fd538c702e03de684cfb3027704d645025a94"} Jan 30 13:27:15 crc kubenswrapper[5039]: I0130 13:27:15.704928 5039 scope.go:117] "RemoveContainer" containerID="f94b1e2d621ba40071f9fc0e8dd4db8eb119899c5f28e51a3c748ef1f6e37f12" Jan 30 13:27:15 crc kubenswrapper[5039]: I0130 13:27:15.705301 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 13:27:15 crc kubenswrapper[5039]: I0130 13:27:15.731950 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbrm4\" (UniqueName: \"kubernetes.io/projected/af70fa58-fb1f-48bd-8d6c-87a63f461dae-kube-api-access-jbrm4\") pod \"af70fa58-fb1f-48bd-8d6c-87a63f461dae\" (UID: \"af70fa58-fb1f-48bd-8d6c-87a63f461dae\") " Jan 30 13:27:15 crc kubenswrapper[5039]: I0130 13:27:15.732004 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af70fa58-fb1f-48bd-8d6c-87a63f461dae-config-data\") pod \"af70fa58-fb1f-48bd-8d6c-87a63f461dae\" (UID: \"af70fa58-fb1f-48bd-8d6c-87a63f461dae\") " Jan 30 13:27:15 crc kubenswrapper[5039]: I0130 13:27:15.732145 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af70fa58-fb1f-48bd-8d6c-87a63f461dae-combined-ca-bundle\") pod \"af70fa58-fb1f-48bd-8d6c-87a63f461dae\" (UID: \"af70fa58-fb1f-48bd-8d6c-87a63f461dae\") " Jan 30 13:27:15 crc kubenswrapper[5039]: I0130 13:27:15.732230 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af70fa58-fb1f-48bd-8d6c-87a63f461dae-logs\") pod \"af70fa58-fb1f-48bd-8d6c-87a63f461dae\" (UID: \"af70fa58-fb1f-48bd-8d6c-87a63f461dae\") " Jan 30 13:27:15 crc kubenswrapper[5039]: I0130 13:27:15.733518 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af70fa58-fb1f-48bd-8d6c-87a63f461dae-logs" (OuterVolumeSpecName: "logs") pod "af70fa58-fb1f-48bd-8d6c-87a63f461dae" (UID: "af70fa58-fb1f-48bd-8d6c-87a63f461dae"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:27:15 crc kubenswrapper[5039]: I0130 13:27:15.738147 5039 scope.go:117] "RemoveContainer" containerID="cfd03a83c32f96acf99ccdcef85b9eb64c2b11a677b30dc70395c2214b7fb355" Jan 30 13:27:15 crc kubenswrapper[5039]: I0130 13:27:15.742467 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af70fa58-fb1f-48bd-8d6c-87a63f461dae-kube-api-access-jbrm4" (OuterVolumeSpecName: "kube-api-access-jbrm4") pod "af70fa58-fb1f-48bd-8d6c-87a63f461dae" (UID: "af70fa58-fb1f-48bd-8d6c-87a63f461dae"). InnerVolumeSpecName "kube-api-access-jbrm4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:27:15 crc kubenswrapper[5039]: I0130 13:27:15.768354 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af70fa58-fb1f-48bd-8d6c-87a63f461dae-config-data" (OuterVolumeSpecName: "config-data") pod "af70fa58-fb1f-48bd-8d6c-87a63f461dae" (UID: "af70fa58-fb1f-48bd-8d6c-87a63f461dae"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:27:15 crc kubenswrapper[5039]: I0130 13:27:15.773384 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af70fa58-fb1f-48bd-8d6c-87a63f461dae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "af70fa58-fb1f-48bd-8d6c-87a63f461dae" (UID: "af70fa58-fb1f-48bd-8d6c-87a63f461dae"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:27:15 crc kubenswrapper[5039]: I0130 13:27:15.834400 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af70fa58-fb1f-48bd-8d6c-87a63f461dae-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:15 crc kubenswrapper[5039]: I0130 13:27:15.834715 5039 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af70fa58-fb1f-48bd-8d6c-87a63f461dae-logs\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:15 crc kubenswrapper[5039]: I0130 13:27:15.834726 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbrm4\" (UniqueName: \"kubernetes.io/projected/af70fa58-fb1f-48bd-8d6c-87a63f461dae-kube-api-access-jbrm4\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:15 crc kubenswrapper[5039]: I0130 13:27:15.834738 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af70fa58-fb1f-48bd-8d6c-87a63f461dae-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:15 crc kubenswrapper[5039]: I0130 13:27:15.843637 5039 scope.go:117] "RemoveContainer" containerID="f94b1e2d621ba40071f9fc0e8dd4db8eb119899c5f28e51a3c748ef1f6e37f12" Jan 30 13:27:15 crc kubenswrapper[5039]: E0130 13:27:15.844211 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f94b1e2d621ba40071f9fc0e8dd4db8eb119899c5f28e51a3c748ef1f6e37f12\": container with ID starting with f94b1e2d621ba40071f9fc0e8dd4db8eb119899c5f28e51a3c748ef1f6e37f12 not found: ID does not exist" containerID="f94b1e2d621ba40071f9fc0e8dd4db8eb119899c5f28e51a3c748ef1f6e37f12" Jan 30 13:27:15 crc kubenswrapper[5039]: I0130 13:27:15.844263 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f94b1e2d621ba40071f9fc0e8dd4db8eb119899c5f28e51a3c748ef1f6e37f12"} err="failed to get container status 
\"f94b1e2d621ba40071f9fc0e8dd4db8eb119899c5f28e51a3c748ef1f6e37f12\": rpc error: code = NotFound desc = could not find container \"f94b1e2d621ba40071f9fc0e8dd4db8eb119899c5f28e51a3c748ef1f6e37f12\": container with ID starting with f94b1e2d621ba40071f9fc0e8dd4db8eb119899c5f28e51a3c748ef1f6e37f12 not found: ID does not exist" Jan 30 13:27:15 crc kubenswrapper[5039]: I0130 13:27:15.844290 5039 scope.go:117] "RemoveContainer" containerID="cfd03a83c32f96acf99ccdcef85b9eb64c2b11a677b30dc70395c2214b7fb355" Jan 30 13:27:15 crc kubenswrapper[5039]: E0130 13:27:15.845132 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfd03a83c32f96acf99ccdcef85b9eb64c2b11a677b30dc70395c2214b7fb355\": container with ID starting with cfd03a83c32f96acf99ccdcef85b9eb64c2b11a677b30dc70395c2214b7fb355 not found: ID does not exist" containerID="cfd03a83c32f96acf99ccdcef85b9eb64c2b11a677b30dc70395c2214b7fb355" Jan 30 13:27:15 crc kubenswrapper[5039]: I0130 13:27:15.845194 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfd03a83c32f96acf99ccdcef85b9eb64c2b11a677b30dc70395c2214b7fb355"} err="failed to get container status \"cfd03a83c32f96acf99ccdcef85b9eb64c2b11a677b30dc70395c2214b7fb355\": rpc error: code = NotFound desc = could not find container \"cfd03a83c32f96acf99ccdcef85b9eb64c2b11a677b30dc70395c2214b7fb355\": container with ID starting with cfd03a83c32f96acf99ccdcef85b9eb64c2b11a677b30dc70395c2214b7fb355 not found: ID does not exist" Jan 30 13:27:16 crc kubenswrapper[5039]: I0130 13:27:16.037733 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 13:27:16 crc kubenswrapper[5039]: I0130 13:27:16.045811 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 13:27:16 crc kubenswrapper[5039]: I0130 13:27:16.064765 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 13:27:16 crc kubenswrapper[5039]: E0130 13:27:16.065256 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af70fa58-fb1f-48bd-8d6c-87a63f461dae" containerName="nova-api-log" Jan 30 13:27:16 crc kubenswrapper[5039]: I0130 13:27:16.065271 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="af70fa58-fb1f-48bd-8d6c-87a63f461dae" containerName="nova-api-log" Jan 30 13:27:16 crc kubenswrapper[5039]: E0130 13:27:16.065288 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af70fa58-fb1f-48bd-8d6c-87a63f461dae" containerName="nova-api-api" Jan 30 13:27:16 crc kubenswrapper[5039]: I0130 13:27:16.065294 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="af70fa58-fb1f-48bd-8d6c-87a63f461dae" containerName="nova-api-api" Jan 30 13:27:16 crc kubenswrapper[5039]: I0130 13:27:16.065479 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="af70fa58-fb1f-48bd-8d6c-87a63f461dae" containerName="nova-api-api" Jan 30 13:27:16 crc kubenswrapper[5039]: I0130 13:27:16.065504 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="af70fa58-fb1f-48bd-8d6c-87a63f461dae" containerName="nova-api-log" Jan 30 13:27:16 crc kubenswrapper[5039]: I0130 13:27:16.066511 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 13:27:16 crc kubenswrapper[5039]: I0130 13:27:16.071791 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 30 13:27:16 crc kubenswrapper[5039]: I0130 13:27:16.072071 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 13:27:16 crc kubenswrapper[5039]: I0130 13:27:16.072734 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 30 13:27:16 crc kubenswrapper[5039]: I0130 13:27:16.076469 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 13:27:16 crc kubenswrapper[5039]: I0130 13:27:16.105585 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af70fa58-fb1f-48bd-8d6c-87a63f461dae" path="/var/lib/kubelet/pods/af70fa58-fb1f-48bd-8d6c-87a63f461dae/volumes" Jan 30 13:27:16 crc kubenswrapper[5039]: I0130 13:27:16.139597 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b9b2b78-5b27-4544-9c74-990d418894c8-logs\") pod \"nova-api-0\" (UID: \"8b9b2b78-5b27-4544-9c74-990d418894c8\") " pod="openstack/nova-api-0" Jan 30 13:27:16 crc kubenswrapper[5039]: I0130 13:27:16.139683 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b9b2b78-5b27-4544-9c74-990d418894c8-public-tls-certs\") pod \"nova-api-0\" (UID: \"8b9b2b78-5b27-4544-9c74-990d418894c8\") " pod="openstack/nova-api-0" Jan 30 13:27:16 crc kubenswrapper[5039]: I0130 13:27:16.139720 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b9b2b78-5b27-4544-9c74-990d418894c8-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8b9b2b78-5b27-4544-9c74-990d418894c8\") " pod="openstack/nova-api-0" Jan 30 13:27:16 crc kubenswrapper[5039]: I0130 13:27:16.139774 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b9b2b78-5b27-4544-9c74-990d418894c8-config-data\") pod \"nova-api-0\" (UID: \"8b9b2b78-5b27-4544-9c74-990d418894c8\") " pod="openstack/nova-api-0" Jan 30 13:27:16 crc kubenswrapper[5039]: I0130 13:27:16.139928 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b9b2b78-5b27-4544-9c74-990d418894c8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8b9b2b78-5b27-4544-9c74-990d418894c8\") " pod="openstack/nova-api-0" Jan 30 13:27:16 crc kubenswrapper[5039]: I0130 13:27:16.139992 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgp4b\" (UniqueName: \"kubernetes.io/projected/8b9b2b78-5b27-4544-9c74-990d418894c8-kube-api-access-dgp4b\") pod \"nova-api-0\" (UID: \"8b9b2b78-5b27-4544-9c74-990d418894c8\") " pod="openstack/nova-api-0" Jan 30 13:27:16 crc kubenswrapper[5039]: I0130 13:27:16.241281 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b9b2b78-5b27-4544-9c74-990d418894c8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8b9b2b78-5b27-4544-9c74-990d418894c8\") " pod="openstack/nova-api-0" Jan 30 13:27:16 crc 
kubenswrapper[5039]: I0130 13:27:16.241347 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgp4b\" (UniqueName: \"kubernetes.io/projected/8b9b2b78-5b27-4544-9c74-990d418894c8-kube-api-access-dgp4b\") pod \"nova-api-0\" (UID: \"8b9b2b78-5b27-4544-9c74-990d418894c8\") " pod="openstack/nova-api-0" Jan 30 13:27:16 crc kubenswrapper[5039]: I0130 13:27:16.241390 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b9b2b78-5b27-4544-9c74-990d418894c8-logs\") pod \"nova-api-0\" (UID: \"8b9b2b78-5b27-4544-9c74-990d418894c8\") " pod="openstack/nova-api-0" Jan 30 13:27:16 crc kubenswrapper[5039]: I0130 13:27:16.241462 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b9b2b78-5b27-4544-9c74-990d418894c8-public-tls-certs\") pod \"nova-api-0\" (UID: \"8b9b2b78-5b27-4544-9c74-990d418894c8\") " pod="openstack/nova-api-0" Jan 30 13:27:16 crc kubenswrapper[5039]: I0130 13:27:16.241523 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b9b2b78-5b27-4544-9c74-990d418894c8-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8b9b2b78-5b27-4544-9c74-990d418894c8\") " pod="openstack/nova-api-0" Jan 30 13:27:16 crc kubenswrapper[5039]: I0130 13:27:16.241596 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b9b2b78-5b27-4544-9c74-990d418894c8-config-data\") pod \"nova-api-0\" (UID: \"8b9b2b78-5b27-4544-9c74-990d418894c8\") " pod="openstack/nova-api-0" Jan 30 13:27:16 crc kubenswrapper[5039]: I0130 13:27:16.241951 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b9b2b78-5b27-4544-9c74-990d418894c8-logs\") pod \"nova-api-0\" (UID: \"8b9b2b78-5b27-4544-9c74-990d418894c8\") " pod="openstack/nova-api-0" Jan 30 13:27:16 crc kubenswrapper[5039]: I0130 13:27:16.247249 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b9b2b78-5b27-4544-9c74-990d418894c8-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8b9b2b78-5b27-4544-9c74-990d418894c8\") " pod="openstack/nova-api-0" Jan 30 13:27:16 crc kubenswrapper[5039]: I0130 13:27:16.248741 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b9b2b78-5b27-4544-9c74-990d418894c8-config-data\") pod \"nova-api-0\" (UID: \"8b9b2b78-5b27-4544-9c74-990d418894c8\") " pod="openstack/nova-api-0" Jan 30 13:27:16 crc kubenswrapper[5039]: I0130 13:27:16.248770 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b9b2b78-5b27-4544-9c74-990d418894c8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8b9b2b78-5b27-4544-9c74-990d418894c8\") " pod="openstack/nova-api-0" Jan 30 13:27:16 crc kubenswrapper[5039]: I0130 13:27:16.255693 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b9b2b78-5b27-4544-9c74-990d418894c8-public-tls-certs\") pod \"nova-api-0\" (UID: \"8b9b2b78-5b27-4544-9c74-990d418894c8\") " pod="openstack/nova-api-0" Jan 30 13:27:16 crc kubenswrapper[5039]: I0130 13:27:16.260934 5039 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-dgp4b\" (UniqueName: \"kubernetes.io/projected/8b9b2b78-5b27-4544-9c74-990d418894c8-kube-api-access-dgp4b\") pod \"nova-api-0\" (UID: \"8b9b2b78-5b27-4544-9c74-990d418894c8\") " pod="openstack/nova-api-0" Jan 30 13:27:16 crc kubenswrapper[5039]: I0130 13:27:16.401098 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 13:27:16 crc kubenswrapper[5039]: I0130 13:27:16.867679 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 13:27:17 crc kubenswrapper[5039]: I0130 13:27:17.722889 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8b9b2b78-5b27-4544-9c74-990d418894c8","Type":"ContainerStarted","Data":"46cdd6374825345d3e1406a5a1876895000d528adec77a9193e1137b7dc2eb04"} Jan 30 13:27:17 crc kubenswrapper[5039]: I0130 13:27:17.724120 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8b9b2b78-5b27-4544-9c74-990d418894c8","Type":"ContainerStarted","Data":"890e98b0679d42d7b2144c30beebab163c61e512b0e040cdea01024c73e229a8"} Jan 30 13:27:17 crc kubenswrapper[5039]: I0130 13:27:17.724293 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8b9b2b78-5b27-4544-9c74-990d418894c8","Type":"ContainerStarted","Data":"cfd9c78c7f863f8fce7a45ddd5a08a98c6b7eaef43b213b0e013a06c8421222f"} Jan 30 13:27:17 crc kubenswrapper[5039]: I0130 13:27:17.725915 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d219304-2fcc-48f8-ba20-b0fbf12a4e84","Type":"ContainerStarted","Data":"dc4f961953a1c708a757ac6a26c0e3161150c90be3c0dfa18fe8d24228d9dc66"} Jan 30 13:27:17 crc kubenswrapper[5039]: I0130 13:27:17.726113 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9d219304-2fcc-48f8-ba20-b0fbf12a4e84" containerName="ceilometer-central-agent" containerID="cri-o://6168424d9fcde1c472d018eb8f664faa70f0212af120804f8142bdaa99fbba6d" gracePeriod=30 Jan 30 13:27:17 crc kubenswrapper[5039]: I0130 13:27:17.726204 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 13:27:17 crc kubenswrapper[5039]: I0130 13:27:17.726222 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9d219304-2fcc-48f8-ba20-b0fbf12a4e84" containerName="proxy-httpd" containerID="cri-o://dc4f961953a1c708a757ac6a26c0e3161150c90be3c0dfa18fe8d24228d9dc66" gracePeriod=30 Jan 30 13:27:17 crc kubenswrapper[5039]: I0130 13:27:17.726235 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9d219304-2fcc-48f8-ba20-b0fbf12a4e84" containerName="ceilometer-notification-agent" containerID="cri-o://dc5801ff3dd03c438e222832e361614693da44d3ab80900fecea2421ccf0dcbf" gracePeriod=30 Jan 30 13:27:17 crc kubenswrapper[5039]: I0130 13:27:17.726368 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9d219304-2fcc-48f8-ba20-b0fbf12a4e84" containerName="sg-core" containerID="cri-o://a0ccdeedefdd78338361e7b4e402538eeeef76d1801e2713dd0bf10ef7d5012c" gracePeriod=30 Jan 30 13:27:17 crc kubenswrapper[5039]: I0130 13:27:17.770485 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.770470084 podStartE2EDuration="1.770470084s" 
podCreationTimestamp="2026-01-30 13:27:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:27:17.755770926 +0000 UTC m=+1402.416452243" watchObservedRunningTime="2026-01-30 13:27:17.770470084 +0000 UTC m=+1402.431151311" Jan 30 13:27:17 crc kubenswrapper[5039]: I0130 13:27:17.798139 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.160232236 podStartE2EDuration="6.798122753s" podCreationTimestamp="2026-01-30 13:27:11 +0000 UTC" firstStartedPulling="2026-01-30 13:27:12.543087572 +0000 UTC m=+1397.203768799" lastFinishedPulling="2026-01-30 13:27:17.180978069 +0000 UTC m=+1401.841659316" observedRunningTime="2026-01-30 13:27:17.791626792 +0000 UTC m=+1402.452308079" watchObservedRunningTime="2026-01-30 13:27:17.798122753 +0000 UTC m=+1402.458803980" Jan 30 13:27:18 crc kubenswrapper[5039]: E0130 13:27:18.156851 5039 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d219304_2fcc_48f8_ba20_b0fbf12a4e84.slice/crio-dc5801ff3dd03c438e222832e361614693da44d3ab80900fecea2421ccf0dcbf.scope\": RecentStats: unable to find data in memory cache]" Jan 30 13:27:18 crc kubenswrapper[5039]: I0130 13:27:18.209222 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cd5cbd7b9-t2n6t" Jan 30 13:27:18 crc kubenswrapper[5039]: I0130 13:27:18.280745 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-k666b"] Jan 30 13:27:18 crc kubenswrapper[5039]: I0130 13:27:18.281880 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-bccf8f775-k666b" podUID="64ef9901-545b-40a6-84b0-cb1547ff069e" containerName="dnsmasq-dns" containerID="cri-o://9dfd40654744902aafb2b0aa17d9dd91d3b3f7d7d7db7c8f87c4098ed34e0ada" gracePeriod=10 Jan 30 13:27:18 crc kubenswrapper[5039]: I0130 13:27:18.659348 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 30 13:27:18 crc kubenswrapper[5039]: I0130 13:27:18.677297 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 30 13:27:18 crc kubenswrapper[5039]: I0130 13:27:18.736423 5039 generic.go:334] "Generic (PLEG): container finished" podID="64ef9901-545b-40a6-84b0-cb1547ff069e" containerID="9dfd40654744902aafb2b0aa17d9dd91d3b3f7d7d7db7c8f87c4098ed34e0ada" exitCode=0 Jan 30 13:27:18 crc kubenswrapper[5039]: I0130 13:27:18.736480 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-k666b" event={"ID":"64ef9901-545b-40a6-84b0-cb1547ff069e","Type":"ContainerDied","Data":"9dfd40654744902aafb2b0aa17d9dd91d3b3f7d7d7db7c8f87c4098ed34e0ada"} Jan 30 13:27:18 crc kubenswrapper[5039]: I0130 13:27:18.739280 5039 generic.go:334] "Generic (PLEG): container finished" podID="9d219304-2fcc-48f8-ba20-b0fbf12a4e84" containerID="dc4f961953a1c708a757ac6a26c0e3161150c90be3c0dfa18fe8d24228d9dc66" exitCode=0 Jan 30 13:27:18 crc kubenswrapper[5039]: I0130 13:27:18.739307 5039 generic.go:334] "Generic (PLEG): container finished" podID="9d219304-2fcc-48f8-ba20-b0fbf12a4e84" containerID="a0ccdeedefdd78338361e7b4e402538eeeef76d1801e2713dd0bf10ef7d5012c" exitCode=2 Jan 30 13:27:18 crc kubenswrapper[5039]: I0130 13:27:18.739314 5039 
generic.go:334] "Generic (PLEG): container finished" podID="9d219304-2fcc-48f8-ba20-b0fbf12a4e84" containerID="dc5801ff3dd03c438e222832e361614693da44d3ab80900fecea2421ccf0dcbf" exitCode=0 Jan 30 13:27:18 crc kubenswrapper[5039]: I0130 13:27:18.739357 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d219304-2fcc-48f8-ba20-b0fbf12a4e84","Type":"ContainerDied","Data":"dc4f961953a1c708a757ac6a26c0e3161150c90be3c0dfa18fe8d24228d9dc66"} Jan 30 13:27:18 crc kubenswrapper[5039]: I0130 13:27:18.739402 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d219304-2fcc-48f8-ba20-b0fbf12a4e84","Type":"ContainerDied","Data":"a0ccdeedefdd78338361e7b4e402538eeeef76d1801e2713dd0bf10ef7d5012c"} Jan 30 13:27:18 crc kubenswrapper[5039]: I0130 13:27:18.739438 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d219304-2fcc-48f8-ba20-b0fbf12a4e84","Type":"ContainerDied","Data":"dc5801ff3dd03c438e222832e361614693da44d3ab80900fecea2421ccf0dcbf"} Jan 30 13:27:18 crc kubenswrapper[5039]: I0130 13:27:18.755430 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 30 13:27:18 crc kubenswrapper[5039]: I0130 13:27:18.942922 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-k666b" Jan 30 13:27:18 crc kubenswrapper[5039]: I0130 13:27:18.961659 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-sngvh"] Jan 30 13:27:18 crc kubenswrapper[5039]: E0130 13:27:18.962416 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64ef9901-545b-40a6-84b0-cb1547ff069e" containerName="init" Jan 30 13:27:18 crc kubenswrapper[5039]: I0130 13:27:18.962434 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="64ef9901-545b-40a6-84b0-cb1547ff069e" containerName="init" Jan 30 13:27:18 crc kubenswrapper[5039]: E0130 13:27:18.962464 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64ef9901-545b-40a6-84b0-cb1547ff069e" containerName="dnsmasq-dns" Jan 30 13:27:18 crc kubenswrapper[5039]: I0130 13:27:18.962473 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="64ef9901-545b-40a6-84b0-cb1547ff069e" containerName="dnsmasq-dns" Jan 30 13:27:18 crc kubenswrapper[5039]: I0130 13:27:18.962684 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="64ef9901-545b-40a6-84b0-cb1547ff069e" containerName="dnsmasq-dns" Jan 30 13:27:18 crc kubenswrapper[5039]: I0130 13:27:18.963325 5039 util.go:30] "No sandbox for pod can be found. 
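
The pod_startup_latency_tracker entries a few lines up expose two durations per pod: podStartSLOduration excludes image pulling, while podStartE2EDuration includes it. For nova-api-0 the two are equal (1.770s) because both pull timestamps are the zero time; for ceilometer-0 the gap (6.798s minus 2.160s) matches the firstStartedPulling-to-lastFinishedPulling window of about 4.6s. A sketch that extracts those fields from records shaped like the ones above:

    #!/usr/bin/env python3
    # Extract pod startup latency records ("Observed pod startup duration").
    import re
    import sys

    REC_RE = re.compile(
        r'"Observed pod startup duration"\s+pod="([^"]+)"\s+'
        r'podStartSLOduration=([\d.]+)\s+podStartE2EDuration="([\d.]+)s"')

    def main(path):
        text = open(path, encoding="utf-8", errors="replace").read()
        for pod, slo, e2e in REC_RE.findall(text):
            slo, e2e = float(slo), float(e2e)
            # E2E includes image pulling; the difference approximates pull time.
            print(f"{pod:32} SLO={slo:7.3f}s E2E={e2e:7.3f}s pull~{e2e - slo:.3f}s")

    if __name__ == "__main__":
        main(sys.argv[1] if len(sys.argv) > 1 else "kubelet.log")
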
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-sngvh" Jan 30 13:27:18 crc kubenswrapper[5039]: I0130 13:27:18.965892 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 30 13:27:18 crc kubenswrapper[5039]: I0130 13:27:18.965903 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 30 13:27:18 crc kubenswrapper[5039]: I0130 13:27:18.976482 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-sngvh"] Jan 30 13:27:18 crc kubenswrapper[5039]: I0130 13:27:18.989960 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64ef9901-545b-40a6-84b0-cb1547ff069e-ovsdbserver-nb\") pod \"64ef9901-545b-40a6-84b0-cb1547ff069e\" (UID: \"64ef9901-545b-40a6-84b0-cb1547ff069e\") " Jan 30 13:27:18 crc kubenswrapper[5039]: I0130 13:27:18.990162 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64ef9901-545b-40a6-84b0-cb1547ff069e-config\") pod \"64ef9901-545b-40a6-84b0-cb1547ff069e\" (UID: \"64ef9901-545b-40a6-84b0-cb1547ff069e\") " Jan 30 13:27:18 crc kubenswrapper[5039]: I0130 13:27:18.990263 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qj8qp\" (UniqueName: \"kubernetes.io/projected/64ef9901-545b-40a6-84b0-cb1547ff069e-kube-api-access-qj8qp\") pod \"64ef9901-545b-40a6-84b0-cb1547ff069e\" (UID: \"64ef9901-545b-40a6-84b0-cb1547ff069e\") " Jan 30 13:27:18 crc kubenswrapper[5039]: I0130 13:27:18.990348 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64ef9901-545b-40a6-84b0-cb1547ff069e-dns-svc\") pod \"64ef9901-545b-40a6-84b0-cb1547ff069e\" (UID: \"64ef9901-545b-40a6-84b0-cb1547ff069e\") " Jan 30 13:27:18 crc kubenswrapper[5039]: I0130 13:27:18.990409 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/64ef9901-545b-40a6-84b0-cb1547ff069e-dns-swift-storage-0\") pod \"64ef9901-545b-40a6-84b0-cb1547ff069e\" (UID: \"64ef9901-545b-40a6-84b0-cb1547ff069e\") " Jan 30 13:27:18 crc kubenswrapper[5039]: I0130 13:27:18.990476 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/64ef9901-545b-40a6-84b0-cb1547ff069e-ovsdbserver-sb\") pod \"64ef9901-545b-40a6-84b0-cb1547ff069e\" (UID: \"64ef9901-545b-40a6-84b0-cb1547ff069e\") " Jan 30 13:27:18 crc kubenswrapper[5039]: I0130 13:27:18.990700 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/916b8cef-080b-4ec9-98c6-ce13bfdcdd20-scripts\") pod \"nova-cell1-cell-mapping-sngvh\" (UID: \"916b8cef-080b-4ec9-98c6-ce13bfdcdd20\") " pod="openstack/nova-cell1-cell-mapping-sngvh" Jan 30 13:27:18 crc kubenswrapper[5039]: I0130 13:27:18.990729 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/916b8cef-080b-4ec9-98c6-ce13bfdcdd20-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-sngvh\" (UID: \"916b8cef-080b-4ec9-98c6-ce13bfdcdd20\") " pod="openstack/nova-cell1-cell-mapping-sngvh" Jan 30 13:27:18 crc kubenswrapper[5039]: I0130 
13:27:18.990771 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp59d\" (UniqueName: \"kubernetes.io/projected/916b8cef-080b-4ec9-98c6-ce13bfdcdd20-kube-api-access-cp59d\") pod \"nova-cell1-cell-mapping-sngvh\" (UID: \"916b8cef-080b-4ec9-98c6-ce13bfdcdd20\") " pod="openstack/nova-cell1-cell-mapping-sngvh" Jan 30 13:27:18 crc kubenswrapper[5039]: I0130 13:27:18.990876 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/916b8cef-080b-4ec9-98c6-ce13bfdcdd20-config-data\") pod \"nova-cell1-cell-mapping-sngvh\" (UID: \"916b8cef-080b-4ec9-98c6-ce13bfdcdd20\") " pod="openstack/nova-cell1-cell-mapping-sngvh" Jan 30 13:27:18 crc kubenswrapper[5039]: I0130 13:27:18.996325 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64ef9901-545b-40a6-84b0-cb1547ff069e-kube-api-access-qj8qp" (OuterVolumeSpecName: "kube-api-access-qj8qp") pod "64ef9901-545b-40a6-84b0-cb1547ff069e" (UID: "64ef9901-545b-40a6-84b0-cb1547ff069e"). InnerVolumeSpecName "kube-api-access-qj8qp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.057605 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64ef9901-545b-40a6-84b0-cb1547ff069e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "64ef9901-545b-40a6-84b0-cb1547ff069e" (UID: "64ef9901-545b-40a6-84b0-cb1547ff069e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.074266 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64ef9901-545b-40a6-84b0-cb1547ff069e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "64ef9901-545b-40a6-84b0-cb1547ff069e" (UID: "64ef9901-545b-40a6-84b0-cb1547ff069e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.083586 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64ef9901-545b-40a6-84b0-cb1547ff069e-config" (OuterVolumeSpecName: "config") pod "64ef9901-545b-40a6-84b0-cb1547ff069e" (UID: "64ef9901-545b-40a6-84b0-cb1547ff069e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.090635 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64ef9901-545b-40a6-84b0-cb1547ff069e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "64ef9901-545b-40a6-84b0-cb1547ff069e" (UID: "64ef9901-545b-40a6-84b0-cb1547ff069e"). InnerVolumeSpecName "ovsdbserver-nb". 
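
From here the log interleaves two lifecycles: teardown of dnsmasq-dns-bccf8f775-k666b (UnmountVolume for its config, dns-svc and ovsdbserver volumes) and startup of nova-cell1-cell-mapping-sngvh (VerifyControllerAttachedVolume and MountVolume for scripts, config-data, combined-ca-bundle). The kubelet reconciles all pods in one loop, so following a single pod means filtering by UID or name. A small de-interleaver sketch; the entry-prefix regex is an assumption about the journald framing used in this dump:

    #!/usr/bin/env python3
    # Filter this kubelet dump down to entries mentioning one pod.
    # Usage: pod_grep.py kubelet.log 916b8cef
    import re
    import sys

    # Split on the journald prefix ("Jan 30 13:27:19 crc kubenswrapper[5039]: ")
    # with a lookahead, so wrapped entries stay in one piece.
    ENTRY_SPLIT = re.compile(
        r'(?=[A-Z][a-z]{2} +\d{1,2} \d{2}:\d{2}:\d{2} \S+ kubenswrapper\[\d+\]: )')

    def main(path, needle):
        text = open(path, encoding="utf-8", errors="replace").read()
        for entry in ENTRY_SPLIT.split(text):
            if needle in entry:
                print(" ".join(entry.split()))  # re-flow entry onto one line

    if __name__ == "__main__":
        main(sys.argv[1], sys.argv[2])
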
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.094919 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cp59d\" (UniqueName: \"kubernetes.io/projected/916b8cef-080b-4ec9-98c6-ce13bfdcdd20-kube-api-access-cp59d\") pod \"nova-cell1-cell-mapping-sngvh\" (UID: \"916b8cef-080b-4ec9-98c6-ce13bfdcdd20\") " pod="openstack/nova-cell1-cell-mapping-sngvh" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.095077 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/916b8cef-080b-4ec9-98c6-ce13bfdcdd20-config-data\") pod \"nova-cell1-cell-mapping-sngvh\" (UID: \"916b8cef-080b-4ec9-98c6-ce13bfdcdd20\") " pod="openstack/nova-cell1-cell-mapping-sngvh" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.095123 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/916b8cef-080b-4ec9-98c6-ce13bfdcdd20-scripts\") pod \"nova-cell1-cell-mapping-sngvh\" (UID: \"916b8cef-080b-4ec9-98c6-ce13bfdcdd20\") " pod="openstack/nova-cell1-cell-mapping-sngvh" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.095143 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/916b8cef-080b-4ec9-98c6-ce13bfdcdd20-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-sngvh\" (UID: \"916b8cef-080b-4ec9-98c6-ce13bfdcdd20\") " pod="openstack/nova-cell1-cell-mapping-sngvh" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.095202 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64ef9901-545b-40a6-84b0-cb1547ff069e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.095214 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64ef9901-545b-40a6-84b0-cb1547ff069e-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.095222 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qj8qp\" (UniqueName: \"kubernetes.io/projected/64ef9901-545b-40a6-84b0-cb1547ff069e-kube-api-access-qj8qp\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.095233 5039 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/64ef9901-545b-40a6-84b0-cb1547ff069e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.095242 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/64ef9901-545b-40a6-84b0-cb1547ff069e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.099257 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/916b8cef-080b-4ec9-98c6-ce13bfdcdd20-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-sngvh\" (UID: \"916b8cef-080b-4ec9-98c6-ce13bfdcdd20\") " pod="openstack/nova-cell1-cell-mapping-sngvh" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.100284 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/916b8cef-080b-4ec9-98c6-ce13bfdcdd20-scripts\") pod \"nova-cell1-cell-mapping-sngvh\" (UID: \"916b8cef-080b-4ec9-98c6-ce13bfdcdd20\") " pod="openstack/nova-cell1-cell-mapping-sngvh" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.103183 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/916b8cef-080b-4ec9-98c6-ce13bfdcdd20-config-data\") pod \"nova-cell1-cell-mapping-sngvh\" (UID: \"916b8cef-080b-4ec9-98c6-ce13bfdcdd20\") " pod="openstack/nova-cell1-cell-mapping-sngvh" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.105166 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64ef9901-545b-40a6-84b0-cb1547ff069e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "64ef9901-545b-40a6-84b0-cb1547ff069e" (UID: "64ef9901-545b-40a6-84b0-cb1547ff069e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.110489 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cp59d\" (UniqueName: \"kubernetes.io/projected/916b8cef-080b-4ec9-98c6-ce13bfdcdd20-kube-api-access-cp59d\") pod \"nova-cell1-cell-mapping-sngvh\" (UID: \"916b8cef-080b-4ec9-98c6-ce13bfdcdd20\") " pod="openstack/nova-cell1-cell-mapping-sngvh" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.142452 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.196758 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-run-httpd\") pod \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\" (UID: \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\") " Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.196808 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjtbs\" (UniqueName: \"kubernetes.io/projected/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-kube-api-access-sjtbs\") pod \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\" (UID: \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\") " Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.196862 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-config-data\") pod \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\" (UID: \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\") " Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.196910 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-sg-core-conf-yaml\") pod \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\" (UID: \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\") " Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.196992 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-log-httpd\") pod \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\" (UID: \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\") " Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.197047 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-combined-ca-bundle\") pod \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\" (UID: \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\") " Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.197138 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-scripts\") pod \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\" (UID: \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\") " Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.197159 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-ceilometer-tls-certs\") pod \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\" (UID: \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\") " Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.197355 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9d219304-2fcc-48f8-ba20-b0fbf12a4e84" (UID: "9d219304-2fcc-48f8-ba20-b0fbf12a4e84"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.197493 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9d219304-2fcc-48f8-ba20-b0fbf12a4e84" (UID: "9d219304-2fcc-48f8-ba20-b0fbf12a4e84"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.197528 5039 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.197543 5039 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64ef9901-545b-40a6-84b0-cb1547ff069e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.201483 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-kube-api-access-sjtbs" (OuterVolumeSpecName: "kube-api-access-sjtbs") pod "9d219304-2fcc-48f8-ba20-b0fbf12a4e84" (UID: "9d219304-2fcc-48f8-ba20-b0fbf12a4e84"). InnerVolumeSpecName "kube-api-access-sjtbs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.202064 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-scripts" (OuterVolumeSpecName: "scripts") pod "9d219304-2fcc-48f8-ba20-b0fbf12a4e84" (UID: "9d219304-2fcc-48f8-ba20-b0fbf12a4e84"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.222483 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9d219304-2fcc-48f8-ba20-b0fbf12a4e84" (UID: "9d219304-2fcc-48f8-ba20-b0fbf12a4e84"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.246977 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "9d219304-2fcc-48f8-ba20-b0fbf12a4e84" (UID: "9d219304-2fcc-48f8-ba20-b0fbf12a4e84"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.281517 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9d219304-2fcc-48f8-ba20-b0fbf12a4e84" (UID: "9d219304-2fcc-48f8-ba20-b0fbf12a4e84"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.283579 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-sngvh" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.299794 5039 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.299830 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.299839 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.299848 5039 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.299857 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjtbs\" (UniqueName: \"kubernetes.io/projected/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-kube-api-access-sjtbs\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.299865 5039 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.306943 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-config-data" (OuterVolumeSpecName: "config-data") pod "9d219304-2fcc-48f8-ba20-b0fbf12a4e84" (UID: "9d219304-2fcc-48f8-ba20-b0fbf12a4e84"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.403105 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.704755 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-sngvh"] Jan 30 13:27:19 crc kubenswrapper[5039]: W0130 13:27:19.708924 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod916b8cef_080b_4ec9_98c6_ce13bfdcdd20.slice/crio-d7efd33dfe1d59e407fbf10cd06bb4e8dab5d2996a2b042bfcc53e366701216e WatchSource:0}: Error finding container d7efd33dfe1d59e407fbf10cd06bb4e8dab5d2996a2b042bfcc53e366701216e: Status 404 returned error can't find the container with id d7efd33dfe1d59e407fbf10cd06bb4e8dab5d2996a2b042bfcc53e366701216e Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.751228 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-sngvh" event={"ID":"916b8cef-080b-4ec9-98c6-ce13bfdcdd20","Type":"ContainerStarted","Data":"d7efd33dfe1d59e407fbf10cd06bb4e8dab5d2996a2b042bfcc53e366701216e"} Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.756037 5039 generic.go:334] "Generic (PLEG): container finished" podID="9d219304-2fcc-48f8-ba20-b0fbf12a4e84" containerID="6168424d9fcde1c472d018eb8f664faa70f0212af120804f8142bdaa99fbba6d" exitCode=0 Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.756102 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d219304-2fcc-48f8-ba20-b0fbf12a4e84","Type":"ContainerDied","Data":"6168424d9fcde1c472d018eb8f664faa70f0212af120804f8142bdaa99fbba6d"} Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.756142 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d219304-2fcc-48f8-ba20-b0fbf12a4e84","Type":"ContainerDied","Data":"6361c6322ce2a8e0ecf181762f695b712533101ae03fcea83b4f10678b7c1fbb"} Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.756142 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.756164 5039 scope.go:117] "RemoveContainer" containerID="dc4f961953a1c708a757ac6a26c0e3161150c90be3c0dfa18fe8d24228d9dc66" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.760436 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-k666b" event={"ID":"64ef9901-545b-40a6-84b0-cb1547ff069e","Type":"ContainerDied","Data":"e377439dbc21dc2a1a80acc7def57d1cdb0245ec6918d6164a209411bf3828b9"} Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.760674 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-k666b" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.886287 5039 scope.go:117] "RemoveContainer" containerID="a0ccdeedefdd78338361e7b4e402538eeeef76d1801e2713dd0bf10ef7d5012c" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.908981 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.924668 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.929387 5039 scope.go:117] "RemoveContainer" containerID="dc5801ff3dd03c438e222832e361614693da44d3ab80900fecea2421ccf0dcbf" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.943437 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-k666b"] Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.956060 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-k666b"] Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.965700 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:27:19 crc kubenswrapper[5039]: E0130 13:27:19.966226 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d219304-2fcc-48f8-ba20-b0fbf12a4e84" containerName="ceilometer-central-agent" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.966247 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d219304-2fcc-48f8-ba20-b0fbf12a4e84" containerName="ceilometer-central-agent" Jan 30 13:27:19 crc kubenswrapper[5039]: E0130 13:27:19.966268 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d219304-2fcc-48f8-ba20-b0fbf12a4e84" containerName="sg-core" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.966278 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d219304-2fcc-48f8-ba20-b0fbf12a4e84" containerName="sg-core" Jan 30 13:27:19 crc kubenswrapper[5039]: E0130 13:27:19.966299 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d219304-2fcc-48f8-ba20-b0fbf12a4e84" containerName="ceilometer-notification-agent" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.966309 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d219304-2fcc-48f8-ba20-b0fbf12a4e84" containerName="ceilometer-notification-agent" Jan 30 13:27:19 crc kubenswrapper[5039]: E0130 13:27:19.966329 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d219304-2fcc-48f8-ba20-b0fbf12a4e84" containerName="proxy-httpd" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.966337 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d219304-2fcc-48f8-ba20-b0fbf12a4e84" containerName="proxy-httpd" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.966603 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d219304-2fcc-48f8-ba20-b0fbf12a4e84" containerName="ceilometer-central-agent" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.966626 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d219304-2fcc-48f8-ba20-b0fbf12a4e84" containerName="sg-core" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.966648 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d219304-2fcc-48f8-ba20-b0fbf12a4e84" containerName="proxy-httpd" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.966661 5039 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="9d219304-2fcc-48f8-ba20-b0fbf12a4e84" containerName="ceilometer-notification-agent" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.970097 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.974373 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.975619 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.975767 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.985068 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.993879 5039 scope.go:117] "RemoveContainer" containerID="6168424d9fcde1c472d018eb8f664faa70f0212af120804f8142bdaa99fbba6d" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.020207 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f6644cf-01f6-44cf-95d6-3626f4fa57da-config-data\") pod \"ceilometer-0\" (UID: \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\") " pod="openstack/ceilometer-0" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.020264 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f6644cf-01f6-44cf-95d6-3626f4fa57da-log-httpd\") pod \"ceilometer-0\" (UID: \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\") " pod="openstack/ceilometer-0" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.020320 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztr2b\" (UniqueName: \"kubernetes.io/projected/2f6644cf-01f6-44cf-95d6-3626f4fa57da-kube-api-access-ztr2b\") pod \"ceilometer-0\" (UID: \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\") " pod="openstack/ceilometer-0" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.020352 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f6644cf-01f6-44cf-95d6-3626f4fa57da-scripts\") pod \"ceilometer-0\" (UID: \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\") " pod="openstack/ceilometer-0" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.020910 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f6644cf-01f6-44cf-95d6-3626f4fa57da-run-httpd\") pod \"ceilometer-0\" (UID: \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\") " pod="openstack/ceilometer-0" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.020965 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f6644cf-01f6-44cf-95d6-3626f4fa57da-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\") " pod="openstack/ceilometer-0" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.021032 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/2f6644cf-01f6-44cf-95d6-3626f4fa57da-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\") " pod="openstack/ceilometer-0" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.021054 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2f6644cf-01f6-44cf-95d6-3626f4fa57da-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\") " pod="openstack/ceilometer-0" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.022466 5039 scope.go:117] "RemoveContainer" containerID="dc4f961953a1c708a757ac6a26c0e3161150c90be3c0dfa18fe8d24228d9dc66" Jan 30 13:27:20 crc kubenswrapper[5039]: E0130 13:27:20.023367 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc4f961953a1c708a757ac6a26c0e3161150c90be3c0dfa18fe8d24228d9dc66\": container with ID starting with dc4f961953a1c708a757ac6a26c0e3161150c90be3c0dfa18fe8d24228d9dc66 not found: ID does not exist" containerID="dc4f961953a1c708a757ac6a26c0e3161150c90be3c0dfa18fe8d24228d9dc66" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.023407 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc4f961953a1c708a757ac6a26c0e3161150c90be3c0dfa18fe8d24228d9dc66"} err="failed to get container status \"dc4f961953a1c708a757ac6a26c0e3161150c90be3c0dfa18fe8d24228d9dc66\": rpc error: code = NotFound desc = could not find container \"dc4f961953a1c708a757ac6a26c0e3161150c90be3c0dfa18fe8d24228d9dc66\": container with ID starting with dc4f961953a1c708a757ac6a26c0e3161150c90be3c0dfa18fe8d24228d9dc66 not found: ID does not exist" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.023432 5039 scope.go:117] "RemoveContainer" containerID="a0ccdeedefdd78338361e7b4e402538eeeef76d1801e2713dd0bf10ef7d5012c" Jan 30 13:27:20 crc kubenswrapper[5039]: E0130 13:27:20.023777 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0ccdeedefdd78338361e7b4e402538eeeef76d1801e2713dd0bf10ef7d5012c\": container with ID starting with a0ccdeedefdd78338361e7b4e402538eeeef76d1801e2713dd0bf10ef7d5012c not found: ID does not exist" containerID="a0ccdeedefdd78338361e7b4e402538eeeef76d1801e2713dd0bf10ef7d5012c" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.023825 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0ccdeedefdd78338361e7b4e402538eeeef76d1801e2713dd0bf10ef7d5012c"} err="failed to get container status \"a0ccdeedefdd78338361e7b4e402538eeeef76d1801e2713dd0bf10ef7d5012c\": rpc error: code = NotFound desc = could not find container \"a0ccdeedefdd78338361e7b4e402538eeeef76d1801e2713dd0bf10ef7d5012c\": container with ID starting with a0ccdeedefdd78338361e7b4e402538eeeef76d1801e2713dd0bf10ef7d5012c not found: ID does not exist" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.023861 5039 scope.go:117] "RemoveContainer" containerID="dc5801ff3dd03c438e222832e361614693da44d3ab80900fecea2421ccf0dcbf" Jan 30 13:27:20 crc kubenswrapper[5039]: E0130 13:27:20.024302 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc5801ff3dd03c438e222832e361614693da44d3ab80900fecea2421ccf0dcbf\": container with ID starting with 
dc5801ff3dd03c438e222832e361614693da44d3ab80900fecea2421ccf0dcbf not found: ID does not exist" containerID="dc5801ff3dd03c438e222832e361614693da44d3ab80900fecea2421ccf0dcbf" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.024332 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc5801ff3dd03c438e222832e361614693da44d3ab80900fecea2421ccf0dcbf"} err="failed to get container status \"dc5801ff3dd03c438e222832e361614693da44d3ab80900fecea2421ccf0dcbf\": rpc error: code = NotFound desc = could not find container \"dc5801ff3dd03c438e222832e361614693da44d3ab80900fecea2421ccf0dcbf\": container with ID starting with dc5801ff3dd03c438e222832e361614693da44d3ab80900fecea2421ccf0dcbf not found: ID does not exist" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.024352 5039 scope.go:117] "RemoveContainer" containerID="6168424d9fcde1c472d018eb8f664faa70f0212af120804f8142bdaa99fbba6d" Jan 30 13:27:20 crc kubenswrapper[5039]: E0130 13:27:20.024637 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6168424d9fcde1c472d018eb8f664faa70f0212af120804f8142bdaa99fbba6d\": container with ID starting with 6168424d9fcde1c472d018eb8f664faa70f0212af120804f8142bdaa99fbba6d not found: ID does not exist" containerID="6168424d9fcde1c472d018eb8f664faa70f0212af120804f8142bdaa99fbba6d" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.024688 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6168424d9fcde1c472d018eb8f664faa70f0212af120804f8142bdaa99fbba6d"} err="failed to get container status \"6168424d9fcde1c472d018eb8f664faa70f0212af120804f8142bdaa99fbba6d\": rpc error: code = NotFound desc = could not find container \"6168424d9fcde1c472d018eb8f664faa70f0212af120804f8142bdaa99fbba6d\": container with ID starting with 6168424d9fcde1c472d018eb8f664faa70f0212af120804f8142bdaa99fbba6d not found: ID does not exist" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.024707 5039 scope.go:117] "RemoveContainer" containerID="9dfd40654744902aafb2b0aa17d9dd91d3b3f7d7d7db7c8f87c4098ed34e0ada" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.044584 5039 scope.go:117] "RemoveContainer" containerID="ae7ea10b829a9af7f7f69c44e63ee9b9ee20f9425809bc876355c34cfde2a954" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.106928 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64ef9901-545b-40a6-84b0-cb1547ff069e" path="/var/lib/kubelet/pods/64ef9901-545b-40a6-84b0-cb1547ff069e/volumes" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.107539 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d219304-2fcc-48f8-ba20-b0fbf12a4e84" path="/var/lib/kubelet/pods/9d219304-2fcc-48f8-ba20-b0fbf12a4e84/volumes" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.122814 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f6644cf-01f6-44cf-95d6-3626f4fa57da-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\") " pod="openstack/ceilometer-0" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.122856 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2f6644cf-01f6-44cf-95d6-3626f4fa57da-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"2f6644cf-01f6-44cf-95d6-3626f4fa57da\") " pod="openstack/ceilometer-0" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.122889 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f6644cf-01f6-44cf-95d6-3626f4fa57da-config-data\") pod \"ceilometer-0\" (UID: \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\") " pod="openstack/ceilometer-0" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.122919 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f6644cf-01f6-44cf-95d6-3626f4fa57da-log-httpd\") pod \"ceilometer-0\" (UID: \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\") " pod="openstack/ceilometer-0" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.122954 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztr2b\" (UniqueName: \"kubernetes.io/projected/2f6644cf-01f6-44cf-95d6-3626f4fa57da-kube-api-access-ztr2b\") pod \"ceilometer-0\" (UID: \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\") " pod="openstack/ceilometer-0" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.122995 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f6644cf-01f6-44cf-95d6-3626f4fa57da-scripts\") pod \"ceilometer-0\" (UID: \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\") " pod="openstack/ceilometer-0" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.123101 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f6644cf-01f6-44cf-95d6-3626f4fa57da-run-httpd\") pod \"ceilometer-0\" (UID: \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\") " pod="openstack/ceilometer-0" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.123153 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f6644cf-01f6-44cf-95d6-3626f4fa57da-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\") " pod="openstack/ceilometer-0" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.124274 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f6644cf-01f6-44cf-95d6-3626f4fa57da-log-httpd\") pod \"ceilometer-0\" (UID: \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\") " pod="openstack/ceilometer-0" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.124510 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f6644cf-01f6-44cf-95d6-3626f4fa57da-run-httpd\") pod \"ceilometer-0\" (UID: \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\") " pod="openstack/ceilometer-0" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.127493 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2f6644cf-01f6-44cf-95d6-3626f4fa57da-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\") " pod="openstack/ceilometer-0" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.127938 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f6644cf-01f6-44cf-95d6-3626f4fa57da-config-data\") pod \"ceilometer-0\" (UID: \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\") " 
pod="openstack/ceilometer-0" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.128164 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f6644cf-01f6-44cf-95d6-3626f4fa57da-scripts\") pod \"ceilometer-0\" (UID: \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\") " pod="openstack/ceilometer-0" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.135505 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f6644cf-01f6-44cf-95d6-3626f4fa57da-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\") " pod="openstack/ceilometer-0" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.139280 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f6644cf-01f6-44cf-95d6-3626f4fa57da-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\") " pod="openstack/ceilometer-0" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.141164 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztr2b\" (UniqueName: \"kubernetes.io/projected/2f6644cf-01f6-44cf-95d6-3626f4fa57da-kube-api-access-ztr2b\") pod \"ceilometer-0\" (UID: \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\") " pod="openstack/ceilometer-0" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.291219 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.779173 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-sngvh" event={"ID":"916b8cef-080b-4ec9-98c6-ce13bfdcdd20","Type":"ContainerStarted","Data":"2d664eb9c38a9c24e2e03307a0cc9c31dc011fb018e0cf4e87e1bb1a5cc4feea"} Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.805709 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.810406 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-sngvh" podStartSLOduration=2.8103874810000002 podStartE2EDuration="2.810387481s" podCreationTimestamp="2026-01-30 13:27:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:27:20.795601441 +0000 UTC m=+1405.456282688" watchObservedRunningTime="2026-01-30 13:27:20.810387481 +0000 UTC m=+1405.471068708" Jan 30 13:27:20 crc kubenswrapper[5039]: W0130 13:27:20.835651 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f6644cf_01f6_44cf_95d6_3626f4fa57da.slice/crio-1307b1c8b415803c92e83e658a3c76a94c43fc6694143f8e8e5300a2c9fa435d WatchSource:0}: Error finding container 1307b1c8b415803c92e83e658a3c76a94c43fc6694143f8e8e5300a2c9fa435d: Status 404 returned error can't find the container with id 1307b1c8b415803c92e83e658a3c76a94c43fc6694143f8e8e5300a2c9fa435d Jan 30 13:27:21 crc kubenswrapper[5039]: I0130 13:27:21.799903 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f6644cf-01f6-44cf-95d6-3626f4fa57da","Type":"ContainerStarted","Data":"031ec639038378c5b3f539daaac07ec3e116c86eab5c397a4daa509a5370c453"} Jan 30 13:27:21 crc kubenswrapper[5039]: I0130 13:27:21.801008 5039 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f6644cf-01f6-44cf-95d6-3626f4fa57da","Type":"ContainerStarted","Data":"1307b1c8b415803c92e83e658a3c76a94c43fc6694143f8e8e5300a2c9fa435d"} Jan 30 13:27:22 crc kubenswrapper[5039]: I0130 13:27:22.810700 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f6644cf-01f6-44cf-95d6-3626f4fa57da","Type":"ContainerStarted","Data":"29878841c067a4c2e77d77c0c1e579cd21f99def5165c1d94a042435a87f2dd7"} Jan 30 13:27:23 crc kubenswrapper[5039]: I0130 13:27:23.817444 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-bccf8f775-k666b" podUID="64ef9901-545b-40a6-84b0-cb1547ff069e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.187:5353: i/o timeout" Jan 30 13:27:23 crc kubenswrapper[5039]: I0130 13:27:23.823279 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f6644cf-01f6-44cf-95d6-3626f4fa57da","Type":"ContainerStarted","Data":"caf5b33ea1a3e30f796411e0c081ae3e8abc92fb4b810718314aafc7b901622e"} Jan 30 13:27:24 crc kubenswrapper[5039]: I0130 13:27:24.832327 5039 generic.go:334] "Generic (PLEG): container finished" podID="916b8cef-080b-4ec9-98c6-ce13bfdcdd20" containerID="2d664eb9c38a9c24e2e03307a0cc9c31dc011fb018e0cf4e87e1bb1a5cc4feea" exitCode=0 Jan 30 13:27:24 crc kubenswrapper[5039]: I0130 13:27:24.832377 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-sngvh" event={"ID":"916b8cef-080b-4ec9-98c6-ce13bfdcdd20","Type":"ContainerDied","Data":"2d664eb9c38a9c24e2e03307a0cc9c31dc011fb018e0cf4e87e1bb1a5cc4feea"} Jan 30 13:27:26 crc kubenswrapper[5039]: I0130 13:27:26.238498 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-sngvh" Jan 30 13:27:26 crc kubenswrapper[5039]: I0130 13:27:26.353274 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cp59d\" (UniqueName: \"kubernetes.io/projected/916b8cef-080b-4ec9-98c6-ce13bfdcdd20-kube-api-access-cp59d\") pod \"916b8cef-080b-4ec9-98c6-ce13bfdcdd20\" (UID: \"916b8cef-080b-4ec9-98c6-ce13bfdcdd20\") " Jan 30 13:27:26 crc kubenswrapper[5039]: I0130 13:27:26.353335 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/916b8cef-080b-4ec9-98c6-ce13bfdcdd20-config-data\") pod \"916b8cef-080b-4ec9-98c6-ce13bfdcdd20\" (UID: \"916b8cef-080b-4ec9-98c6-ce13bfdcdd20\") " Jan 30 13:27:26 crc kubenswrapper[5039]: I0130 13:27:26.353398 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/916b8cef-080b-4ec9-98c6-ce13bfdcdd20-scripts\") pod \"916b8cef-080b-4ec9-98c6-ce13bfdcdd20\" (UID: \"916b8cef-080b-4ec9-98c6-ce13bfdcdd20\") " Jan 30 13:27:26 crc kubenswrapper[5039]: I0130 13:27:26.353431 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/916b8cef-080b-4ec9-98c6-ce13bfdcdd20-combined-ca-bundle\") pod \"916b8cef-080b-4ec9-98c6-ce13bfdcdd20\" (UID: \"916b8cef-080b-4ec9-98c6-ce13bfdcdd20\") " Jan 30 13:27:26 crc kubenswrapper[5039]: I0130 13:27:26.359276 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/916b8cef-080b-4ec9-98c6-ce13bfdcdd20-kube-api-access-cp59d" (OuterVolumeSpecName: "kube-api-access-cp59d") pod "916b8cef-080b-4ec9-98c6-ce13bfdcdd20" (UID: "916b8cef-080b-4ec9-98c6-ce13bfdcdd20"). InnerVolumeSpecName "kube-api-access-cp59d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:27:26 crc kubenswrapper[5039]: I0130 13:27:26.360182 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/916b8cef-080b-4ec9-98c6-ce13bfdcdd20-scripts" (OuterVolumeSpecName: "scripts") pod "916b8cef-080b-4ec9-98c6-ce13bfdcdd20" (UID: "916b8cef-080b-4ec9-98c6-ce13bfdcdd20"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:27:26 crc kubenswrapper[5039]: I0130 13:27:26.387149 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/916b8cef-080b-4ec9-98c6-ce13bfdcdd20-config-data" (OuterVolumeSpecName: "config-data") pod "916b8cef-080b-4ec9-98c6-ce13bfdcdd20" (UID: "916b8cef-080b-4ec9-98c6-ce13bfdcdd20"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:27:26 crc kubenswrapper[5039]: I0130 13:27:26.406578 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 13:27:26 crc kubenswrapper[5039]: I0130 13:27:26.406989 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 13:27:26 crc kubenswrapper[5039]: I0130 13:27:26.411067 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/916b8cef-080b-4ec9-98c6-ce13bfdcdd20-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "916b8cef-080b-4ec9-98c6-ce13bfdcdd20" (UID: "916b8cef-080b-4ec9-98c6-ce13bfdcdd20"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:27:26 crc kubenswrapper[5039]: I0130 13:27:26.457266 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cp59d\" (UniqueName: \"kubernetes.io/projected/916b8cef-080b-4ec9-98c6-ce13bfdcdd20-kube-api-access-cp59d\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:26 crc kubenswrapper[5039]: I0130 13:27:26.457306 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/916b8cef-080b-4ec9-98c6-ce13bfdcdd20-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:26 crc kubenswrapper[5039]: I0130 13:27:26.457315 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/916b8cef-080b-4ec9-98c6-ce13bfdcdd20-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:26 crc kubenswrapper[5039]: I0130 13:27:26.457324 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/916b8cef-080b-4ec9-98c6-ce13bfdcdd20-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:26 crc kubenswrapper[5039]: I0130 13:27:26.852933 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-sngvh" event={"ID":"916b8cef-080b-4ec9-98c6-ce13bfdcdd20","Type":"ContainerDied","Data":"d7efd33dfe1d59e407fbf10cd06bb4e8dab5d2996a2b042bfcc53e366701216e"} Jan 30 13:27:26 crc kubenswrapper[5039]: I0130 13:27:26.852974 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7efd33dfe1d59e407fbf10cd06bb4e8dab5d2996a2b042bfcc53e366701216e" Jan 30 13:27:26 crc kubenswrapper[5039]: I0130 13:27:26.853118 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-sngvh" Jan 30 13:27:26 crc kubenswrapper[5039]: I0130 13:27:26.864495 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f6644cf-01f6-44cf-95d6-3626f4fa57da","Type":"ContainerStarted","Data":"a73101ab09711a570267173488a9c5b6f2eeccafb5e3dc305c7de9c7690d9570"} Jan 30 13:27:26 crc kubenswrapper[5039]: I0130 13:27:26.864865 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 13:27:26 crc kubenswrapper[5039]: I0130 13:27:26.915632 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.120245784 podStartE2EDuration="7.915601203s" podCreationTimestamp="2026-01-30 13:27:19 +0000 UTC" firstStartedPulling="2026-01-30 13:27:20.839585961 +0000 UTC m=+1405.500267188" lastFinishedPulling="2026-01-30 13:27:25.63494138 +0000 UTC m=+1410.295622607" observedRunningTime="2026-01-30 13:27:26.893210772 +0000 UTC m=+1411.553892069" watchObservedRunningTime="2026-01-30 13:27:26.915601203 +0000 UTC m=+1411.576282470" Jan 30 13:27:27 crc kubenswrapper[5039]: I0130 13:27:27.027891 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 13:27:27 crc kubenswrapper[5039]: I0130 13:27:27.028265 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="9b2c4ea7-fb7f-401c-84c3-13cb59dec51d" containerName="nova-scheduler-scheduler" containerID="cri-o://77b11831c8de94ea4f94e9a391a2324170cf612334c1b369e7d207f0b0088e11" gracePeriod=30 Jan 30 13:27:27 crc kubenswrapper[5039]: I0130 13:27:27.039281 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-api-0"] Jan 30 13:27:27 crc kubenswrapper[5039]: I0130 13:27:27.039583 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8b9b2b78-5b27-4544-9c74-990d418894c8" containerName="nova-api-log" containerID="cri-o://890e98b0679d42d7b2144c30beebab163c61e512b0e040cdea01024c73e229a8" gracePeriod=30 Jan 30 13:27:27 crc kubenswrapper[5039]: I0130 13:27:27.039671 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8b9b2b78-5b27-4544-9c74-990d418894c8" containerName="nova-api-api" containerID="cri-o://46cdd6374825345d3e1406a5a1876895000d528adec77a9193e1137b7dc2eb04" gracePeriod=30 Jan 30 13:27:27 crc kubenswrapper[5039]: I0130 13:27:27.052645 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8b9b2b78-5b27-4544-9c74-990d418894c8" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.200:8774/\": EOF" Jan 30 13:27:27 crc kubenswrapper[5039]: I0130 13:27:27.052651 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8b9b2b78-5b27-4544-9c74-990d418894c8" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.200:8774/\": EOF" Jan 30 13:27:27 crc kubenswrapper[5039]: I0130 13:27:27.080271 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 13:27:27 crc kubenswrapper[5039]: I0130 13:27:27.080588 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4fb54f17-1620-4d7f-9fef-b9be9740a158" containerName="nova-metadata-log" containerID="cri-o://bcf95642277344858a3db7b29769be0e17e002718e1562c6dadf74305f21f638" gracePeriod=30 Jan 30 13:27:27 crc kubenswrapper[5039]: I0130 13:27:27.080755 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4fb54f17-1620-4d7f-9fef-b9be9740a158" containerName="nova-metadata-metadata" containerID="cri-o://8b1254c7577aed1ac86304b54a6036e54aab0ba4ab37c40460806c6c4cf1fa17" gracePeriod=30 Jan 30 13:27:27 crc kubenswrapper[5039]: I0130 13:27:27.877053 5039 generic.go:334] "Generic (PLEG): container finished" podID="4fb54f17-1620-4d7f-9fef-b9be9740a158" containerID="bcf95642277344858a3db7b29769be0e17e002718e1562c6dadf74305f21f638" exitCode=143 Jan 30 13:27:27 crc kubenswrapper[5039]: I0130 13:27:27.877361 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4fb54f17-1620-4d7f-9fef-b9be9740a158","Type":"ContainerDied","Data":"bcf95642277344858a3db7b29769be0e17e002718e1562c6dadf74305f21f638"} Jan 30 13:27:27 crc kubenswrapper[5039]: I0130 13:27:27.880649 5039 generic.go:334] "Generic (PLEG): container finished" podID="8b9b2b78-5b27-4544-9c74-990d418894c8" containerID="890e98b0679d42d7b2144c30beebab163c61e512b0e040cdea01024c73e229a8" exitCode=143 Jan 30 13:27:27 crc kubenswrapper[5039]: I0130 13:27:27.880748 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8b9b2b78-5b27-4544-9c74-990d418894c8","Type":"ContainerDied","Data":"890e98b0679d42d7b2144c30beebab163c61e512b0e040cdea01024c73e229a8"} Jan 30 13:27:28 crc kubenswrapper[5039]: I0130 13:27:28.896098 5039 generic.go:334] "Generic (PLEG): container finished" podID="9b2c4ea7-fb7f-401c-84c3-13cb59dec51d" containerID="77b11831c8de94ea4f94e9a391a2324170cf612334c1b369e7d207f0b0088e11" exitCode=0 Jan 30 13:27:28 crc 
kubenswrapper[5039]: I0130 13:27:28.896185 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9b2c4ea7-fb7f-401c-84c3-13cb59dec51d","Type":"ContainerDied","Data":"77b11831c8de94ea4f94e9a391a2324170cf612334c1b369e7d207f0b0088e11"} Jan 30 13:27:28 crc kubenswrapper[5039]: I0130 13:27:28.896481 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9b2c4ea7-fb7f-401c-84c3-13cb59dec51d","Type":"ContainerDied","Data":"5bad18c08604d0cf37787a3aa7f2ddf3673f454632c9a7a6807f97e2ba876c44"} Jan 30 13:27:28 crc kubenswrapper[5039]: I0130 13:27:28.896503 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bad18c08604d0cf37787a3aa7f2ddf3673f454632c9a7a6807f97e2ba876c44" Jan 30 13:27:28 crc kubenswrapper[5039]: I0130 13:27:28.955607 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 13:27:29 crc kubenswrapper[5039]: I0130 13:27:29.034429 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b2c4ea7-fb7f-401c-84c3-13cb59dec51d-combined-ca-bundle\") pod \"9b2c4ea7-fb7f-401c-84c3-13cb59dec51d\" (UID: \"9b2c4ea7-fb7f-401c-84c3-13cb59dec51d\") " Jan 30 13:27:29 crc kubenswrapper[5039]: I0130 13:27:29.034481 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b2c4ea7-fb7f-401c-84c3-13cb59dec51d-config-data\") pod \"9b2c4ea7-fb7f-401c-84c3-13cb59dec51d\" (UID: \"9b2c4ea7-fb7f-401c-84c3-13cb59dec51d\") " Jan 30 13:27:29 crc kubenswrapper[5039]: I0130 13:27:29.034586 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m2d\" (UniqueName: \"kubernetes.io/projected/9b2c4ea7-fb7f-401c-84c3-13cb59dec51d-kube-api-access-x2m2d\") pod \"9b2c4ea7-fb7f-401c-84c3-13cb59dec51d\" (UID: \"9b2c4ea7-fb7f-401c-84c3-13cb59dec51d\") " Jan 30 13:27:29 crc kubenswrapper[5039]: I0130 13:27:29.055859 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b2c4ea7-fb7f-401c-84c3-13cb59dec51d-kube-api-access-x2m2d" (OuterVolumeSpecName: "kube-api-access-x2m2d") pod "9b2c4ea7-fb7f-401c-84c3-13cb59dec51d" (UID: "9b2c4ea7-fb7f-401c-84c3-13cb59dec51d"). InnerVolumeSpecName "kube-api-access-x2m2d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:27:29 crc kubenswrapper[5039]: I0130 13:27:29.060254 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b2c4ea7-fb7f-401c-84c3-13cb59dec51d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9b2c4ea7-fb7f-401c-84c3-13cb59dec51d" (UID: "9b2c4ea7-fb7f-401c-84c3-13cb59dec51d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:27:29 crc kubenswrapper[5039]: I0130 13:27:29.082465 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b2c4ea7-fb7f-401c-84c3-13cb59dec51d-config-data" (OuterVolumeSpecName: "config-data") pod "9b2c4ea7-fb7f-401c-84c3-13cb59dec51d" (UID: "9b2c4ea7-fb7f-401c-84c3-13cb59dec51d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:27:29 crc kubenswrapper[5039]: I0130 13:27:29.136757 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b2c4ea7-fb7f-401c-84c3-13cb59dec51d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:29 crc kubenswrapper[5039]: I0130 13:27:29.136797 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b2c4ea7-fb7f-401c-84c3-13cb59dec51d-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:29 crc kubenswrapper[5039]: I0130 13:27:29.136806 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m2d\" (UniqueName: \"kubernetes.io/projected/9b2c4ea7-fb7f-401c-84c3-13cb59dec51d-kube-api-access-x2m2d\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:29 crc kubenswrapper[5039]: I0130 13:27:29.906244 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 13:27:29 crc kubenswrapper[5039]: I0130 13:27:29.943561 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 13:27:29 crc kubenswrapper[5039]: I0130 13:27:29.955265 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 13:27:29 crc kubenswrapper[5039]: I0130 13:27:29.974060 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 13:27:29 crc kubenswrapper[5039]: E0130 13:27:29.974447 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="916b8cef-080b-4ec9-98c6-ce13bfdcdd20" containerName="nova-manage" Jan 30 13:27:29 crc kubenswrapper[5039]: I0130 13:27:29.974464 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="916b8cef-080b-4ec9-98c6-ce13bfdcdd20" containerName="nova-manage" Jan 30 13:27:29 crc kubenswrapper[5039]: E0130 13:27:29.974488 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b2c4ea7-fb7f-401c-84c3-13cb59dec51d" containerName="nova-scheduler-scheduler" Jan 30 13:27:29 crc kubenswrapper[5039]: I0130 13:27:29.974496 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b2c4ea7-fb7f-401c-84c3-13cb59dec51d" containerName="nova-scheduler-scheduler" Jan 30 13:27:29 crc kubenswrapper[5039]: I0130 13:27:29.974669 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="916b8cef-080b-4ec9-98c6-ce13bfdcdd20" containerName="nova-manage" Jan 30 13:27:29 crc kubenswrapper[5039]: I0130 13:27:29.974697 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b2c4ea7-fb7f-401c-84c3-13cb59dec51d" containerName="nova-scheduler-scheduler" Jan 30 13:27:29 crc kubenswrapper[5039]: I0130 13:27:29.975480 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 13:27:29 crc kubenswrapper[5039]: I0130 13:27:29.978415 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 30 13:27:29 crc kubenswrapper[5039]: I0130 13:27:29.990799 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.054389 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/266dbee0-3c74-4820-8165-1955c6ca832a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"266dbee0-3c74-4820-8165-1955c6ca832a\") " pod="openstack/nova-scheduler-0" Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.054480 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/266dbee0-3c74-4820-8165-1955c6ca832a-config-data\") pod \"nova-scheduler-0\" (UID: \"266dbee0-3c74-4820-8165-1955c6ca832a\") " pod="openstack/nova-scheduler-0" Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.054858 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lngcm\" (UniqueName: \"kubernetes.io/projected/266dbee0-3c74-4820-8165-1955c6ca832a-kube-api-access-lngcm\") pod \"nova-scheduler-0\" (UID: \"266dbee0-3c74-4820-8165-1955c6ca832a\") " pod="openstack/nova-scheduler-0" Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.111895 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b2c4ea7-fb7f-401c-84c3-13cb59dec51d" path="/var/lib/kubelet/pods/9b2c4ea7-fb7f-401c-84c3-13cb59dec51d/volumes" Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.158287 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/266dbee0-3c74-4820-8165-1955c6ca832a-config-data\") pod \"nova-scheduler-0\" (UID: \"266dbee0-3c74-4820-8165-1955c6ca832a\") " pod="openstack/nova-scheduler-0" Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.158492 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lngcm\" (UniqueName: \"kubernetes.io/projected/266dbee0-3c74-4820-8165-1955c6ca832a-kube-api-access-lngcm\") pod \"nova-scheduler-0\" (UID: \"266dbee0-3c74-4820-8165-1955c6ca832a\") " pod="openstack/nova-scheduler-0" Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.158604 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/266dbee0-3c74-4820-8165-1955c6ca832a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"266dbee0-3c74-4820-8165-1955c6ca832a\") " pod="openstack/nova-scheduler-0" Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.163964 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/266dbee0-3c74-4820-8165-1955c6ca832a-config-data\") pod \"nova-scheduler-0\" (UID: \"266dbee0-3c74-4820-8165-1955c6ca832a\") " pod="openstack/nova-scheduler-0" Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.164123 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/266dbee0-3c74-4820-8165-1955c6ca832a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: 
\"266dbee0-3c74-4820-8165-1955c6ca832a\") " pod="openstack/nova-scheduler-0" Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.184612 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lngcm\" (UniqueName: \"kubernetes.io/projected/266dbee0-3c74-4820-8165-1955c6ca832a-kube-api-access-lngcm\") pod \"nova-scheduler-0\" (UID: \"266dbee0-3c74-4820-8165-1955c6ca832a\") " pod="openstack/nova-scheduler-0" Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.290836 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.679087 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.772746 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qf8f\" (UniqueName: \"kubernetes.io/projected/4fb54f17-1620-4d7f-9fef-b9be9740a158-kube-api-access-9qf8f\") pod \"4fb54f17-1620-4d7f-9fef-b9be9740a158\" (UID: \"4fb54f17-1620-4d7f-9fef-b9be9740a158\") " Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.772847 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fb54f17-1620-4d7f-9fef-b9be9740a158-combined-ca-bundle\") pod \"4fb54f17-1620-4d7f-9fef-b9be9740a158\" (UID: \"4fb54f17-1620-4d7f-9fef-b9be9740a158\") " Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.772906 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fb54f17-1620-4d7f-9fef-b9be9740a158-config-data\") pod \"4fb54f17-1620-4d7f-9fef-b9be9740a158\" (UID: \"4fb54f17-1620-4d7f-9fef-b9be9740a158\") " Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.773000 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fb54f17-1620-4d7f-9fef-b9be9740a158-nova-metadata-tls-certs\") pod \"4fb54f17-1620-4d7f-9fef-b9be9740a158\" (UID: \"4fb54f17-1620-4d7f-9fef-b9be9740a158\") " Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.773039 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4fb54f17-1620-4d7f-9fef-b9be9740a158-logs\") pod \"4fb54f17-1620-4d7f-9fef-b9be9740a158\" (UID: \"4fb54f17-1620-4d7f-9fef-b9be9740a158\") " Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.773899 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fb54f17-1620-4d7f-9fef-b9be9740a158-logs" (OuterVolumeSpecName: "logs") pod "4fb54f17-1620-4d7f-9fef-b9be9740a158" (UID: "4fb54f17-1620-4d7f-9fef-b9be9740a158"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.778030 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fb54f17-1620-4d7f-9fef-b9be9740a158-kube-api-access-9qf8f" (OuterVolumeSpecName: "kube-api-access-9qf8f") pod "4fb54f17-1620-4d7f-9fef-b9be9740a158" (UID: "4fb54f17-1620-4d7f-9fef-b9be9740a158"). InnerVolumeSpecName "kube-api-access-9qf8f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.805690 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fb54f17-1620-4d7f-9fef-b9be9740a158-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4fb54f17-1620-4d7f-9fef-b9be9740a158" (UID: "4fb54f17-1620-4d7f-9fef-b9be9740a158"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.805833 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fb54f17-1620-4d7f-9fef-b9be9740a158-config-data" (OuterVolumeSpecName: "config-data") pod "4fb54f17-1620-4d7f-9fef-b9be9740a158" (UID: "4fb54f17-1620-4d7f-9fef-b9be9740a158"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.846072 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fb54f17-1620-4d7f-9fef-b9be9740a158-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "4fb54f17-1620-4d7f-9fef-b9be9740a158" (UID: "4fb54f17-1620-4d7f-9fef-b9be9740a158"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.874595 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fb54f17-1620-4d7f-9fef-b9be9740a158-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.874626 5039 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fb54f17-1620-4d7f-9fef-b9be9740a158-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.874636 5039 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4fb54f17-1620-4d7f-9fef-b9be9740a158-logs\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.874646 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qf8f\" (UniqueName: \"kubernetes.io/projected/4fb54f17-1620-4d7f-9fef-b9be9740a158-kube-api-access-9qf8f\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.874656 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fb54f17-1620-4d7f-9fef-b9be9740a158-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:30 crc kubenswrapper[5039]: W0130 13:27:30.877662 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod266dbee0_3c74_4820_8165_1955c6ca832a.slice/crio-4e970b27c6b08be090482e99d6bc8dc4ccd342764fbb2d360d9d3b5148fed0b9 WatchSource:0}: Error finding container 4e970b27c6b08be090482e99d6bc8dc4ccd342764fbb2d360d9d3b5148fed0b9: Status 404 returned error can't find the container with id 4e970b27c6b08be090482e99d6bc8dc4ccd342764fbb2d360d9d3b5148fed0b9 Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.878023 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.918967 5039 generic.go:334] "Generic (PLEG): container finished" 
podID="4fb54f17-1620-4d7f-9fef-b9be9740a158" containerID="8b1254c7577aed1ac86304b54a6036e54aab0ba4ab37c40460806c6c4cf1fa17" exitCode=0 Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.919071 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.919050 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4fb54f17-1620-4d7f-9fef-b9be9740a158","Type":"ContainerDied","Data":"8b1254c7577aed1ac86304b54a6036e54aab0ba4ab37c40460806c6c4cf1fa17"} Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.919512 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4fb54f17-1620-4d7f-9fef-b9be9740a158","Type":"ContainerDied","Data":"637458d60e7e582c82e872fa121cd55e98b2aafb1cefa0463afbfd7c95ed7443"} Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.919537 5039 scope.go:117] "RemoveContainer" containerID="8b1254c7577aed1ac86304b54a6036e54aab0ba4ab37c40460806c6c4cf1fa17" Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.921353 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"266dbee0-3c74-4820-8165-1955c6ca832a","Type":"ContainerStarted","Data":"4e970b27c6b08be090482e99d6bc8dc4ccd342764fbb2d360d9d3b5148fed0b9"} Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.958256 5039 scope.go:117] "RemoveContainer" containerID="bcf95642277344858a3db7b29769be0e17e002718e1562c6dadf74305f21f638" Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.966966 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.986617 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.991325 5039 scope.go:117] "RemoveContainer" containerID="8b1254c7577aed1ac86304b54a6036e54aab0ba4ab37c40460806c6c4cf1fa17" Jan 30 13:27:30 crc kubenswrapper[5039]: E0130 13:27:30.994805 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b1254c7577aed1ac86304b54a6036e54aab0ba4ab37c40460806c6c4cf1fa17\": container with ID starting with 8b1254c7577aed1ac86304b54a6036e54aab0ba4ab37c40460806c6c4cf1fa17 not found: ID does not exist" containerID="8b1254c7577aed1ac86304b54a6036e54aab0ba4ab37c40460806c6c4cf1fa17" Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.994856 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b1254c7577aed1ac86304b54a6036e54aab0ba4ab37c40460806c6c4cf1fa17"} err="failed to get container status \"8b1254c7577aed1ac86304b54a6036e54aab0ba4ab37c40460806c6c4cf1fa17\": rpc error: code = NotFound desc = could not find container \"8b1254c7577aed1ac86304b54a6036e54aab0ba4ab37c40460806c6c4cf1fa17\": container with ID starting with 8b1254c7577aed1ac86304b54a6036e54aab0ba4ab37c40460806c6c4cf1fa17 not found: ID does not exist" Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.994883 5039 scope.go:117] "RemoveContainer" containerID="bcf95642277344858a3db7b29769be0e17e002718e1562c6dadf74305f21f638" Jan 30 13:27:30 crc kubenswrapper[5039]: E0130 13:27:30.995408 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bcf95642277344858a3db7b29769be0e17e002718e1562c6dadf74305f21f638\": 
container with ID starting with bcf95642277344858a3db7b29769be0e17e002718e1562c6dadf74305f21f638 not found: ID does not exist" containerID="bcf95642277344858a3db7b29769be0e17e002718e1562c6dadf74305f21f638" Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.995451 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcf95642277344858a3db7b29769be0e17e002718e1562c6dadf74305f21f638"} err="failed to get container status \"bcf95642277344858a3db7b29769be0e17e002718e1562c6dadf74305f21f638\": rpc error: code = NotFound desc = could not find container \"bcf95642277344858a3db7b29769be0e17e002718e1562c6dadf74305f21f638\": container with ID starting with bcf95642277344858a3db7b29769be0e17e002718e1562c6dadf74305f21f638 not found: ID does not exist" Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.997464 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 13:27:30 crc kubenswrapper[5039]: E0130 13:27:30.997993 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fb54f17-1620-4d7f-9fef-b9be9740a158" containerName="nova-metadata-log" Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.998031 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fb54f17-1620-4d7f-9fef-b9be9740a158" containerName="nova-metadata-log" Jan 30 13:27:30 crc kubenswrapper[5039]: E0130 13:27:30.998073 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fb54f17-1620-4d7f-9fef-b9be9740a158" containerName="nova-metadata-metadata" Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.998081 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fb54f17-1620-4d7f-9fef-b9be9740a158" containerName="nova-metadata-metadata" Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.998348 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fb54f17-1620-4d7f-9fef-b9be9740a158" containerName="nova-metadata-metadata" Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.998373 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fb54f17-1620-4d7f-9fef-b9be9740a158" containerName="nova-metadata-log" Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.999677 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 13:27:31 crc kubenswrapper[5039]: I0130 13:27:31.001216 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 13:27:31 crc kubenswrapper[5039]: I0130 13:27:31.002387 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 30 13:27:31 crc kubenswrapper[5039]: I0130 13:27:31.009334 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 13:27:31 crc kubenswrapper[5039]: I0130 13:27:31.083543 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03ea6fff-3bc2-4830-b1f5-53d20cd2a801-logs\") pod \"nova-metadata-0\" (UID: \"03ea6fff-3bc2-4830-b1f5-53d20cd2a801\") " pod="openstack/nova-metadata-0" Jan 30 13:27:31 crc kubenswrapper[5039]: I0130 13:27:31.083586 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03ea6fff-3bc2-4830-b1f5-53d20cd2a801-config-data\") pod \"nova-metadata-0\" (UID: \"03ea6fff-3bc2-4830-b1f5-53d20cd2a801\") " pod="openstack/nova-metadata-0" Jan 30 13:27:31 crc kubenswrapper[5039]: I0130 13:27:31.083838 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03ea6fff-3bc2-4830-b1f5-53d20cd2a801-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"03ea6fff-3bc2-4830-b1f5-53d20cd2a801\") " pod="openstack/nova-metadata-0" Jan 30 13:27:31 crc kubenswrapper[5039]: I0130 13:27:31.084001 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/03ea6fff-3bc2-4830-b1f5-53d20cd2a801-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"03ea6fff-3bc2-4830-b1f5-53d20cd2a801\") " pod="openstack/nova-metadata-0" Jan 30 13:27:31 crc kubenswrapper[5039]: I0130 13:27:31.084210 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqcd9\" (UniqueName: \"kubernetes.io/projected/03ea6fff-3bc2-4830-b1f5-53d20cd2a801-kube-api-access-tqcd9\") pod \"nova-metadata-0\" (UID: \"03ea6fff-3bc2-4830-b1f5-53d20cd2a801\") " pod="openstack/nova-metadata-0" Jan 30 13:27:31 crc kubenswrapper[5039]: I0130 13:27:31.186439 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03ea6fff-3bc2-4830-b1f5-53d20cd2a801-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"03ea6fff-3bc2-4830-b1f5-53d20cd2a801\") " pod="openstack/nova-metadata-0" Jan 30 13:27:31 crc kubenswrapper[5039]: I0130 13:27:31.186519 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/03ea6fff-3bc2-4830-b1f5-53d20cd2a801-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"03ea6fff-3bc2-4830-b1f5-53d20cd2a801\") " pod="openstack/nova-metadata-0" Jan 30 13:27:31 crc kubenswrapper[5039]: I0130 13:27:31.186551 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqcd9\" (UniqueName: \"kubernetes.io/projected/03ea6fff-3bc2-4830-b1f5-53d20cd2a801-kube-api-access-tqcd9\") pod \"nova-metadata-0\" (UID: 
\"03ea6fff-3bc2-4830-b1f5-53d20cd2a801\") " pod="openstack/nova-metadata-0" Jan 30 13:27:31 crc kubenswrapper[5039]: I0130 13:27:31.186595 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03ea6fff-3bc2-4830-b1f5-53d20cd2a801-logs\") pod \"nova-metadata-0\" (UID: \"03ea6fff-3bc2-4830-b1f5-53d20cd2a801\") " pod="openstack/nova-metadata-0" Jan 30 13:27:31 crc kubenswrapper[5039]: I0130 13:27:31.186613 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03ea6fff-3bc2-4830-b1f5-53d20cd2a801-config-data\") pod \"nova-metadata-0\" (UID: \"03ea6fff-3bc2-4830-b1f5-53d20cd2a801\") " pod="openstack/nova-metadata-0" Jan 30 13:27:31 crc kubenswrapper[5039]: I0130 13:27:31.187791 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03ea6fff-3bc2-4830-b1f5-53d20cd2a801-logs\") pod \"nova-metadata-0\" (UID: \"03ea6fff-3bc2-4830-b1f5-53d20cd2a801\") " pod="openstack/nova-metadata-0" Jan 30 13:27:31 crc kubenswrapper[5039]: I0130 13:27:31.192504 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03ea6fff-3bc2-4830-b1f5-53d20cd2a801-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"03ea6fff-3bc2-4830-b1f5-53d20cd2a801\") " pod="openstack/nova-metadata-0" Jan 30 13:27:31 crc kubenswrapper[5039]: I0130 13:27:31.194068 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03ea6fff-3bc2-4830-b1f5-53d20cd2a801-config-data\") pod \"nova-metadata-0\" (UID: \"03ea6fff-3bc2-4830-b1f5-53d20cd2a801\") " pod="openstack/nova-metadata-0" Jan 30 13:27:31 crc kubenswrapper[5039]: I0130 13:27:31.201715 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/03ea6fff-3bc2-4830-b1f5-53d20cd2a801-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"03ea6fff-3bc2-4830-b1f5-53d20cd2a801\") " pod="openstack/nova-metadata-0" Jan 30 13:27:31 crc kubenswrapper[5039]: I0130 13:27:31.204743 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqcd9\" (UniqueName: \"kubernetes.io/projected/03ea6fff-3bc2-4830-b1f5-53d20cd2a801-kube-api-access-tqcd9\") pod \"nova-metadata-0\" (UID: \"03ea6fff-3bc2-4830-b1f5-53d20cd2a801\") " pod="openstack/nova-metadata-0" Jan 30 13:27:31 crc kubenswrapper[5039]: I0130 13:27:31.334926 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 13:27:31 crc kubenswrapper[5039]: I0130 13:27:31.875449 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 13:27:31 crc kubenswrapper[5039]: W0130 13:27:31.880279 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod03ea6fff_3bc2_4830_b1f5_53d20cd2a801.slice/crio-5b5589cafdaafe198e4ef2e0231010c77ff3f334696c9a31b06df695ad105768 WatchSource:0}: Error finding container 5b5589cafdaafe198e4ef2e0231010c77ff3f334696c9a31b06df695ad105768: Status 404 returned error can't find the container with id 5b5589cafdaafe198e4ef2e0231010c77ff3f334696c9a31b06df695ad105768 Jan 30 13:27:31 crc kubenswrapper[5039]: I0130 13:27:31.933106 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"03ea6fff-3bc2-4830-b1f5-53d20cd2a801","Type":"ContainerStarted","Data":"5b5589cafdaafe198e4ef2e0231010c77ff3f334696c9a31b06df695ad105768"} Jan 30 13:27:31 crc kubenswrapper[5039]: I0130 13:27:31.936754 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"266dbee0-3c74-4820-8165-1955c6ca832a","Type":"ContainerStarted","Data":"edeb03fc7b1f7c78ab64ce18b567934eb7d265834e26ab22d317bef24cbcb1e7"} Jan 30 13:27:31 crc kubenswrapper[5039]: I0130 13:27:31.963529 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.963506643 podStartE2EDuration="2.963506643s" podCreationTimestamp="2026-01-30 13:27:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:27:31.957891655 +0000 UTC m=+1416.618572922" watchObservedRunningTime="2026-01-30 13:27:31.963506643 +0000 UTC m=+1416.624187880" Jan 30 13:27:32 crc kubenswrapper[5039]: I0130 13:27:32.121126 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fb54f17-1620-4d7f-9fef-b9be9740a158" path="/var/lib/kubelet/pods/4fb54f17-1620-4d7f-9fef-b9be9740a158/volumes" Jan 30 13:27:32 crc kubenswrapper[5039]: I0130 13:27:32.947988 5039 generic.go:334] "Generic (PLEG): container finished" podID="8b9b2b78-5b27-4544-9c74-990d418894c8" containerID="46cdd6374825345d3e1406a5a1876895000d528adec77a9193e1137b7dc2eb04" exitCode=0 Jan 30 13:27:32 crc kubenswrapper[5039]: I0130 13:27:32.948181 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8b9b2b78-5b27-4544-9c74-990d418894c8","Type":"ContainerDied","Data":"46cdd6374825345d3e1406a5a1876895000d528adec77a9193e1137b7dc2eb04"} Jan 30 13:27:32 crc kubenswrapper[5039]: I0130 13:27:32.948828 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8b9b2b78-5b27-4544-9c74-990d418894c8","Type":"ContainerDied","Data":"cfd9c78c7f863f8fce7a45ddd5a08a98c6b7eaef43b213b0e013a06c8421222f"} Jan 30 13:27:32 crc kubenswrapper[5039]: I0130 13:27:32.948850 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cfd9c78c7f863f8fce7a45ddd5a08a98c6b7eaef43b213b0e013a06c8421222f" Jan 30 13:27:32 crc kubenswrapper[5039]: I0130 13:27:32.950805 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"03ea6fff-3bc2-4830-b1f5-53d20cd2a801","Type":"ContainerStarted","Data":"ec276d758e8b1629fbc47814ca11f272acbab2233d4e31135f118cd217e481cf"} Jan 30 13:27:32 crc 
kubenswrapper[5039]: I0130 13:27:32.950844 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"03ea6fff-3bc2-4830-b1f5-53d20cd2a801","Type":"ContainerStarted","Data":"3e63cef290b9c322a18fac31a7871a3b878e755d7e458a6ae9c29147b528c3fc"} Jan 30 13:27:32 crc kubenswrapper[5039]: I0130 13:27:32.977025 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.976987369 podStartE2EDuration="2.976987369s" podCreationTimestamp="2026-01-30 13:27:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:27:32.974947865 +0000 UTC m=+1417.635629112" watchObservedRunningTime="2026-01-30 13:27:32.976987369 +0000 UTC m=+1417.637668596" Jan 30 13:27:33 crc kubenswrapper[5039]: I0130 13:27:33.069620 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 13:27:33 crc kubenswrapper[5039]: I0130 13:27:33.146729 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b9b2b78-5b27-4544-9c74-990d418894c8-internal-tls-certs\") pod \"8b9b2b78-5b27-4544-9c74-990d418894c8\" (UID: \"8b9b2b78-5b27-4544-9c74-990d418894c8\") " Jan 30 13:27:33 crc kubenswrapper[5039]: I0130 13:27:33.146836 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgp4b\" (UniqueName: \"kubernetes.io/projected/8b9b2b78-5b27-4544-9c74-990d418894c8-kube-api-access-dgp4b\") pod \"8b9b2b78-5b27-4544-9c74-990d418894c8\" (UID: \"8b9b2b78-5b27-4544-9c74-990d418894c8\") " Jan 30 13:27:33 crc kubenswrapper[5039]: I0130 13:27:33.146927 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b9b2b78-5b27-4544-9c74-990d418894c8-logs\") pod \"8b9b2b78-5b27-4544-9c74-990d418894c8\" (UID: \"8b9b2b78-5b27-4544-9c74-990d418894c8\") " Jan 30 13:27:33 crc kubenswrapper[5039]: I0130 13:27:33.146949 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b9b2b78-5b27-4544-9c74-990d418894c8-public-tls-certs\") pod \"8b9b2b78-5b27-4544-9c74-990d418894c8\" (UID: \"8b9b2b78-5b27-4544-9c74-990d418894c8\") " Jan 30 13:27:33 crc kubenswrapper[5039]: I0130 13:27:33.146980 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b9b2b78-5b27-4544-9c74-990d418894c8-config-data\") pod \"8b9b2b78-5b27-4544-9c74-990d418894c8\" (UID: \"8b9b2b78-5b27-4544-9c74-990d418894c8\") " Jan 30 13:27:33 crc kubenswrapper[5039]: I0130 13:27:33.147024 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b9b2b78-5b27-4544-9c74-990d418894c8-combined-ca-bundle\") pod \"8b9b2b78-5b27-4544-9c74-990d418894c8\" (UID: \"8b9b2b78-5b27-4544-9c74-990d418894c8\") " Jan 30 13:27:33 crc kubenswrapper[5039]: I0130 13:27:33.149051 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b9b2b78-5b27-4544-9c74-990d418894c8-logs" (OuterVolumeSpecName: "logs") pod "8b9b2b78-5b27-4544-9c74-990d418894c8" (UID: "8b9b2b78-5b27-4544-9c74-990d418894c8"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:27:33 crc kubenswrapper[5039]: I0130 13:27:33.154565 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b9b2b78-5b27-4544-9c74-990d418894c8-kube-api-access-dgp4b" (OuterVolumeSpecName: "kube-api-access-dgp4b") pod "8b9b2b78-5b27-4544-9c74-990d418894c8" (UID: "8b9b2b78-5b27-4544-9c74-990d418894c8"). InnerVolumeSpecName "kube-api-access-dgp4b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:27:33 crc kubenswrapper[5039]: I0130 13:27:33.173051 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b9b2b78-5b27-4544-9c74-990d418894c8-config-data" (OuterVolumeSpecName: "config-data") pod "8b9b2b78-5b27-4544-9c74-990d418894c8" (UID: "8b9b2b78-5b27-4544-9c74-990d418894c8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:27:33 crc kubenswrapper[5039]: I0130 13:27:33.173343 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b9b2b78-5b27-4544-9c74-990d418894c8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8b9b2b78-5b27-4544-9c74-990d418894c8" (UID: "8b9b2b78-5b27-4544-9c74-990d418894c8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:27:33 crc kubenswrapper[5039]: I0130 13:27:33.205303 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b9b2b78-5b27-4544-9c74-990d418894c8-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8b9b2b78-5b27-4544-9c74-990d418894c8" (UID: "8b9b2b78-5b27-4544-9c74-990d418894c8"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:27:33 crc kubenswrapper[5039]: I0130 13:27:33.213144 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b9b2b78-5b27-4544-9c74-990d418894c8-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8b9b2b78-5b27-4544-9c74-990d418894c8" (UID: "8b9b2b78-5b27-4544-9c74-990d418894c8"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:27:33 crc kubenswrapper[5039]: I0130 13:27:33.249254 5039 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b9b2b78-5b27-4544-9c74-990d418894c8-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:33 crc kubenswrapper[5039]: I0130 13:27:33.249292 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgp4b\" (UniqueName: \"kubernetes.io/projected/8b9b2b78-5b27-4544-9c74-990d418894c8-kube-api-access-dgp4b\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:33 crc kubenswrapper[5039]: I0130 13:27:33.249306 5039 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b9b2b78-5b27-4544-9c74-990d418894c8-logs\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:33 crc kubenswrapper[5039]: I0130 13:27:33.249317 5039 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b9b2b78-5b27-4544-9c74-990d418894c8-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:33 crc kubenswrapper[5039]: I0130 13:27:33.249328 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b9b2b78-5b27-4544-9c74-990d418894c8-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:33 crc kubenswrapper[5039]: I0130 13:27:33.249338 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b9b2b78-5b27-4544-9c74-990d418894c8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:33 crc kubenswrapper[5039]: I0130 13:27:33.965145 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 13:27:34 crc kubenswrapper[5039]: I0130 13:27:34.020191 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 13:27:34 crc kubenswrapper[5039]: I0130 13:27:34.030227 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 13:27:34 crc kubenswrapper[5039]: I0130 13:27:34.061335 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 13:27:34 crc kubenswrapper[5039]: E0130 13:27:34.062193 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b9b2b78-5b27-4544-9c74-990d418894c8" containerName="nova-api-log" Jan 30 13:27:34 crc kubenswrapper[5039]: I0130 13:27:34.062239 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b9b2b78-5b27-4544-9c74-990d418894c8" containerName="nova-api-log" Jan 30 13:27:34 crc kubenswrapper[5039]: E0130 13:27:34.062289 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b9b2b78-5b27-4544-9c74-990d418894c8" containerName="nova-api-api" Jan 30 13:27:34 crc kubenswrapper[5039]: I0130 13:27:34.062307 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b9b2b78-5b27-4544-9c74-990d418894c8" containerName="nova-api-api" Jan 30 13:27:34 crc kubenswrapper[5039]: I0130 13:27:34.062777 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b9b2b78-5b27-4544-9c74-990d418894c8" containerName="nova-api-log" Jan 30 13:27:34 crc kubenswrapper[5039]: I0130 13:27:34.062847 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b9b2b78-5b27-4544-9c74-990d418894c8" containerName="nova-api-api" Jan 30 13:27:34 crc kubenswrapper[5039]: I0130 13:27:34.065181 5039 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 13:27:34 crc kubenswrapper[5039]: I0130 13:27:34.070251 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 30 13:27:34 crc kubenswrapper[5039]: I0130 13:27:34.073245 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 13:27:34 crc kubenswrapper[5039]: I0130 13:27:34.075099 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 13:27:34 crc kubenswrapper[5039]: I0130 13:27:34.082525 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 30 13:27:34 crc kubenswrapper[5039]: I0130 13:27:34.113184 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b9b2b78-5b27-4544-9c74-990d418894c8" path="/var/lib/kubelet/pods/8b9b2b78-5b27-4544-9c74-990d418894c8/volumes" Jan 30 13:27:34 crc kubenswrapper[5039]: I0130 13:27:34.167712 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2090e8f7-2d03-4d3e-914b-6672655d35be-config-data\") pod \"nova-api-0\" (UID: \"2090e8f7-2d03-4d3e-914b-6672655d35be\") " pod="openstack/nova-api-0" Jan 30 13:27:34 crc kubenswrapper[5039]: I0130 13:27:34.167843 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2090e8f7-2d03-4d3e-914b-6672655d35be-logs\") pod \"nova-api-0\" (UID: \"2090e8f7-2d03-4d3e-914b-6672655d35be\") " pod="openstack/nova-api-0" Jan 30 13:27:34 crc kubenswrapper[5039]: I0130 13:27:34.167961 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2090e8f7-2d03-4d3e-914b-6672655d35be-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2090e8f7-2d03-4d3e-914b-6672655d35be\") " pod="openstack/nova-api-0" Jan 30 13:27:34 crc kubenswrapper[5039]: I0130 13:27:34.168198 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2090e8f7-2d03-4d3e-914b-6672655d35be-public-tls-certs\") pod \"nova-api-0\" (UID: \"2090e8f7-2d03-4d3e-914b-6672655d35be\") " pod="openstack/nova-api-0" Jan 30 13:27:34 crc kubenswrapper[5039]: I0130 13:27:34.168250 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m45cp\" (UniqueName: \"kubernetes.io/projected/2090e8f7-2d03-4d3e-914b-6672655d35be-kube-api-access-m45cp\") pod \"nova-api-0\" (UID: \"2090e8f7-2d03-4d3e-914b-6672655d35be\") " pod="openstack/nova-api-0" Jan 30 13:27:34 crc kubenswrapper[5039]: I0130 13:27:34.168296 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2090e8f7-2d03-4d3e-914b-6672655d35be-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2090e8f7-2d03-4d3e-914b-6672655d35be\") " pod="openstack/nova-api-0" Jan 30 13:27:34 crc kubenswrapper[5039]: I0130 13:27:34.270057 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2090e8f7-2d03-4d3e-914b-6672655d35be-public-tls-certs\") pod \"nova-api-0\" (UID: \"2090e8f7-2d03-4d3e-914b-6672655d35be\") " pod="openstack/nova-api-0" Jan 30 13:27:34 crc 
kubenswrapper[5039]: I0130 13:27:34.270115 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m45cp\" (UniqueName: \"kubernetes.io/projected/2090e8f7-2d03-4d3e-914b-6672655d35be-kube-api-access-m45cp\") pod \"nova-api-0\" (UID: \"2090e8f7-2d03-4d3e-914b-6672655d35be\") " pod="openstack/nova-api-0" Jan 30 13:27:34 crc kubenswrapper[5039]: I0130 13:27:34.270144 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2090e8f7-2d03-4d3e-914b-6672655d35be-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2090e8f7-2d03-4d3e-914b-6672655d35be\") " pod="openstack/nova-api-0" Jan 30 13:27:34 crc kubenswrapper[5039]: I0130 13:27:34.270257 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2090e8f7-2d03-4d3e-914b-6672655d35be-config-data\") pod \"nova-api-0\" (UID: \"2090e8f7-2d03-4d3e-914b-6672655d35be\") " pod="openstack/nova-api-0" Jan 30 13:27:34 crc kubenswrapper[5039]: I0130 13:27:34.270297 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2090e8f7-2d03-4d3e-914b-6672655d35be-logs\") pod \"nova-api-0\" (UID: \"2090e8f7-2d03-4d3e-914b-6672655d35be\") " pod="openstack/nova-api-0" Jan 30 13:27:34 crc kubenswrapper[5039]: I0130 13:27:34.270340 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2090e8f7-2d03-4d3e-914b-6672655d35be-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2090e8f7-2d03-4d3e-914b-6672655d35be\") " pod="openstack/nova-api-0" Jan 30 13:27:34 crc kubenswrapper[5039]: I0130 13:27:34.272001 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2090e8f7-2d03-4d3e-914b-6672655d35be-logs\") pod \"nova-api-0\" (UID: \"2090e8f7-2d03-4d3e-914b-6672655d35be\") " pod="openstack/nova-api-0" Jan 30 13:27:34 crc kubenswrapper[5039]: I0130 13:27:34.276313 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2090e8f7-2d03-4d3e-914b-6672655d35be-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2090e8f7-2d03-4d3e-914b-6672655d35be\") " pod="openstack/nova-api-0" Jan 30 13:27:34 crc kubenswrapper[5039]: I0130 13:27:34.280936 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2090e8f7-2d03-4d3e-914b-6672655d35be-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2090e8f7-2d03-4d3e-914b-6672655d35be\") " pod="openstack/nova-api-0" Jan 30 13:27:34 crc kubenswrapper[5039]: I0130 13:27:34.283923 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2090e8f7-2d03-4d3e-914b-6672655d35be-public-tls-certs\") pod \"nova-api-0\" (UID: \"2090e8f7-2d03-4d3e-914b-6672655d35be\") " pod="openstack/nova-api-0" Jan 30 13:27:34 crc kubenswrapper[5039]: I0130 13:27:34.285395 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2090e8f7-2d03-4d3e-914b-6672655d35be-config-data\") pod \"nova-api-0\" (UID: \"2090e8f7-2d03-4d3e-914b-6672655d35be\") " pod="openstack/nova-api-0" Jan 30 13:27:34 crc kubenswrapper[5039]: I0130 13:27:34.292877 5039 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-m45cp\" (UniqueName: \"kubernetes.io/projected/2090e8f7-2d03-4d3e-914b-6672655d35be-kube-api-access-m45cp\") pod \"nova-api-0\" (UID: \"2090e8f7-2d03-4d3e-914b-6672655d35be\") " pod="openstack/nova-api-0" Jan 30 13:27:34 crc kubenswrapper[5039]: I0130 13:27:34.400193 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 13:27:34 crc kubenswrapper[5039]: I0130 13:27:34.937315 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 13:27:34 crc kubenswrapper[5039]: I0130 13:27:34.975981 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2090e8f7-2d03-4d3e-914b-6672655d35be","Type":"ContainerStarted","Data":"21caa728b45d4cd46b72a58777a9f2bd19807862ed3d4ac1d9769af4fe89d6d4"} Jan 30 13:27:35 crc kubenswrapper[5039]: I0130 13:27:35.291592 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 30 13:27:35 crc kubenswrapper[5039]: I0130 13:27:35.582195 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="4fb54f17-1620-4d7f-9fef-b9be9740a158" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.192:8775/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 13:27:35 crc kubenswrapper[5039]: I0130 13:27:35.582322 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="4fb54f17-1620-4d7f-9fef-b9be9740a158" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.192:8775/\": dial tcp 10.217.0.192:8775: i/o timeout" Jan 30 13:27:35 crc kubenswrapper[5039]: I0130 13:27:35.995712 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2090e8f7-2d03-4d3e-914b-6672655d35be","Type":"ContainerStarted","Data":"5da3b6bf1f3c105594b3fd7fb80dc64462fc055bc8ad723c3ee5ff31777202c5"} Jan 30 13:27:35 crc kubenswrapper[5039]: I0130 13:27:35.995792 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2090e8f7-2d03-4d3e-914b-6672655d35be","Type":"ContainerStarted","Data":"d11e43f07a403d758ee01061766af01b228378dcc7b6c86d6a066828863d2c31"} Jan 30 13:27:36 crc kubenswrapper[5039]: I0130 13:27:36.041082 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.041053202 podStartE2EDuration="2.041053202s" podCreationTimestamp="2026-01-30 13:27:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:27:36.027378631 +0000 UTC m=+1420.688059928" watchObservedRunningTime="2026-01-30 13:27:36.041053202 +0000 UTC m=+1420.701734439" Jan 30 13:27:36 crc kubenswrapper[5039]: I0130 13:27:36.335795 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 13:27:36 crc kubenswrapper[5039]: I0130 13:27:36.335902 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 13:27:37 crc kubenswrapper[5039]: I0130 13:27:37.742629 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:27:37 crc kubenswrapper[5039]: I0130 13:27:37.742964 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:27:40 crc kubenswrapper[5039]: I0130 13:27:40.291830 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 30 13:27:40 crc kubenswrapper[5039]: I0130 13:27:40.327933 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 30 13:27:41 crc kubenswrapper[5039]: I0130 13:27:41.102826 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 30 13:27:41 crc kubenswrapper[5039]: I0130 13:27:41.335470 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 13:27:41 crc kubenswrapper[5039]: I0130 13:27:41.335531 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 13:27:42 crc kubenswrapper[5039]: I0130 13:27:42.353251 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="03ea6fff-3bc2-4830-b1f5-53d20cd2a801" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.204:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 13:27:42 crc kubenswrapper[5039]: I0130 13:27:42.353252 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="03ea6fff-3bc2-4830-b1f5-53d20cd2a801" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.204:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 13:27:44 crc kubenswrapper[5039]: I0130 13:27:44.401129 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 13:27:44 crc kubenswrapper[5039]: I0130 13:27:44.401374 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 13:27:45 crc kubenswrapper[5039]: I0130 13:27:45.416185 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2090e8f7-2d03-4d3e-914b-6672655d35be" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.205:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 13:27:45 crc kubenswrapper[5039]: I0130 13:27:45.416186 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2090e8f7-2d03-4d3e-914b-6672655d35be" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.205:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 13:27:50 crc kubenswrapper[5039]: I0130 13:27:50.309562 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 30 13:27:51 crc kubenswrapper[5039]: I0130 13:27:51.312551 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-r4p7m"] Jan 30 13:27:51 crc kubenswrapper[5039]: I0130 13:27:51.318138 5039 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r4p7m" Jan 30 13:27:51 crc kubenswrapper[5039]: I0130 13:27:51.335956 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r4p7m"] Jan 30 13:27:51 crc kubenswrapper[5039]: I0130 13:27:51.345166 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 13:27:51 crc kubenswrapper[5039]: I0130 13:27:51.346470 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 13:27:51 crc kubenswrapper[5039]: I0130 13:27:51.354833 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 13:27:51 crc kubenswrapper[5039]: I0130 13:27:51.428685 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2885x\" (UniqueName: \"kubernetes.io/projected/aaf62f63-8fea-4671-8a36-21ca1d4fbc37-kube-api-access-2885x\") pod \"redhat-operators-r4p7m\" (UID: \"aaf62f63-8fea-4671-8a36-21ca1d4fbc37\") " pod="openshift-marketplace/redhat-operators-r4p7m" Jan 30 13:27:51 crc kubenswrapper[5039]: I0130 13:27:51.428852 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aaf62f63-8fea-4671-8a36-21ca1d4fbc37-catalog-content\") pod \"redhat-operators-r4p7m\" (UID: \"aaf62f63-8fea-4671-8a36-21ca1d4fbc37\") " pod="openshift-marketplace/redhat-operators-r4p7m" Jan 30 13:27:51 crc kubenswrapper[5039]: I0130 13:27:51.428918 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aaf62f63-8fea-4671-8a36-21ca1d4fbc37-utilities\") pod \"redhat-operators-r4p7m\" (UID: \"aaf62f63-8fea-4671-8a36-21ca1d4fbc37\") " pod="openshift-marketplace/redhat-operators-r4p7m" Jan 30 13:27:51 crc kubenswrapper[5039]: I0130 13:27:51.531055 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aaf62f63-8fea-4671-8a36-21ca1d4fbc37-catalog-content\") pod \"redhat-operators-r4p7m\" (UID: \"aaf62f63-8fea-4671-8a36-21ca1d4fbc37\") " pod="openshift-marketplace/redhat-operators-r4p7m" Jan 30 13:27:51 crc kubenswrapper[5039]: I0130 13:27:51.531154 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aaf62f63-8fea-4671-8a36-21ca1d4fbc37-utilities\") pod \"redhat-operators-r4p7m\" (UID: \"aaf62f63-8fea-4671-8a36-21ca1d4fbc37\") " pod="openshift-marketplace/redhat-operators-r4p7m" Jan 30 13:27:51 crc kubenswrapper[5039]: I0130 13:27:51.531214 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2885x\" (UniqueName: \"kubernetes.io/projected/aaf62f63-8fea-4671-8a36-21ca1d4fbc37-kube-api-access-2885x\") pod \"redhat-operators-r4p7m\" (UID: \"aaf62f63-8fea-4671-8a36-21ca1d4fbc37\") " pod="openshift-marketplace/redhat-operators-r4p7m" Jan 30 13:27:51 crc kubenswrapper[5039]: I0130 13:27:51.531760 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aaf62f63-8fea-4671-8a36-21ca1d4fbc37-utilities\") pod \"redhat-operators-r4p7m\" (UID: \"aaf62f63-8fea-4671-8a36-21ca1d4fbc37\") " pod="openshift-marketplace/redhat-operators-r4p7m" Jan 30 13:27:51 
crc kubenswrapper[5039]: I0130 13:27:51.531983 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aaf62f63-8fea-4671-8a36-21ca1d4fbc37-catalog-content\") pod \"redhat-operators-r4p7m\" (UID: \"aaf62f63-8fea-4671-8a36-21ca1d4fbc37\") " pod="openshift-marketplace/redhat-operators-r4p7m" Jan 30 13:27:51 crc kubenswrapper[5039]: I0130 13:27:51.552129 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2885x\" (UniqueName: \"kubernetes.io/projected/aaf62f63-8fea-4671-8a36-21ca1d4fbc37-kube-api-access-2885x\") pod \"redhat-operators-r4p7m\" (UID: \"aaf62f63-8fea-4671-8a36-21ca1d4fbc37\") " pod="openshift-marketplace/redhat-operators-r4p7m" Jan 30 13:27:51 crc kubenswrapper[5039]: I0130 13:27:51.645294 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r4p7m" Jan 30 13:27:52 crc kubenswrapper[5039]: I0130 13:27:52.131438 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r4p7m"] Jan 30 13:27:52 crc kubenswrapper[5039]: I0130 13:27:52.186592 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r4p7m" event={"ID":"aaf62f63-8fea-4671-8a36-21ca1d4fbc37","Type":"ContainerStarted","Data":"04e17ffc019138be17500261beb1e8e91ab8a584a535c22c57cb0fca04b081b0"} Jan 30 13:27:52 crc kubenswrapper[5039]: I0130 13:27:52.191389 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 13:27:53 crc kubenswrapper[5039]: I0130 13:27:53.204160 5039 generic.go:334] "Generic (PLEG): container finished" podID="aaf62f63-8fea-4671-8a36-21ca1d4fbc37" containerID="7610ffbf7ecb40a6a1f4630fe1b480fd8962b9eb294182b49fb847e520d5e359" exitCode=0 Jan 30 13:27:53 crc kubenswrapper[5039]: I0130 13:27:53.204295 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r4p7m" event={"ID":"aaf62f63-8fea-4671-8a36-21ca1d4fbc37","Type":"ContainerDied","Data":"7610ffbf7ecb40a6a1f4630fe1b480fd8962b9eb294182b49fb847e520d5e359"} Jan 30 13:27:53 crc kubenswrapper[5039]: I0130 13:27:53.208832 5039 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 13:27:54 crc kubenswrapper[5039]: I0130 13:27:54.213550 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r4p7m" event={"ID":"aaf62f63-8fea-4671-8a36-21ca1d4fbc37","Type":"ContainerStarted","Data":"eb799511447ac70b669ed7cc136585617e1d0dbb85cec2bf34236bdd7a2983ae"} Jan 30 13:27:54 crc kubenswrapper[5039]: I0130 13:27:54.435372 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 13:27:54 crc kubenswrapper[5039]: I0130 13:27:54.436089 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 13:27:54 crc kubenswrapper[5039]: I0130 13:27:54.440628 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 13:27:54 crc kubenswrapper[5039]: I0130 13:27:54.448767 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 13:27:55 crc kubenswrapper[5039]: I0130 13:27:55.223090 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 13:27:55 crc kubenswrapper[5039]: I0130 13:27:55.233860 5039 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 13:27:56 crc kubenswrapper[5039]: I0130 13:27:56.241373 5039 generic.go:334] "Generic (PLEG): container finished" podID="aaf62f63-8fea-4671-8a36-21ca1d4fbc37" containerID="eb799511447ac70b669ed7cc136585617e1d0dbb85cec2bf34236bdd7a2983ae" exitCode=0 Jan 30 13:27:56 crc kubenswrapper[5039]: I0130 13:27:56.241522 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r4p7m" event={"ID":"aaf62f63-8fea-4671-8a36-21ca1d4fbc37","Type":"ContainerDied","Data":"eb799511447ac70b669ed7cc136585617e1d0dbb85cec2bf34236bdd7a2983ae"} Jan 30 13:27:58 crc kubenswrapper[5039]: I0130 13:27:58.267204 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r4p7m" event={"ID":"aaf62f63-8fea-4671-8a36-21ca1d4fbc37","Type":"ContainerStarted","Data":"46f5e847cf0740cecaf800a2f64157f64b7846af9869032f1313947adca280c5"} Jan 30 13:27:58 crc kubenswrapper[5039]: I0130 13:27:58.309815 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-r4p7m" podStartSLOduration=2.943847907 podStartE2EDuration="7.309788756s" podCreationTimestamp="2026-01-30 13:27:51 +0000 UTC" firstStartedPulling="2026-01-30 13:27:53.208322284 +0000 UTC m=+1437.869003541" lastFinishedPulling="2026-01-30 13:27:57.574263143 +0000 UTC m=+1442.234944390" observedRunningTime="2026-01-30 13:27:58.297590609 +0000 UTC m=+1442.958271876" watchObservedRunningTime="2026-01-30 13:27:58.309788756 +0000 UTC m=+1442.970470023" Jan 30 13:28:01 crc kubenswrapper[5039]: I0130 13:28:01.646625 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-r4p7m" Jan 30 13:28:01 crc kubenswrapper[5039]: I0130 13:28:01.646927 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-r4p7m" Jan 30 13:28:02 crc kubenswrapper[5039]: I0130 13:28:02.711093 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-r4p7m" podUID="aaf62f63-8fea-4671-8a36-21ca1d4fbc37" containerName="registry-server" probeResult="failure" output=< Jan 30 13:28:02 crc kubenswrapper[5039]: timeout: failed to connect service ":50051" within 1s Jan 30 13:28:02 crc kubenswrapper[5039]: > Jan 30 13:28:07 crc kubenswrapper[5039]: I0130 13:28:07.742455 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:28:07 crc kubenswrapper[5039]: I0130 13:28:07.743126 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:28:10 crc kubenswrapper[5039]: I0130 13:28:10.881291 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5666-account-create-update-zr44j"] Jan 30 13:28:10 crc kubenswrapper[5039]: I0130 13:28:10.882980 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5666-account-create-update-zr44j" Jan 30 13:28:10 crc kubenswrapper[5039]: I0130 13:28:10.891808 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 30 13:28:10 crc kubenswrapper[5039]: I0130 13:28:10.933700 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5666-account-create-update-zr44j"] Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.054100 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c8f6794-a2c1-4d54-a048-71db0a14213e-operator-scripts\") pod \"placement-5666-account-create-update-zr44j\" (UID: \"9c8f6794-a2c1-4d54-a048-71db0a14213e\") " pod="openstack/placement-5666-account-create-update-zr44j" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.054165 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfpxg\" (UniqueName: \"kubernetes.io/projected/9c8f6794-a2c1-4d54-a048-71db0a14213e-kube-api-access-dfpxg\") pod \"placement-5666-account-create-update-zr44j\" (UID: \"9c8f6794-a2c1-4d54-a048-71db0a14213e\") " pod="openstack/placement-5666-account-create-update-zr44j" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.063453 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.063732 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="268ed38d-d02d-4539-be5c-f461fde5d02b" containerName="openstackclient" containerID="cri-o://116d072bb48e4b065b5de330f7fd6107bd5b783a4981e9f40677abb9caf3a0b9" gracePeriod=2 Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.081494 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.096393 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.096641 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="f6a7de18-5bf6-4275-b6db-f19701d07001" containerName="cinder-scheduler" containerID="cri-o://257994bea3aa4d461d8ec0930db0b9b8b1ca22fbebd2eeed081b5830cad35d88" gracePeriod=30 Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.097059 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="f6a7de18-5bf6-4275-b6db-f19701d07001" containerName="probe" containerID="cri-o://4ced8998271ec1e934a1c34f39c4cc277022e88ff34907d478325bce8a489b7b" gracePeriod=30 Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.120853 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5666-account-create-update-cbw62"] Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.148393 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-5666-account-create-update-cbw62"] Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.158873 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c8f6794-a2c1-4d54-a048-71db0a14213e-operator-scripts\") pod \"placement-5666-account-create-update-zr44j\" (UID: \"9c8f6794-a2c1-4d54-a048-71db0a14213e\") " 
pod="openstack/placement-5666-account-create-update-zr44j" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.158929 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfpxg\" (UniqueName: \"kubernetes.io/projected/9c8f6794-a2c1-4d54-a048-71db0a14213e-kube-api-access-dfpxg\") pod \"placement-5666-account-create-update-zr44j\" (UID: \"9c8f6794-a2c1-4d54-a048-71db0a14213e\") " pod="openstack/placement-5666-account-create-update-zr44j" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.159870 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-84b866898f-5xs7l"] Jan 30 13:28:11 crc kubenswrapper[5039]: E0130 13:28:11.160232 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="268ed38d-d02d-4539-be5c-f461fde5d02b" containerName="openstackclient" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.160255 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="268ed38d-d02d-4539-be5c-f461fde5d02b" containerName="openstackclient" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.160459 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="268ed38d-d02d-4539-be5c-f461fde5d02b" containerName="openstackclient" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.160722 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c8f6794-a2c1-4d54-a048-71db0a14213e-operator-scripts\") pod \"placement-5666-account-create-update-zr44j\" (UID: \"9c8f6794-a2c1-4d54-a048-71db0a14213e\") " pod="openstack/placement-5666-account-create-update-zr44j" Jan 30 13:28:11 crc kubenswrapper[5039]: E0130 13:28:11.160990 5039 projected.go:263] Couldn't get secret openstack/swift-conf: secret "swift-conf" not found Jan 30 13:28:11 crc kubenswrapper[5039]: E0130 13:28:11.164482 5039 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-proxy-757b86cf47-brmgg: secret "swift-conf" not found Jan 30 13:28:11 crc kubenswrapper[5039]: E0130 13:28:11.164538 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/157fc077-2a87-4a57-b9a1-728b9acba2a1-etc-swift podName:157fc077-2a87-4a57-b9a1-728b9acba2a1 nodeName:}" failed. No retries permitted until 2026-01-30 13:28:11.664521585 +0000 UTC m=+1456.325202812 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/157fc077-2a87-4a57-b9a1-728b9acba2a1-etc-swift") pod "swift-proxy-757b86cf47-brmgg" (UID: "157fc077-2a87-4a57-b9a1-728b9acba2a1") : secret "swift-conf" not found Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.168266 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-84b866898f-5xs7l" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.185073 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-b755c4586-qglmf"] Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.186751 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-b755c4586-qglmf" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.267211 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-q9wmm"] Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.268362 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-84b866898f-5xs7l"] Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.268377 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-b755c4586-qglmf"] Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.268444 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-q9wmm" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.271130 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfpxg\" (UniqueName: \"kubernetes.io/projected/9c8f6794-a2c1-4d54-a048-71db0a14213e-kube-api-access-dfpxg\") pod \"placement-5666-account-create-update-zr44j\" (UID: \"9c8f6794-a2c1-4d54-a048-71db0a14213e\") " pod="openstack/placement-5666-account-create-update-zr44j" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.271310 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.346049 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.346303 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="c29afae4-9445-4472-b93b-5a111a886b9a" containerName="cinder-api-log" containerID="cri-o://cbd478b60e8a62c03000eca9bac6af85c631c4b4d8428ddc09f53baeaa9ca2e9" gracePeriod=30 Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.346689 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="c29afae4-9445-4472-b93b-5a111a886b9a" containerName="cinder-api" containerID="cri-o://46c7c1dd8a4c8df99e1dd7edf28c41b4137267eeafa3248a2c0d8c73a663531a" gracePeriod=30 Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.356463 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-q9wmm"] Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.371398 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/749976f6-833a-4563-992a-f639cb1552e0-config-data-custom\") pod \"barbican-keystone-listener-b755c4586-qglmf\" (UID: \"749976f6-833a-4563-992a-f639cb1552e0\") " pod="openstack/barbican-keystone-listener-b755c4586-qglmf" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.371432 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcd8c24d-b3db-41a0-ac70-d13cd3f2d663-config-data\") pod \"barbican-worker-84b866898f-5xs7l\" (UID: \"fcd8c24d-b3db-41a0-ac70-d13cd3f2d663\") " pod="openstack/barbican-worker-84b866898f-5xs7l" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.371450 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7tkw\" (UniqueName: \"kubernetes.io/projected/749976f6-833a-4563-992a-f639cb1552e0-kube-api-access-j7tkw\") pod 
\"barbican-keystone-listener-b755c4586-qglmf\" (UID: \"749976f6-833a-4563-992a-f639cb1552e0\") " pod="openstack/barbican-keystone-listener-b755c4586-qglmf" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.371502 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/749976f6-833a-4563-992a-f639cb1552e0-logs\") pod \"barbican-keystone-listener-b755c4586-qglmf\" (UID: \"749976f6-833a-4563-992a-f639cb1552e0\") " pod="openstack/barbican-keystone-listener-b755c4586-qglmf" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.371521 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fcd8c24d-b3db-41a0-ac70-d13cd3f2d663-config-data-custom\") pod \"barbican-worker-84b866898f-5xs7l\" (UID: \"fcd8c24d-b3db-41a0-ac70-d13cd3f2d663\") " pod="openstack/barbican-worker-84b866898f-5xs7l" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.371543 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fcd8c24d-b3db-41a0-ac70-d13cd3f2d663-logs\") pod \"barbican-worker-84b866898f-5xs7l\" (UID: \"fcd8c24d-b3db-41a0-ac70-d13cd3f2d663\") " pod="openstack/barbican-worker-84b866898f-5xs7l" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.371561 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8kp5\" (UniqueName: \"kubernetes.io/projected/fc88f91b-e82d-4937-ad42-d94c3d464b55-kube-api-access-t8kp5\") pod \"root-account-create-update-q9wmm\" (UID: \"fc88f91b-e82d-4937-ad42-d94c3d464b55\") " pod="openstack/root-account-create-update-q9wmm" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.371580 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcd8c24d-b3db-41a0-ac70-d13cd3f2d663-combined-ca-bundle\") pod \"barbican-worker-84b866898f-5xs7l\" (UID: \"fcd8c24d-b3db-41a0-ac70-d13cd3f2d663\") " pod="openstack/barbican-worker-84b866898f-5xs7l" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.371628 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc88f91b-e82d-4937-ad42-d94c3d464b55-operator-scripts\") pod \"root-account-create-update-q9wmm\" (UID: \"fc88f91b-e82d-4937-ad42-d94c3d464b55\") " pod="openstack/root-account-create-update-q9wmm" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.371645 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/749976f6-833a-4563-992a-f639cb1552e0-config-data\") pod \"barbican-keystone-listener-b755c4586-qglmf\" (UID: \"749976f6-833a-4563-992a-f639cb1552e0\") " pod="openstack/barbican-keystone-listener-b755c4586-qglmf" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.371724 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/749976f6-833a-4563-992a-f639cb1552e0-combined-ca-bundle\") pod \"barbican-keystone-listener-b755c4586-qglmf\" (UID: \"749976f6-833a-4563-992a-f639cb1552e0\") " pod="openstack/barbican-keystone-listener-b755c4586-qglmf" Jan 30 13:28:11 crc 
kubenswrapper[5039]: I0130 13:28:11.371749 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2dx2\" (UniqueName: \"kubernetes.io/projected/fcd8c24d-b3db-41a0-ac70-d13cd3f2d663-kube-api-access-d2dx2\") pod \"barbican-worker-84b866898f-5xs7l\" (UID: \"fcd8c24d-b3db-41a0-ac70-d13cd3f2d663\") " pod="openstack/barbican-worker-84b866898f-5xs7l" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.383100 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-cflr2"] Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.415927 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-cflr2"] Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.445505 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-286b-account-create-update-cg7w7"] Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.474537 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/749976f6-833a-4563-992a-f639cb1552e0-config-data-custom\") pod \"barbican-keystone-listener-b755c4586-qglmf\" (UID: \"749976f6-833a-4563-992a-f639cb1552e0\") " pod="openstack/barbican-keystone-listener-b755c4586-qglmf" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.474588 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcd8c24d-b3db-41a0-ac70-d13cd3f2d663-config-data\") pod \"barbican-worker-84b866898f-5xs7l\" (UID: \"fcd8c24d-b3db-41a0-ac70-d13cd3f2d663\") " pod="openstack/barbican-worker-84b866898f-5xs7l" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.474609 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7tkw\" (UniqueName: \"kubernetes.io/projected/749976f6-833a-4563-992a-f639cb1552e0-kube-api-access-j7tkw\") pod \"barbican-keystone-listener-b755c4586-qglmf\" (UID: \"749976f6-833a-4563-992a-f639cb1552e0\") " pod="openstack/barbican-keystone-listener-b755c4586-qglmf" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.474672 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/749976f6-833a-4563-992a-f639cb1552e0-logs\") pod \"barbican-keystone-listener-b755c4586-qglmf\" (UID: \"749976f6-833a-4563-992a-f639cb1552e0\") " pod="openstack/barbican-keystone-listener-b755c4586-qglmf" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.474692 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fcd8c24d-b3db-41a0-ac70-d13cd3f2d663-config-data-custom\") pod \"barbican-worker-84b866898f-5xs7l\" (UID: \"fcd8c24d-b3db-41a0-ac70-d13cd3f2d663\") " pod="openstack/barbican-worker-84b866898f-5xs7l" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.474717 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fcd8c24d-b3db-41a0-ac70-d13cd3f2d663-logs\") pod \"barbican-worker-84b866898f-5xs7l\" (UID: \"fcd8c24d-b3db-41a0-ac70-d13cd3f2d663\") " pod="openstack/barbican-worker-84b866898f-5xs7l" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.474740 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8kp5\" (UniqueName: 
\"kubernetes.io/projected/fc88f91b-e82d-4937-ad42-d94c3d464b55-kube-api-access-t8kp5\") pod \"root-account-create-update-q9wmm\" (UID: \"fc88f91b-e82d-4937-ad42-d94c3d464b55\") " pod="openstack/root-account-create-update-q9wmm" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.474782 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcd8c24d-b3db-41a0-ac70-d13cd3f2d663-combined-ca-bundle\") pod \"barbican-worker-84b866898f-5xs7l\" (UID: \"fcd8c24d-b3db-41a0-ac70-d13cd3f2d663\") " pod="openstack/barbican-worker-84b866898f-5xs7l" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.475452 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc88f91b-e82d-4937-ad42-d94c3d464b55-operator-scripts\") pod \"root-account-create-update-q9wmm\" (UID: \"fc88f91b-e82d-4937-ad42-d94c3d464b55\") " pod="openstack/root-account-create-update-q9wmm" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.475510 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/749976f6-833a-4563-992a-f639cb1552e0-config-data\") pod \"barbican-keystone-listener-b755c4586-qglmf\" (UID: \"749976f6-833a-4563-992a-f639cb1552e0\") " pod="openstack/barbican-keystone-listener-b755c4586-qglmf" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.475548 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/749976f6-833a-4563-992a-f639cb1552e0-combined-ca-bundle\") pod \"barbican-keystone-listener-b755c4586-qglmf\" (UID: \"749976f6-833a-4563-992a-f639cb1552e0\") " pod="openstack/barbican-keystone-listener-b755c4586-qglmf" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.475592 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2dx2\" (UniqueName: \"kubernetes.io/projected/fcd8c24d-b3db-41a0-ac70-d13cd3f2d663-kube-api-access-d2dx2\") pod \"barbican-worker-84b866898f-5xs7l\" (UID: \"fcd8c24d-b3db-41a0-ac70-d13cd3f2d663\") " pod="openstack/barbican-worker-84b866898f-5xs7l" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.476631 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc88f91b-e82d-4937-ad42-d94c3d464b55-operator-scripts\") pod \"root-account-create-update-q9wmm\" (UID: \"fc88f91b-e82d-4937-ad42-d94c3d464b55\") " pod="openstack/root-account-create-update-q9wmm" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.478959 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/749976f6-833a-4563-992a-f639cb1552e0-logs\") pod \"barbican-keystone-listener-b755c4586-qglmf\" (UID: \"749976f6-833a-4563-992a-f639cb1552e0\") " pod="openstack/barbican-keystone-listener-b755c4586-qglmf" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.483701 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/749976f6-833a-4563-992a-f639cb1552e0-combined-ca-bundle\") pod \"barbican-keystone-listener-b755c4586-qglmf\" (UID: \"749976f6-833a-4563-992a-f639cb1552e0\") " pod="openstack/barbican-keystone-listener-b755c4586-qglmf" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.485166 5039 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fcd8c24d-b3db-41a0-ac70-d13cd3f2d663-logs\") pod \"barbican-worker-84b866898f-5xs7l\" (UID: \"fcd8c24d-b3db-41a0-ac70-d13cd3f2d663\") " pod="openstack/barbican-worker-84b866898f-5xs7l" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.491556 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fcd8c24d-b3db-41a0-ac70-d13cd3f2d663-config-data-custom\") pod \"barbican-worker-84b866898f-5xs7l\" (UID: \"fcd8c24d-b3db-41a0-ac70-d13cd3f2d663\") " pod="openstack/barbican-worker-84b866898f-5xs7l" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.491718 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcd8c24d-b3db-41a0-ac70-d13cd3f2d663-combined-ca-bundle\") pod \"barbican-worker-84b866898f-5xs7l\" (UID: \"fcd8c24d-b3db-41a0-ac70-d13cd3f2d663\") " pod="openstack/barbican-worker-84b866898f-5xs7l" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.505424 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcd8c24d-b3db-41a0-ac70-d13cd3f2d663-config-data\") pod \"barbican-worker-84b866898f-5xs7l\" (UID: \"fcd8c24d-b3db-41a0-ac70-d13cd3f2d663\") " pod="openstack/barbican-worker-84b866898f-5xs7l" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.505745 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/749976f6-833a-4563-992a-f639cb1552e0-config-data-custom\") pod \"barbican-keystone-listener-b755c4586-qglmf\" (UID: \"749976f6-833a-4563-992a-f639cb1552e0\") " pod="openstack/barbican-keystone-listener-b755c4586-qglmf" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.505826 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5666-account-create-update-zr44j" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.514311 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2dx2\" (UniqueName: \"kubernetes.io/projected/fcd8c24d-b3db-41a0-ac70-d13cd3f2d663-kube-api-access-d2dx2\") pod \"barbican-worker-84b866898f-5xs7l\" (UID: \"fcd8c24d-b3db-41a0-ac70-d13cd3f2d663\") " pod="openstack/barbican-worker-84b866898f-5xs7l" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.517778 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-286b-account-create-update-cg7w7"] Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.530461 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/749976f6-833a-4563-992a-f639cb1552e0-config-data\") pod \"barbican-keystone-listener-b755c4586-qglmf\" (UID: \"749976f6-833a-4563-992a-f639cb1552e0\") " pod="openstack/barbican-keystone-listener-b755c4586-qglmf" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.539267 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-286b-account-create-update-dm7tt"] Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.540579 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-286b-account-create-update-dm7tt" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.552851 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7tkw\" (UniqueName: \"kubernetes.io/projected/749976f6-833a-4563-992a-f639cb1552e0-kube-api-access-j7tkw\") pod \"barbican-keystone-listener-b755c4586-qglmf\" (UID: \"749976f6-833a-4563-992a-f639cb1552e0\") " pod="openstack/barbican-keystone-listener-b755c4586-qglmf" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.560678 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-286b-account-create-update-dm7tt"] Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.562378 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.562608 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8kp5\" (UniqueName: \"kubernetes.io/projected/fc88f91b-e82d-4937-ad42-d94c3d464b55-kube-api-access-t8kp5\") pod \"root-account-create-update-q9wmm\" (UID: \"fc88f91b-e82d-4937-ad42-d94c3d464b55\") " pod="openstack/root-account-create-update-q9wmm" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.577352 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71c58c2f-0d3f-4008-8fdd-fcc50307cc31-operator-scripts\") pod \"glance-286b-account-create-update-dm7tt\" (UID: \"71c58c2f-0d3f-4008-8fdd-fcc50307cc31\") " pod="openstack/glance-286b-account-create-update-dm7tt" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.577395 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjkb2\" (UniqueName: \"kubernetes.io/projected/71c58c2f-0d3f-4008-8fdd-fcc50307cc31-kube-api-access-rjkb2\") pod \"glance-286b-account-create-update-dm7tt\" (UID: \"71c58c2f-0d3f-4008-8fdd-fcc50307cc31\") " pod="openstack/glance-286b-account-create-update-dm7tt" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.587553 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.602471 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-84b866898f-5xs7l" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.607159 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7dc966f764-886wt"] Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.608775 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7dc966f764-886wt" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.634685 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-b755c4586-qglmf" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.636267 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7dc966f764-886wt"] Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.660141 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-fae2-account-create-update-hhbtz"] Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.661288 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-fae2-account-create-update-hhbtz" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.664100 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.678939 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-q9wmm" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.680135 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8ed9c2d-3b4a-4202-a2aa-f2e59de5b294-operator-scripts\") pod \"neutron-fae2-account-create-update-hhbtz\" (UID: \"a8ed9c2d-3b4a-4202-a2aa-f2e59de5b294\") " pod="openstack/neutron-fae2-account-create-update-hhbtz" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.680193 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxkr2\" (UniqueName: \"kubernetes.io/projected/a8ed9c2d-3b4a-4202-a2aa-f2e59de5b294-kube-api-access-pxkr2\") pod \"neutron-fae2-account-create-update-hhbtz\" (UID: \"a8ed9c2d-3b4a-4202-a2aa-f2e59de5b294\") " pod="openstack/neutron-fae2-account-create-update-hhbtz" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.680211 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3db29a95-0ed6-4366-8036-388eea4d00b6-combined-ca-bundle\") pod \"barbican-api-7dc966f764-886wt\" (UID: \"3db29a95-0ed6-4366-8036-388eea4d00b6\") " pod="openstack/barbican-api-7dc966f764-886wt" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.680240 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4txlx\" (UniqueName: \"kubernetes.io/projected/3db29a95-0ed6-4366-8036-388eea4d00b6-kube-api-access-4txlx\") pod \"barbican-api-7dc966f764-886wt\" (UID: \"3db29a95-0ed6-4366-8036-388eea4d00b6\") " pod="openstack/barbican-api-7dc966f764-886wt" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.680282 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71c58c2f-0d3f-4008-8fdd-fcc50307cc31-operator-scripts\") pod \"glance-286b-account-create-update-dm7tt\" (UID: \"71c58c2f-0d3f-4008-8fdd-fcc50307cc31\") " pod="openstack/glance-286b-account-create-update-dm7tt" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.680301 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjkb2\" (UniqueName: \"kubernetes.io/projected/71c58c2f-0d3f-4008-8fdd-fcc50307cc31-kube-api-access-rjkb2\") pod \"glance-286b-account-create-update-dm7tt\" (UID: \"71c58c2f-0d3f-4008-8fdd-fcc50307cc31\") " pod="openstack/glance-286b-account-create-update-dm7tt" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.680321 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3db29a95-0ed6-4366-8036-388eea4d00b6-public-tls-certs\") pod \"barbican-api-7dc966f764-886wt\" (UID: \"3db29a95-0ed6-4366-8036-388eea4d00b6\") " pod="openstack/barbican-api-7dc966f764-886wt" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.680340 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3db29a95-0ed6-4366-8036-388eea4d00b6-logs\") pod \"barbican-api-7dc966f764-886wt\" (UID: \"3db29a95-0ed6-4366-8036-388eea4d00b6\") " pod="openstack/barbican-api-7dc966f764-886wt" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.680393 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3db29a95-0ed6-4366-8036-388eea4d00b6-config-data\") pod \"barbican-api-7dc966f764-886wt\" (UID: \"3db29a95-0ed6-4366-8036-388eea4d00b6\") " pod="openstack/barbican-api-7dc966f764-886wt" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.680419 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3db29a95-0ed6-4366-8036-388eea4d00b6-internal-tls-certs\") pod \"barbican-api-7dc966f764-886wt\" (UID: \"3db29a95-0ed6-4366-8036-388eea4d00b6\") " pod="openstack/barbican-api-7dc966f764-886wt" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.680437 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3db29a95-0ed6-4366-8036-388eea4d00b6-config-data-custom\") pod \"barbican-api-7dc966f764-886wt\" (UID: \"3db29a95-0ed6-4366-8036-388eea4d00b6\") " pod="openstack/barbican-api-7dc966f764-886wt" Jan 30 13:28:11 crc kubenswrapper[5039]: E0130 13:28:11.680959 5039 projected.go:263] Couldn't get secret openstack/swift-conf: secret "swift-conf" not found Jan 30 13:28:11 crc kubenswrapper[5039]: E0130 13:28:11.680987 5039 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-proxy-757b86cf47-brmgg: secret "swift-conf" not found Jan 30 13:28:11 crc kubenswrapper[5039]: E0130 13:28:11.681043 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/157fc077-2a87-4a57-b9a1-728b9acba2a1-etc-swift podName:157fc077-2a87-4a57-b9a1-728b9acba2a1 nodeName:}" failed. No retries permitted until 2026-01-30 13:28:12.68102505 +0000 UTC m=+1457.341706267 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/157fc077-2a87-4a57-b9a1-728b9acba2a1-etc-swift") pod "swift-proxy-757b86cf47-brmgg" (UID: "157fc077-2a87-4a57-b9a1-728b9acba2a1") : secret "swift-conf" not found Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.681118 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71c58c2f-0d3f-4008-8fdd-fcc50307cc31-operator-scripts\") pod \"glance-286b-account-create-update-dm7tt\" (UID: \"71c58c2f-0d3f-4008-8fdd-fcc50307cc31\") " pod="openstack/glance-286b-account-create-update-dm7tt" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.683529 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-fae2-account-create-update-hhbtz"] Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.698477 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-4e5c-account-create-update-q94vs"] Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.699648 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-4e5c-account-create-update-q94vs" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.737644 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjkb2\" (UniqueName: \"kubernetes.io/projected/71c58c2f-0d3f-4008-8fdd-fcc50307cc31-kube-api-access-rjkb2\") pod \"glance-286b-account-create-update-dm7tt\" (UID: \"71c58c2f-0d3f-4008-8fdd-fcc50307cc31\") " pod="openstack/glance-286b-account-create-update-dm7tt" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.737928 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.742361 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-6646-account-create-update-rjc76"] Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.743817 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-6646-account-create-update-rjc76" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.763691 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-6646-account-create-update-rjc76"] Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.773404 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-r4p7m" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.776513 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-4e5c-account-create-update-q94vs"] Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.779289 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.781672 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3db29a95-0ed6-4366-8036-388eea4d00b6-public-tls-certs\") pod \"barbican-api-7dc966f764-886wt\" (UID: \"3db29a95-0ed6-4366-8036-388eea4d00b6\") " pod="openstack/barbican-api-7dc966f764-886wt" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.781700 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3db29a95-0ed6-4366-8036-388eea4d00b6-logs\") pod \"barbican-api-7dc966f764-886wt\" (UID: \"3db29a95-0ed6-4366-8036-388eea4d00b6\") " pod="openstack/barbican-api-7dc966f764-886wt" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.781776 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3db29a95-0ed6-4366-8036-388eea4d00b6-config-data\") pod \"barbican-api-7dc966f764-886wt\" (UID: \"3db29a95-0ed6-4366-8036-388eea4d00b6\") " pod="openstack/barbican-api-7dc966f764-886wt" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.781821 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3db29a95-0ed6-4366-8036-388eea4d00b6-internal-tls-certs\") pod \"barbican-api-7dc966f764-886wt\" (UID: \"3db29a95-0ed6-4366-8036-388eea4d00b6\") " pod="openstack/barbican-api-7dc966f764-886wt" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.781840 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3db29a95-0ed6-4366-8036-388eea4d00b6-config-data-custom\") pod 
\"barbican-api-7dc966f764-886wt\" (UID: \"3db29a95-0ed6-4366-8036-388eea4d00b6\") " pod="openstack/barbican-api-7dc966f764-886wt" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.781869 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8ed9c2d-3b4a-4202-a2aa-f2e59de5b294-operator-scripts\") pod \"neutron-fae2-account-create-update-hhbtz\" (UID: \"a8ed9c2d-3b4a-4202-a2aa-f2e59de5b294\") " pod="openstack/neutron-fae2-account-create-update-hhbtz" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.781925 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxkr2\" (UniqueName: \"kubernetes.io/projected/a8ed9c2d-3b4a-4202-a2aa-f2e59de5b294-kube-api-access-pxkr2\") pod \"neutron-fae2-account-create-update-hhbtz\" (UID: \"a8ed9c2d-3b4a-4202-a2aa-f2e59de5b294\") " pod="openstack/neutron-fae2-account-create-update-hhbtz" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.781943 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3db29a95-0ed6-4366-8036-388eea4d00b6-combined-ca-bundle\") pod \"barbican-api-7dc966f764-886wt\" (UID: \"3db29a95-0ed6-4366-8036-388eea4d00b6\") " pod="openstack/barbican-api-7dc966f764-886wt" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.781982 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4txlx\" (UniqueName: \"kubernetes.io/projected/3db29a95-0ed6-4366-8036-388eea4d00b6-kube-api-access-4txlx\") pod \"barbican-api-7dc966f764-886wt\" (UID: \"3db29a95-0ed6-4366-8036-388eea4d00b6\") " pod="openstack/barbican-api-7dc966f764-886wt" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.848787 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3db29a95-0ed6-4366-8036-388eea4d00b6-logs\") pod \"barbican-api-7dc966f764-886wt\" (UID: \"3db29a95-0ed6-4366-8036-388eea4d00b6\") " pod="openstack/barbican-api-7dc966f764-886wt" Jan 30 13:28:11 crc kubenswrapper[5039]: E0130 13:28:11.850183 5039 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 30 13:28:11 crc kubenswrapper[5039]: E0130 13:28:11.850273 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/31674257-f143-40ab-97b9-dbf3153277c3-config-data podName:31674257-f143-40ab-97b9-dbf3153277c3 nodeName:}" failed. No retries permitted until 2026-01-30 13:28:12.350245093 +0000 UTC m=+1457.010926320 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/31674257-f143-40ab-97b9-dbf3153277c3-config-data") pod "rabbitmq-server-0" (UID: "31674257-f143-40ab-97b9-dbf3153277c3") : configmap "rabbitmq-config-data" not found Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.850760 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.851165 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="bc1a05aa-7803-43a1-9525-fd135af4323a" containerName="openstack-network-exporter" containerID="cri-o://4e3e47142906bded5aa0ccf1b7bb8bdc30cca633a277d81355ccb82c40518808" gracePeriod=300 Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.853852 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8ed9c2d-3b4a-4202-a2aa-f2e59de5b294-operator-scripts\") pod \"neutron-fae2-account-create-update-hhbtz\" (UID: \"a8ed9c2d-3b4a-4202-a2aa-f2e59de5b294\") " pod="openstack/neutron-fae2-account-create-update-hhbtz" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.860860 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4txlx\" (UniqueName: \"kubernetes.io/projected/3db29a95-0ed6-4366-8036-388eea4d00b6-kube-api-access-4txlx\") pod \"barbican-api-7dc966f764-886wt\" (UID: \"3db29a95-0ed6-4366-8036-388eea4d00b6\") " pod="openstack/barbican-api-7dc966f764-886wt" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.885174 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txt7x\" (UniqueName: \"kubernetes.io/projected/860591fe-67b6-4a2e-b8f1-29556c8ef320-kube-api-access-txt7x\") pod \"barbican-6646-account-create-update-rjc76\" (UID: \"860591fe-67b6-4a2e-b8f1-29556c8ef320\") " pod="openstack/barbican-6646-account-create-update-rjc76" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.885397 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f26bcd91-af44-4f1f-afca-6db6c3fe5362-operator-scripts\") pod \"nova-api-4e5c-account-create-update-q94vs\" (UID: \"f26bcd91-af44-4f1f-afca-6db6c3fe5362\") " pod="openstack/nova-api-4e5c-account-create-update-q94vs" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.885449 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/860591fe-67b6-4a2e-b8f1-29556c8ef320-operator-scripts\") pod \"barbican-6646-account-create-update-rjc76\" (UID: \"860591fe-67b6-4a2e-b8f1-29556c8ef320\") " pod="openstack/barbican-6646-account-create-update-rjc76" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.885480 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtxnx\" (UniqueName: \"kubernetes.io/projected/f26bcd91-af44-4f1f-afca-6db6c3fe5362-kube-api-access-vtxnx\") pod \"nova-api-4e5c-account-create-update-q94vs\" (UID: \"f26bcd91-af44-4f1f-afca-6db6c3fe5362\") " pod="openstack/nova-api-4e5c-account-create-update-q94vs" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.892948 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-fae2-account-create-update-l2z9v"] Jan 30 13:28:11 crc 
kubenswrapper[5039]: I0130 13:28:11.929692 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3db29a95-0ed6-4366-8036-388eea4d00b6-internal-tls-certs\") pod \"barbican-api-7dc966f764-886wt\" (UID: \"3db29a95-0ed6-4366-8036-388eea4d00b6\") " pod="openstack/barbican-api-7dc966f764-886wt" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.929973 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3db29a95-0ed6-4366-8036-388eea4d00b6-config-data-custom\") pod \"barbican-api-7dc966f764-886wt\" (UID: \"3db29a95-0ed6-4366-8036-388eea4d00b6\") " pod="openstack/barbican-api-7dc966f764-886wt" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.930732 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3db29a95-0ed6-4366-8036-388eea4d00b6-public-tls-certs\") pod \"barbican-api-7dc966f764-886wt\" (UID: \"3db29a95-0ed6-4366-8036-388eea4d00b6\") " pod="openstack/barbican-api-7dc966f764-886wt" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.931226 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxkr2\" (UniqueName: \"kubernetes.io/projected/a8ed9c2d-3b4a-4202-a2aa-f2e59de5b294-kube-api-access-pxkr2\") pod \"neutron-fae2-account-create-update-hhbtz\" (UID: \"a8ed9c2d-3b4a-4202-a2aa-f2e59de5b294\") " pod="openstack/neutron-fae2-account-create-update-hhbtz" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.931654 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3db29a95-0ed6-4366-8036-388eea4d00b6-config-data\") pod \"barbican-api-7dc966f764-886wt\" (UID: \"3db29a95-0ed6-4366-8036-388eea4d00b6\") " pod="openstack/barbican-api-7dc966f764-886wt" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.941686 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3db29a95-0ed6-4366-8036-388eea4d00b6-combined-ca-bundle\") pod \"barbican-api-7dc966f764-886wt\" (UID: \"3db29a95-0ed6-4366-8036-388eea4d00b6\") " pod="openstack/barbican-api-7dc966f764-886wt" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.962208 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-286b-account-create-update-dm7tt" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.971076 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-fae2-account-create-update-l2z9v"] Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.971829 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7dc966f764-886wt" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.990304 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/860591fe-67b6-4a2e-b8f1-29556c8ef320-operator-scripts\") pod \"barbican-6646-account-create-update-rjc76\" (UID: \"860591fe-67b6-4a2e-b8f1-29556c8ef320\") " pod="openstack/barbican-6646-account-create-update-rjc76" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.990385 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtxnx\" (UniqueName: \"kubernetes.io/projected/f26bcd91-af44-4f1f-afca-6db6c3fe5362-kube-api-access-vtxnx\") pod \"nova-api-4e5c-account-create-update-q94vs\" (UID: \"f26bcd91-af44-4f1f-afca-6db6c3fe5362\") " pod="openstack/nova-api-4e5c-account-create-update-q94vs" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.990455 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txt7x\" (UniqueName: \"kubernetes.io/projected/860591fe-67b6-4a2e-b8f1-29556c8ef320-kube-api-access-txt7x\") pod \"barbican-6646-account-create-update-rjc76\" (UID: \"860591fe-67b6-4a2e-b8f1-29556c8ef320\") " pod="openstack/barbican-6646-account-create-update-rjc76" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.990665 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f26bcd91-af44-4f1f-afca-6db6c3fe5362-operator-scripts\") pod \"nova-api-4e5c-account-create-update-q94vs\" (UID: \"f26bcd91-af44-4f1f-afca-6db6c3fe5362\") " pod="openstack/nova-api-4e5c-account-create-update-q94vs" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.991653 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f26bcd91-af44-4f1f-afca-6db6c3fe5362-operator-scripts\") pod \"nova-api-4e5c-account-create-update-q94vs\" (UID: \"f26bcd91-af44-4f1f-afca-6db6c3fe5362\") " pod="openstack/nova-api-4e5c-account-create-update-q94vs" Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.017475 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/860591fe-67b6-4a2e-b8f1-29556c8ef320-operator-scripts\") pod \"barbican-6646-account-create-update-rjc76\" (UID: \"860591fe-67b6-4a2e-b8f1-29556c8ef320\") " pod="openstack/barbican-6646-account-create-update-rjc76" Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.046355 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-fae2-account-create-update-hhbtz" Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.080285 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-r4p7m" Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.087811 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txt7x\" (UniqueName: \"kubernetes.io/projected/860591fe-67b6-4a2e-b8f1-29556c8ef320-kube-api-access-txt7x\") pod \"barbican-6646-account-create-update-rjc76\" (UID: \"860591fe-67b6-4a2e-b8f1-29556c8ef320\") " pod="openstack/barbican-6646-account-create-update-rjc76" Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.110903 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtxnx\" (UniqueName: \"kubernetes.io/projected/f26bcd91-af44-4f1f-afca-6db6c3fe5362-kube-api-access-vtxnx\") pod \"nova-api-4e5c-account-create-update-q94vs\" (UID: \"f26bcd91-af44-4f1f-afca-6db6c3fe5362\") " pod="openstack/nova-api-4e5c-account-create-update-q94vs" Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.175371 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-4e5c-account-create-update-q94vs" Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.233474 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-6646-account-create-update-rjc76" Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.259906 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19f1cc0b-fa31-4b4f-b15d-24ea13171a7f" path="/var/lib/kubelet/pods/19f1cc0b-fa31-4b4f-b15d-24ea13171a7f/volumes" Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.260609 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33a20c1e-b7d7-4f94-b313-58229c1c9d4e" path="/var/lib/kubelet/pods/33a20c1e-b7d7-4f94-b313-58229c1c9d4e/volumes" Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.261160 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55556e4d-2818-46de-b888-7a5be04f2a5c" path="/var/lib/kubelet/pods/55556e4d-2818-46de-b888-7a5be04f2a5c/volumes" Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.261910 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0a3587a-d7dd-4007-aff8-acfcd399496f" path="/var/lib/kubelet/pods/c0a3587a-d7dd-4007-aff8-acfcd399496f/volumes" Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.265063 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-0596-account-create-update-2qxp2"] Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.277925 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-0596-account-create-update-2qxp2"] Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.277961 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-6646-account-create-update-wpkcq"] Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.277975 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-4e5c-account-create-update-r4vnt"] Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.278080 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-0596-account-create-update-2qxp2" Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.285154 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-4e5c-account-create-update-r4vnt"] Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.301388 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.309472 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-6646-account-create-update-wpkcq"] Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.374752 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-0596-account-create-update-nklv5"] Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.410572 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-w2l48"] Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.411910 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc51df5b-e54d-457e-af37-671db12ee0bd-operator-scripts\") pod \"cinder-0596-account-create-update-2qxp2\" (UID: \"bc51df5b-e54d-457e-af37-671db12ee0bd\") " pod="openstack/cinder-0596-account-create-update-2qxp2" Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.411998 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz9q4\" (UniqueName: \"kubernetes.io/projected/bc51df5b-e54d-457e-af37-671db12ee0bd-kube-api-access-bz9q4\") pod \"cinder-0596-account-create-update-2qxp2\" (UID: \"bc51df5b-e54d-457e-af37-671db12ee0bd\") " pod="openstack/cinder-0596-account-create-update-2qxp2" Jan 30 13:28:12 crc kubenswrapper[5039]: E0130 13:28:12.412269 5039 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 30 13:28:12 crc kubenswrapper[5039]: E0130 13:28:12.434289 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/31674257-f143-40ab-97b9-dbf3153277c3-config-data podName:31674257-f143-40ab-97b9-dbf3153277c3 nodeName:}" failed. No retries permitted until 2026-01-30 13:28:13.434261537 +0000 UTC m=+1458.094942764 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/31674257-f143-40ab-97b9-dbf3153277c3-config-data") pod "rabbitmq-server-0" (UID: "31674257-f143-40ab-97b9-dbf3153277c3") : configmap "rabbitmq-config-data" not found Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.432666 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-w2l48"] Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.528715 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-0596-account-create-update-nklv5"] Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.552041 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc51df5b-e54d-457e-af37-671db12ee0bd-operator-scripts\") pod \"cinder-0596-account-create-update-2qxp2\" (UID: \"bc51df5b-e54d-457e-af37-671db12ee0bd\") " pod="openstack/cinder-0596-account-create-update-2qxp2" Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.552134 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bz9q4\" (UniqueName: \"kubernetes.io/projected/bc51df5b-e54d-457e-af37-671db12ee0bd-kube-api-access-bz9q4\") pod \"cinder-0596-account-create-update-2qxp2\" (UID: \"bc51df5b-e54d-457e-af37-671db12ee0bd\") " pod="openstack/cinder-0596-account-create-update-2qxp2" Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.553202 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc51df5b-e54d-457e-af37-671db12ee0bd-operator-scripts\") pod \"cinder-0596-account-create-update-2qxp2\" (UID: \"bc51df5b-e54d-457e-af37-671db12ee0bd\") " pod="openstack/cinder-0596-account-create-update-2qxp2" Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.564659 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-hpk2s"] Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.596589 5039 generic.go:334] "Generic (PLEG): container finished" podID="c29afae4-9445-4472-b93b-5a111a886b9a" containerID="cbd478b60e8a62c03000eca9bac6af85c631c4b4d8428ddc09f53baeaa9ca2e9" exitCode=143 Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.596751 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c29afae4-9445-4472-b93b-5a111a886b9a","Type":"ContainerDied","Data":"cbd478b60e8a62c03000eca9bac6af85c631c4b4d8428ddc09f53baeaa9ca2e9"} Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.642399 5039 generic.go:334] "Generic (PLEG): container finished" podID="bc1a05aa-7803-43a1-9525-fd135af4323a" containerID="4e3e47142906bded5aa0ccf1b7bb8bdc30cca633a277d81355ccb82c40518808" exitCode=2 Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.643004 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"bc1a05aa-7803-43a1-9525-fd135af4323a","Type":"ContainerDied","Data":"4e3e47142906bded5aa0ccf1b7bb8bdc30cca633a277d81355ccb82c40518808"} Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.648423 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.648717 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="1c7913a5-4818-4edd-a390-61d79c64a30b" containerName="ovn-northd" 
containerID="cri-o://2c579add236caed3aa75293bd0e40f1d3f1911a4d976e4d9781070a770b956ca" gracePeriod=30 Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.649053 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="1c7913a5-4818-4edd-a390-61d79c64a30b" containerName="openstack-network-exporter" containerID="cri-o://10852e51d9199bf290d28ef284e425f741ad8888a4c93170c5de8cb6b7587e31" gracePeriod=30 Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.656596 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-d4ba-account-create-update-kd24m"] Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.657054 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bz9q4\" (UniqueName: \"kubernetes.io/projected/bc51df5b-e54d-457e-af37-671db12ee0bd-kube-api-access-bz9q4\") pod \"cinder-0596-account-create-update-2qxp2\" (UID: \"bc51df5b-e54d-457e-af37-671db12ee0bd\") " pod="openstack/cinder-0596-account-create-update-2qxp2" Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.678205 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-67cb-account-create-update-rrs4s"] Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.690602 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-hpk2s"] Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.703254 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-67cb-account-create-update-rrs4s"] Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.719843 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-d4ba-account-create-update-kd24m"] Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.731526 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-9z97g"] Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.745263 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-9z97g"] Jan 30 13:28:12 crc kubenswrapper[5039]: E0130 13:28:12.762204 5039 projected.go:263] Couldn't get secret openstack/swift-conf: secret "swift-conf" not found Jan 30 13:28:12 crc kubenswrapper[5039]: E0130 13:28:12.762233 5039 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 13:28:12 crc kubenswrapper[5039]: E0130 13:28:12.762245 5039 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-proxy-757b86cf47-brmgg: [secret "swift-conf" not found, configmap "swift-ring-files" not found] Jan 30 13:28:12 crc kubenswrapper[5039]: E0130 13:28:12.762281 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/157fc077-2a87-4a57-b9a1-728b9acba2a1-etc-swift podName:157fc077-2a87-4a57-b9a1-728b9acba2a1 nodeName:}" failed. No retries permitted until 2026-01-30 13:28:14.762267353 +0000 UTC m=+1459.422948580 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/157fc077-2a87-4a57-b9a1-728b9acba2a1-etc-swift") pod "swift-proxy-757b86cf47-brmgg" (UID: "157fc077-2a87-4a57-b9a1-728b9acba2a1") : [secret "swift-conf" not found, configmap "swift-ring-files" not found] Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.770680 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-q8gx7"] Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.782751 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-sqvrc"] Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.808406 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r4p7m"] Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.817678 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-z6nkm"] Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.825851 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-t7hh5"] Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.826094 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-metrics-t7hh5" podUID="f66d95ec-ff37-4cc2-a076-e53cc7713582" containerName="openstack-network-exporter" containerID="cri-o://c834681d05c14e7ff690cbb1acfa640e617aaf24a5dbda9da270fdba7ac94fdb" gracePeriod=30 Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.838737 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-q8gx7"] Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.851886 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.852287 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="a4f02ddf-62c8-49b8-8e86-d6b87c61172b" containerName="openstack-network-exporter" containerID="cri-o://cdcdb331d3c60bbb406b32aef476ab5726a7b53b8ae0c9a927450b27c6dd5c71" gracePeriod=300 Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.855908 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="bc1a05aa-7803-43a1-9525-fd135af4323a" containerName="ovsdbserver-nb" containerID="cri-o://b98aab825421aef11d5e89ff275916e782fc1065fcfef1cf798164f33a0d8aeb" gracePeriod=299 Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.877114 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-0596-account-create-update-2qxp2" Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.888899 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-6fssn"] Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.961785 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-sngvh"] Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.974145 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-sngvh"] Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.985434 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-c2z79"] Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.995937 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-6fssn"] Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.043115 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-x4sxn"] Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.052077 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="a4f02ddf-62c8-49b8-8e86-d6b87c61172b" containerName="ovsdbserver-sb" containerID="cri-o://4a75aaf8ae30feba231405992fcbc38c506ed8999f2c135d64d71b1e43a1b981" gracePeriod=300 Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.077533 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-68f47564b6-tbx7d"] Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.078162 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-68f47564b6-tbx7d" podUID="498ddd50-96b8-491c-92e9-8c98bc7fa123" containerName="placement-log" containerID="cri-o://704e147f78336eb631ac3800ed217ffcbe20db123d823ef0e1719ac12126d745" gracePeriod=30 Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.080040 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-68f47564b6-tbx7d" podUID="498ddd50-96b8-491c-92e9-8c98bc7fa123" containerName="placement-api" containerID="cri-o://1da688d2a2bc28ab6de19b1657530aefb8ba12959416725f5817672407aec6f7" gracePeriod=30 Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.108181 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-c2z79"] Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.132215 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-x4sxn"] Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.144108 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.153806 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-t2n6t"] Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.154131 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cd5cbd7b9-t2n6t" podUID="3f702130-7802-4f11-96ff-b51a7edf7740" containerName="dnsmasq-dns" containerID="cri-o://73992dc376899a4ce7d89189a450ce8eda00367cf2dc729e0d07d2f986e8c831" gracePeriod=10 Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.185148 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.185901 5039 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/swift-storage-0" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="account-server" containerID="cri-o://ba202a942609a01368fff886e42c540f33bb7959b6b854acea880eea7d0585f3" gracePeriod=30
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.186324 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="object-updater" containerID="cri-o://5ba1fa28c490036b77df42fd557a82a136b5d4470aacbcf035106a2aa9a5c19c" gracePeriod=30
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.186368 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="object-server" containerID="cri-o://154eaf7906ffca8c1b0afe8de8ea1d908782a67ddbbd3939ea4855866e582d9e" gracePeriod=30
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.186396 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="swift-recon-cron" containerID="cri-o://b33766b9c3d3b33509c3333c9cea033b788bc6b8942e381a00e38516d0deaeb1" gracePeriod=30
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.186396 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="object-replicator" containerID="cri-o://5205854bc586c085d9a8181d38c8a593892643b626180d99562c81611b88b68b" gracePeriod=30
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.186436 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="rsync" containerID="cri-o://f2d984c92bde9d5613eeb38621a8af92136193a55538f05717915d1bde3264df" gracePeriod=30
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.186475 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="object-expirer" containerID="cri-o://15cad4c835a7ea15a16cc7a14b50750d2833b7e260d8bb3166f6679d6cd024bc" gracePeriod=30
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.186334 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="container-updater" containerID="cri-o://eb5df1653f803341d6a4973ea612f45188b265af8c41b3c90d6691d5c611b9c2" gracePeriod=30
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.186528 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="object-auditor" containerID="cri-o://ddfd428ecd993351c674d784439b36da1f4749c251689b43fddc8f90227f4508" gracePeriod=30
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.186540 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="container-auditor" containerID="cri-o://a752a70bb4f53e459731183ec59874ee325b0e767cc385834cb7df89532a1aec" gracePeriod=30
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.186557 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="container-replicator" containerID="cri-o://b0ee602fd935197661ffbde70a60dd36d9924c2f4817add1f894ac9adac66322" gracePeriod=30
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.186570 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="container-server" containerID="cri-o://29f3a517359c4166dbc7caad96c4a4e2cb91f850e2c881a59372b19e9eedcf08" gracePeriod=30
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.186581 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="account-replicator" containerID="cri-o://488e3367a6a8f8bce689530e4343a6e494edfb4a9ae6c3c4d1a46d9f1bf6df2d" gracePeriod=30
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.186530 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="account-reaper" containerID="cri-o://4bf0094e462d7cc7679bbfe7a7bc2c0d4592c1307b816d192d6fc42e092c3617" gracePeriod=30
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.186624 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="account-auditor" containerID="cri-o://fd878f745d4316bd7f334db23529af3d98a35240ec3295969bd07b87d5376409" gracePeriod=30
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.200705 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-75df786d6f-7k65j"]
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.200942 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-75df786d6f-7k65j" podUID="bc1469b7-cba0-47a5-b2cb-02e374f749da" containerName="neutron-api" containerID="cri-o://9d161df965ec21065eefbec6b812cfd89de26b4b92a91f220eaf50e509cc7674" gracePeriod=30
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.201349 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-75df786d6f-7k65j" podUID="bc1469b7-cba0-47a5-b2cb-02e374f749da" containerName="neutron-httpd" containerID="cri-o://a89bb4f19be7f7518ba29b131abd27b114102b0ebb9ed30752ce73702acdfcf2" gracePeriod=30
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.202834 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-rx74m"]
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.208609 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-cd5cbd7b9-t2n6t" podUID="3f702130-7802-4f11-96ff-b51a7edf7740" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.197:5353: connect: connection refused"
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.264103 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-rx74m"]
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.276530 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5666-account-create-update-zr44j"]
Jan 30 13:28:13 crc kubenswrapper[5039]: E0130 13:28:13.287382 5039 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found
Jan 30 13:28:13 crc kubenswrapper[5039]: E0130 13:28:13.287478 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/106954f5-3ea7-4564-8479-407ef02320b7-config-data podName:106954f5-3ea7-4564-8479-407ef02320b7 nodeName:}" failed. No retries permitted until 2026-01-30 13:28:13.787459891 +0000 UTC m=+1458.448141108 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/106954f5-3ea7-4564-8479-407ef02320b7-config-data") pod "rabbitmq-cell1-server-0" (UID: "106954f5-3ea7-4564-8479-407ef02320b7") : configmap "rabbitmq-cell1-config-data" not found
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.298896 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.299181 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="89cd9fbd-ac74-45c9-bdd8-fe3268a9147e" containerName="glance-log" containerID="cri-o://8961bfa40ab4c931a7b9ba045e826229b875555f5526dd828650ba4cce1b720a" gracePeriod=30
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.299683 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="89cd9fbd-ac74-45c9-bdd8-fe3268a9147e" containerName="glance-httpd" containerID="cri-o://c86d1c6db2f7db93b58130cab22d63eb2bc4b467426977a92df6b81dc9e34ac1" gracePeriod=30
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.341722 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.342024 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="75292c04-e484-4def-a16f-2d703409e49e" containerName="glance-log" containerID="cri-o://25d56a857967dbfe850f8386703dbeacd9215dfb3f0bece9d24ab72061de1a36" gracePeriod=30
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.342157 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="75292c04-e484-4def-a16f-2d703409e49e" containerName="glance-httpd" containerID="cri-o://74a546f04020952f012eaaf8e2c1204925de78633cc29e8909d63b15b2d2fa22" gracePeriod=30
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.369498 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-r9q2p"]
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.384136 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-r9q2p"]
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.397721 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.407082 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-286b-account-create-update-dm7tt"]
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.426967 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-jtpkf"]
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.445078 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-jtpkf"]
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.445363 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.445583 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="03ea6fff-3bc2-4830-b1f5-53d20cd2a801" containerName="nova-metadata-log" containerID="cri-o://3e63cef290b9c322a18fac31a7871a3b878e755d7e458a6ae9c29147b528c3fc" gracePeriod=30
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.446038 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="03ea6fff-3bc2-4830-b1f5-53d20cd2a801" containerName="nova-metadata-metadata" containerID="cri-o://ec276d758e8b1629fbc47814ca11f272acbab2233d4e31135f118cd217e481cf" gracePeriod=30
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.462267 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-fae2-account-create-update-hhbtz"]
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.470712 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-8grpr"]
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.478742 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-8grpr"]
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.487920 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-0596-account-create-update-2qxp2"]
Jan 30 13:28:13 crc kubenswrapper[5039]: E0130 13:28:13.502390 5039 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found
Jan 30 13:28:13 crc kubenswrapper[5039]: E0130 13:28:13.502465 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/31674257-f143-40ab-97b9-dbf3153277c3-config-data podName:31674257-f143-40ab-97b9-dbf3153277c3 nodeName:}" failed. No retries permitted until 2026-01-30 13:28:15.50244748 +0000 UTC m=+1460.163128707 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/31674257-f143-40ab-97b9-dbf3153277c3-config-data") pod "rabbitmq-server-0" (UID: "31674257-f143-40ab-97b9-dbf3153277c3") : configmap "rabbitmq-config-data" not found
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.517586 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-lzbm7"]
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.530127 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-lzbm7"]
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.548939 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.549269 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2090e8f7-2d03-4d3e-914b-6672655d35be" containerName="nova-api-log" containerID="cri-o://d11e43f07a403d758ee01061766af01b228378dcc7b6c86d6a066828863d2c31" gracePeriod=30
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.549907 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2090e8f7-2d03-4d3e-914b-6672655d35be" containerName="nova-api-api" containerID="cri-o://5da3b6bf1f3c105594b3fd7fb80dc64462fc055bc8ad723c3ee5ff31777202c5" gracePeriod=30
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.556059 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-dtths"]
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.571802 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-p4jkx"]
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.593323 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-dtths"]
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.627122 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-p4jkx"]
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.640661 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.650875 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-58897c98f4-8gk2m"]
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.651085 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-58897c98f4-8gk2m" podUID="2081f65c-c5b5-4486-bdb3-49acf4f9ae46" containerName="barbican-keystone-listener-log" containerID="cri-o://bdbe03e58233ea3203b5cdcc7425ccca349ed21cb2718b0262b974919bb7bff9" gracePeriod=30
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.651437 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-58897c98f4-8gk2m" podUID="2081f65c-c5b5-4486-bdb3-49acf4f9ae46" containerName="barbican-keystone-listener" containerID="cri-o://b8cc807d266e20c9a223ef3cd6da5c84789370a7b8ae7a8b58a98bf4f2033c9c" gracePeriod=30
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.664239 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-4e5c-account-create-update-q94vs"]
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.674196 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-b755c4586-qglmf"]
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.751944 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="31674257-f143-40ab-97b9-dbf3153277c3" containerName="rabbitmq" containerID="cri-o://7ba97c527dbddf7d5202ce4c016a3cf300e728cbada3ead1b220b90f12e25e20" gracePeriod=604800
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.776102 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-84b866898f-5xs7l"]
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.796636 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_bc1a05aa-7803-43a1-9525-fd135af4323a/ovsdbserver-nb/0.log"
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.796965 5039 generic.go:334] "Generic (PLEG): container finished" podID="bc1a05aa-7803-43a1-9525-fd135af4323a" containerID="b98aab825421aef11d5e89ff275916e782fc1065fcfef1cf798164f33a0d8aeb" exitCode=143
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.797091 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"bc1a05aa-7803-43a1-9525-fd135af4323a","Type":"ContainerDied","Data":"b98aab825421aef11d5e89ff275916e782fc1065fcfef1cf798164f33a0d8aeb"}
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.819124 5039 generic.go:334] "Generic (PLEG): container finished" podID="89cd9fbd-ac74-45c9-bdd8-fe3268a9147e" containerID="8961bfa40ab4c931a7b9ba045e826229b875555f5526dd828650ba4cce1b720a" exitCode=143
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.819189 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e","Type":"ContainerDied","Data":"8961bfa40ab4c931a7b9ba045e826229b875555f5526dd828650ba4cce1b720a"}
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.832947 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="9c2f32a2-792f-4f23-b2a5-fd50a1e1373a" containerName="galera" containerID="cri-o://d3e1de70ee6fccf94c178c436b16b841fb062895d65d5c25af3308a7fa503673" gracePeriod=30
Jan 30 13:28:13 crc kubenswrapper[5039]: E0130 13:28:13.834935 5039 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found
Jan 30 13:28:13 crc kubenswrapper[5039]: E0130 13:28:13.842440 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/106954f5-3ea7-4564-8479-407ef02320b7-config-data podName:106954f5-3ea7-4564-8479-407ef02320b7 nodeName:}" failed. No retries permitted until 2026-01-30 13:28:14.842412537 +0000 UTC m=+1459.503093764 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/106954f5-3ea7-4564-8479-407ef02320b7-config-data") pod "rabbitmq-cell1-server-0" (UID: "106954f5-3ea7-4564-8479-407ef02320b7") : configmap "rabbitmq-cell1-config-data" not found
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.836532 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"75292c04-e484-4def-a16f-2d703409e49e","Type":"ContainerDied","Data":"25d56a857967dbfe850f8386703dbeacd9215dfb3f0bece9d24ab72061de1a36"}
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.842485 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-7df987bf59-vgqrf"]
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.842673 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-7df987bf59-vgqrf" podUID="48be0b7f-4cb1-4c00-851a-7078ed9ccab0" containerName="barbican-worker-log" containerID="cri-o://999630fe82687672ff916af3c657da39f3cbb4c167e3ae06b0d1c3d7c3e75615" gracePeriod=30
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.836475 5039 generic.go:334] "Generic (PLEG): container finished" podID="75292c04-e484-4def-a16f-2d703409e49e" containerID="25d56a857967dbfe850f8386703dbeacd9215dfb3f0bece9d24ab72061de1a36" exitCode=143
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.843101 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-7df987bf59-vgqrf" podUID="48be0b7f-4cb1-4c00-851a-7078ed9ccab0" containerName="barbican-worker" containerID="cri-o://b64200237104355f7f5f1cc6656503847ea902d272ec63a86f5fcc0f5a9a8b06" gracePeriod=30
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.847716 5039 generic.go:334] "Generic (PLEG): container finished" podID="f6a7de18-5bf6-4275-b6db-f19701d07001" containerID="4ced8998271ec1e934a1c34f39c4cc277022e88ff34907d478325bce8a489b7b" exitCode=0
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.847790 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f6a7de18-5bf6-4275-b6db-f19701d07001","Type":"ContainerDied","Data":"4ced8998271ec1e934a1c34f39c4cc277022e88ff34907d478325bce8a489b7b"}
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.849189 5039 generic.go:334] "Generic (PLEG): container finished" podID="3f702130-7802-4f11-96ff-b51a7edf7740" containerID="73992dc376899a4ce7d89189a450ce8eda00367cf2dc729e0d07d2f986e8c831" exitCode=0
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.849230 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-t2n6t" event={"ID":"3f702130-7802-4f11-96ff-b51a7edf7740","Type":"ContainerDied","Data":"73992dc376899a4ce7d89189a450ce8eda00367cf2dc729e0d07d2f986e8c831"}
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.853139 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-t7hh5_f66d95ec-ff37-4cc2-a076-e53cc7713582/openstack-network-exporter/0.log"
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.853168 5039 generic.go:334] "Generic (PLEG): container finished" podID="f66d95ec-ff37-4cc2-a076-e53cc7713582" containerID="c834681d05c14e7ff690cbb1acfa640e617aaf24a5dbda9da270fdba7ac94fdb" exitCode=2
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.853235 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-pptnb"]
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.853251 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-t7hh5" event={"ID":"f66d95ec-ff37-4cc2-a076-e53cc7713582","Type":"ContainerDied","Data":"c834681d05c14e7ff690cbb1acfa640e617aaf24a5dbda9da270fdba7ac94fdb"}
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.859686 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.859950 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://e70715356317daab9e16b76bf1e62776721c504096ef71db981c1eb98acb8ef8" gracePeriod=30
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.863533 5039 generic.go:334] "Generic (PLEG): container finished" podID="1c7913a5-4818-4edd-a390-61d79c64a30b" containerID="10852e51d9199bf290d28ef284e425f741ad8888a4c93170c5de8cb6b7587e31" exitCode=2
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.863631 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1c7913a5-4818-4edd-a390-61d79c64a30b","Type":"ContainerDied","Data":"10852e51d9199bf290d28ef284e425f741ad8888a4c93170c5de8cb6b7587e31"}
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.865965 5039 generic.go:334] "Generic (PLEG): container finished" podID="03ea6fff-3bc2-4830-b1f5-53d20cd2a801" containerID="3e63cef290b9c322a18fac31a7871a3b878e755d7e458a6ae9c29147b528c3fc" exitCode=143
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.866022 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"03ea6fff-3bc2-4830-b1f5-53d20cd2a801","Type":"ContainerDied","Data":"3e63cef290b9c322a18fac31a7871a3b878e755d7e458a6ae9c29147b528c3fc"}
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.869708 5039 generic.go:334] "Generic (PLEG): container finished" podID="268ed38d-d02d-4539-be5c-f461fde5d02b" containerID="116d072bb48e4b065b5de330f7fd6107bd5b783a4981e9f40677abb9caf3a0b9" exitCode=137
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.873476 5039 generic.go:334] "Generic (PLEG): container finished" podID="498ddd50-96b8-491c-92e9-8c98bc7fa123" containerID="704e147f78336eb631ac3800ed217ffcbe20db123d823ef0e1719ac12126d745" exitCode=143
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.873509 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-68f47564b6-tbx7d" event={"ID":"498ddd50-96b8-491c-92e9-8c98bc7fa123","Type":"ContainerDied","Data":"704e147f78336eb631ac3800ed217ffcbe20db123d823ef0e1719ac12126d745"}
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.874690 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-d68bccdc4-krd48"]
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.874991 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-d68bccdc4-krd48" podUID="2125aae4-cb1a-4329-ba0a-68cc3661427b" containerName="barbican-api-log" containerID="cri-o://20774dc7b8e4c0dc174586131c171b6d7af1959fda8becdffd9b6c9f4c9f2acb" gracePeriod=30
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.875689 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-d68bccdc4-krd48" podUID="2125aae4-cb1a-4329-ba0a-68cc3661427b" containerName="barbican-api" containerID="cri-o://e15c323864de83a51ac376f7f5979fb834dbfcc5fa3c9479affae05a54142583" gracePeriod=30
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.883601 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-pptnb"]
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.888247 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_a4f02ddf-62c8-49b8-8e86-d6b87c61172b/ovsdbserver-sb/0.log"
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.888289 5039 generic.go:334] "Generic (PLEG): container finished" podID="a4f02ddf-62c8-49b8-8e86-d6b87c61172b" containerID="cdcdb331d3c60bbb406b32aef476ab5726a7b53b8ae0c9a927450b27c6dd5c71" exitCode=2
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.888302 5039 generic.go:334] "Generic (PLEG): container finished" podID="a4f02ddf-62c8-49b8-8e86-d6b87c61172b" containerID="4a75aaf8ae30feba231405992fcbc38c506ed8999f2c135d64d71b1e43a1b981" exitCode=143
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.888359 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a4f02ddf-62c8-49b8-8e86-d6b87c61172b","Type":"ContainerDied","Data":"cdcdb331d3c60bbb406b32aef476ab5726a7b53b8ae0c9a927450b27c6dd5c71"}
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.888391 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a4f02ddf-62c8-49b8-8e86-d6b87c61172b","Type":"ContainerDied","Data":"4a75aaf8ae30feba231405992fcbc38c506ed8999f2c135d64d71b1e43a1b981"}
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.891743 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-z6nkm" podUID="953eeac5-b943-4036-be33-58eb347c04ef" containerName="ovs-vswitchd" containerID="cri-o://664d5ee50096a705bfe00ba284ecf23de58063a3e74a3c5f1b12d176c74177c9" gracePeriod=29
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.907452 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7dc966f764-886wt"]
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.910254 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-t7hh5_f66d95ec-ff37-4cc2-a076-e53cc7713582/openstack-network-exporter/0.log"
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.910324 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-t7hh5"
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.914610 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-6646-account-create-update-rjc76"]
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.927863 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.936653 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f66d95ec-ff37-4cc2-a076-e53cc7713582-config\") pod \"f66d95ec-ff37-4cc2-a076-e53cc7713582\" (UID: \"f66d95ec-ff37-4cc2-a076-e53cc7713582\") "
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.936698 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f66d95ec-ff37-4cc2-a076-e53cc7713582-ovn-rundir\") pod \"f66d95ec-ff37-4cc2-a076-e53cc7713582\" (UID: \"f66d95ec-ff37-4cc2-a076-e53cc7713582\") "
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.937270 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f66d95ec-ff37-4cc2-a076-e53cc7713582-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "f66d95ec-ff37-4cc2-a076-e53cc7713582" (UID: "f66d95ec-ff37-4cc2-a076-e53cc7713582"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.938111 5039 generic.go:334] "Generic (PLEG): container finished" podID="8ada089a-5096-4658-829e-46ed96867c7e" containerID="15cad4c835a7ea15a16cc7a14b50750d2833b7e260d8bb3166f6679d6cd024bc" exitCode=0
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.938188 5039 generic.go:334] "Generic (PLEG): container finished" podID="8ada089a-5096-4658-829e-46ed96867c7e" containerID="5ba1fa28c490036b77df42fd557a82a136b5d4470aacbcf035106a2aa9a5c19c" exitCode=0
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.938252 5039 generic.go:334] "Generic (PLEG): container finished" podID="8ada089a-5096-4658-829e-46ed96867c7e" containerID="ddfd428ecd993351c674d784439b36da1f4749c251689b43fddc8f90227f4508" exitCode=0
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.938301 5039 generic.go:334] "Generic (PLEG): container finished" podID="8ada089a-5096-4658-829e-46ed96867c7e" containerID="5205854bc586c085d9a8181d38c8a593892643b626180d99562c81611b88b68b" exitCode=0
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.938350 5039 generic.go:334] "Generic (PLEG): container finished" podID="8ada089a-5096-4658-829e-46ed96867c7e" containerID="eb5df1653f803341d6a4973ea612f45188b265af8c41b3c90d6691d5c611b9c2" exitCode=0
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.938395 5039 generic.go:334] "Generic (PLEG): container finished" podID="8ada089a-5096-4658-829e-46ed96867c7e" containerID="a752a70bb4f53e459731183ec59874ee325b0e767cc385834cb7df89532a1aec" exitCode=0
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.938440 5039 generic.go:334] "Generic (PLEG): container finished" podID="8ada089a-5096-4658-829e-46ed96867c7e" containerID="b0ee602fd935197661ffbde70a60dd36d9924c2f4817add1f894ac9adac66322" exitCode=0
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.938489 5039 generic.go:334] "Generic (PLEG): container finished" podID="8ada089a-5096-4658-829e-46ed96867c7e" containerID="4bf0094e462d7cc7679bbfe7a7bc2c0d4592c1307b816d192d6fc42e092c3617" exitCode=0
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.938539 5039 generic.go:334] "Generic (PLEG): container finished" podID="8ada089a-5096-4658-829e-46ed96867c7e" containerID="fd878f745d4316bd7f334db23529af3d98a35240ec3295969bd07b87d5376409" exitCode=0
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.938586 5039 generic.go:334] "Generic (PLEG): container finished" podID="8ada089a-5096-4658-829e-46ed96867c7e" containerID="488e3367a6a8f8bce689530e4343a6e494edfb4a9ae6c3c4d1a46d9f1bf6df2d" exitCode=0
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.938632 5039 generic.go:334] "Generic (PLEG): container finished" podID="8ada089a-5096-4658-829e-46ed96867c7e" containerID="ba202a942609a01368fff886e42c540f33bb7959b6b854acea880eea7d0585f3" exitCode=0
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.938873 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-r4p7m" podUID="aaf62f63-8fea-4671-8a36-21ca1d4fbc37" containerName="registry-server" containerID="cri-o://46f5e847cf0740cecaf800a2f64157f64b7846af9869032f1313947adca280c5" gracePeriod=2
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.938983 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.939400 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ada089a-5096-4658-829e-46ed96867c7e","Type":"ContainerDied","Data":"15cad4c835a7ea15a16cc7a14b50750d2833b7e260d8bb3166f6679d6cd024bc"}
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.939494 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ada089a-5096-4658-829e-46ed96867c7e","Type":"ContainerDied","Data":"5ba1fa28c490036b77df42fd557a82a136b5d4470aacbcf035106a2aa9a5c19c"}
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.939550 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ada089a-5096-4658-829e-46ed96867c7e","Type":"ContainerDied","Data":"ddfd428ecd993351c674d784439b36da1f4749c251689b43fddc8f90227f4508"}
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.939602 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ada089a-5096-4658-829e-46ed96867c7e","Type":"ContainerDied","Data":"5205854bc586c085d9a8181d38c8a593892643b626180d99562c81611b88b68b"}
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.939653 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ada089a-5096-4658-829e-46ed96867c7e","Type":"ContainerDied","Data":"eb5df1653f803341d6a4973ea612f45188b265af8c41b3c90d6691d5c611b9c2"}
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.939704 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ada089a-5096-4658-829e-46ed96867c7e","Type":"ContainerDied","Data":"a752a70bb4f53e459731183ec59874ee325b0e767cc385834cb7df89532a1aec"}
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.939771 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ada089a-5096-4658-829e-46ed96867c7e","Type":"ContainerDied","Data":"b0ee602fd935197661ffbde70a60dd36d9924c2f4817add1f894ac9adac66322"}
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.939833 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ada089a-5096-4658-829e-46ed96867c7e","Type":"ContainerDied","Data":"4bf0094e462d7cc7679bbfe7a7bc2c0d4592c1307b816d192d6fc42e092c3617"}
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.939886 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ada089a-5096-4658-829e-46ed96867c7e","Type":"ContainerDied","Data":"fd878f745d4316bd7f334db23529af3d98a35240ec3295969bd07b87d5376409"}
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.940254 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ada089a-5096-4658-829e-46ed96867c7e","Type":"ContainerDied","Data":"488e3367a6a8f8bce689530e4343a6e494edfb4a9ae6c3c4d1a46d9f1bf6df2d"}
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.940312 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ada089a-5096-4658-829e-46ed96867c7e","Type":"ContainerDied","Data":"ba202a942609a01368fff886e42c540f33bb7959b6b854acea880eea7d0585f3"}
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.938494 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f66d95ec-ff37-4cc2-a076-e53cc7713582-config" (OuterVolumeSpecName: "config") pod "f66d95ec-ff37-4cc2-a076-e53cc7713582" (UID: "f66d95ec-ff37-4cc2-a076-e53cc7713582"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.947671 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cj2b\" (UniqueName: \"kubernetes.io/projected/f66d95ec-ff37-4cc2-a076-e53cc7713582-kube-api-access-5cj2b\") pod \"f66d95ec-ff37-4cc2-a076-e53cc7713582\" (UID: \"f66d95ec-ff37-4cc2-a076-e53cc7713582\") "
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.947764 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f66d95ec-ff37-4cc2-a076-e53cc7713582-ovs-rundir\") pod \"f66d95ec-ff37-4cc2-a076-e53cc7713582\" (UID: \"f66d95ec-ff37-4cc2-a076-e53cc7713582\") "
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.947824 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f66d95ec-ff37-4cc2-a076-e53cc7713582-combined-ca-bundle\") pod \"f66d95ec-ff37-4cc2-a076-e53cc7713582\" (UID: \"f66d95ec-ff37-4cc2-a076-e53cc7713582\") "
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.947882 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f66d95ec-ff37-4cc2-a076-e53cc7713582-metrics-certs-tls-certs\") pod \"f66d95ec-ff37-4cc2-a076-e53cc7713582\" (UID: \"f66d95ec-ff37-4cc2-a076-e53cc7713582\") "
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.949081 5039 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f66d95ec-ff37-4cc2-a076-e53cc7713582-ovn-rundir\") on node \"crc\" DevicePath \"\""
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.949096 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f66d95ec-ff37-4cc2-a076-e53cc7713582-config\") on node \"crc\" DevicePath \"\""
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.951401 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f66d95ec-ff37-4cc2-a076-e53cc7713582-ovs-rundir" (OuterVolumeSpecName: "ovs-rundir") pod "f66d95ec-ff37-4cc2-a076-e53cc7713582" (UID: "f66d95ec-ff37-4cc2-a076-e53cc7713582"). InnerVolumeSpecName "ovs-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.976175 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f66d95ec-ff37-4cc2-a076-e53cc7713582-kube-api-access-5cj2b" (OuterVolumeSpecName: "kube-api-access-5cj2b") pod "f66d95ec-ff37-4cc2-a076-e53cc7713582" (UID: "f66d95ec-ff37-4cc2-a076-e53cc7713582"). InnerVolumeSpecName "kube-api-access-5cj2b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.029334 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f66d95ec-ff37-4cc2-a076-e53cc7713582-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f66d95ec-ff37-4cc2-a076-e53cc7713582" (UID: "f66d95ec-ff37-4cc2-a076-e53cc7713582"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.043908 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="106954f5-3ea7-4564-8479-407ef02320b7" containerName="rabbitmq" containerID="cri-o://3c664e34c87d051b563e4d60927ac501a68af1e68c68fe93a675ec95cbd4729a" gracePeriod=604800
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.051590 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/268ed38d-d02d-4539-be5c-f461fde5d02b-combined-ca-bundle\") pod \"268ed38d-d02d-4539-be5c-f461fde5d02b\" (UID: \"268ed38d-d02d-4539-be5c-f461fde5d02b\") "
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.051697 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/268ed38d-d02d-4539-be5c-f461fde5d02b-openstack-config-secret\") pod \"268ed38d-d02d-4539-be5c-f461fde5d02b\" (UID: \"268ed38d-d02d-4539-be5c-f461fde5d02b\") "
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.051765 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4rnw\" (UniqueName: \"kubernetes.io/projected/268ed38d-d02d-4539-be5c-f461fde5d02b-kube-api-access-h4rnw\") pod \"268ed38d-d02d-4539-be5c-f461fde5d02b\" (UID: \"268ed38d-d02d-4539-be5c-f461fde5d02b\") "
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.051800 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/268ed38d-d02d-4539-be5c-f461fde5d02b-openstack-config\") pod \"268ed38d-d02d-4539-be5c-f461fde5d02b\" (UID: \"268ed38d-d02d-4539-be5c-f461fde5d02b\") "
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.052232 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cj2b\" (UniqueName: \"kubernetes.io/projected/f66d95ec-ff37-4cc2-a076-e53cc7713582-kube-api-access-5cj2b\") on node \"crc\" DevicePath \"\""
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.052243 5039 reconciler_common.go:293] "Volume detached for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f66d95ec-ff37-4cc2-a076-e53cc7713582-ovs-rundir\") on node \"crc\" DevicePath \"\""
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.052253 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f66d95ec-ff37-4cc2-a076-e53cc7713582-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.067200 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/268ed38d-d02d-4539-be5c-f461fde5d02b-kube-api-access-h4rnw" (OuterVolumeSpecName: "kube-api-access-h4rnw") pod "268ed38d-d02d-4539-be5c-f461fde5d02b" (UID: "268ed38d-d02d-4539-be5c-f461fde5d02b"). InnerVolumeSpecName "kube-api-access-h4rnw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.090245 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/268ed38d-d02d-4539-be5c-f461fde5d02b-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "268ed38d-d02d-4539-be5c-f461fde5d02b" (UID: "268ed38d-d02d-4539-be5c-f461fde5d02b"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.100983 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-75df786d6f-7k65j" podUID="bc1469b7-cba0-47a5-b2cb-02e374f749da" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.163:9696/\": dial tcp 10.217.0.163:9696: connect: connection refused"
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.109417 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_bc1a05aa-7803-43a1-9525-fd135af4323a/ovsdbserver-nb/0.log"
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.109522 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.152289 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c26816b-0634-4cb2-9356-3affc33c0698" path="/var/lib/kubelet/pods/1c26816b-0634-4cb2-9356-3affc33c0698/volumes"
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.160316 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20bee34b-7616-41d8-8761-12c09c8523e3" path="/var/lib/kubelet/pods/20bee34b-7616-41d8-8761-12c09c8523e3/volumes"
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.160867 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21db3ccc-3757-44b9-9f63-835f790c4321" path="/var/lib/kubelet/pods/21db3ccc-3757-44b9-9f63-835f790c4321/volumes"
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.161481 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="326188c4-7523-49b7-9790-063f3f18988d" path="/var/lib/kubelet/pods/326188c4-7523-49b7-9790-063f3f18988d/volumes"
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.161855 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc1a05aa-7803-43a1-9525-fd135af4323a-combined-ca-bundle\") pod \"bc1a05aa-7803-43a1-9525-fd135af4323a\" (UID: \"bc1a05aa-7803-43a1-9525-fd135af4323a\") "
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.161895 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bc1a05aa-7803-43a1-9525-fd135af4323a-scripts\") pod \"bc1a05aa-7803-43a1-9525-fd135af4323a\" (UID: \"bc1a05aa-7803-43a1-9525-fd135af4323a\") "
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.161934 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc1a05aa-7803-43a1-9525-fd135af4323a-config\") pod \"bc1a05aa-7803-43a1-9525-fd135af4323a\" (UID: \"bc1a05aa-7803-43a1-9525-fd135af4323a\") "
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.162300 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4rnw\" (UniqueName: \"kubernetes.io/projected/268ed38d-d02d-4539-be5c-f461fde5d02b-kube-api-access-h4rnw\") on node \"crc\" DevicePath \"\""
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.162311 5039 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/268ed38d-d02d-4539-be5c-f461fde5d02b-openstack-config\") on node \"crc\" DevicePath \"\""
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.164490 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc1a05aa-7803-43a1-9525-fd135af4323a-config" (OuterVolumeSpecName: "config") pod "bc1a05aa-7803-43a1-9525-fd135af4323a" (UID: "bc1a05aa-7803-43a1-9525-fd135af4323a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.165112 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc1a05aa-7803-43a1-9525-fd135af4323a-scripts" (OuterVolumeSpecName: "scripts") pod "bc1a05aa-7803-43a1-9525-fd135af4323a" (UID: "bc1a05aa-7803-43a1-9525-fd135af4323a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.172606 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33369def-50c6-4216-953b-e1848ff3a90a" path="/var/lib/kubelet/pods/33369def-50c6-4216-953b-e1848ff3a90a/volumes"
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.173144 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34b4ac27-da03-43e8-874d-7feb1000f162" path="/var/lib/kubelet/pods/34b4ac27-da03-43e8-874d-7feb1000f162/volumes"
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.173654 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb443d1-8938-47af-ab3b-1912d9e72f4f" path="/var/lib/kubelet/pods/3cb443d1-8938-47af-ab3b-1912d9e72f4f/volumes"
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.195358 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/268ed38d-d02d-4539-be5c-f461fde5d02b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "268ed38d-d02d-4539-be5c-f461fde5d02b" (UID: "268ed38d-d02d-4539-be5c-f461fde5d02b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.196310 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4268e11c-c142-453b-a3c1-15696f9b21e5" path="/var/lib/kubelet/pods/4268e11c-c142-453b-a3c1-15696f9b21e5/volumes"
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.196852 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45c105ac-a6f3-40f4-8543-3d8fe84f6132" path="/var/lib/kubelet/pods/45c105ac-a6f3-40f4-8543-3d8fe84f6132/volumes"
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.211528 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bba3dea-64f4-479f-b7f1-99c718d7b8af" path="/var/lib/kubelet/pods/5bba3dea-64f4-479f-b7f1-99c718d7b8af/volumes"
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.220624 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/268ed38d-d02d-4539-be5c-f461fde5d02b-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "268ed38d-d02d-4539-be5c-f461fde5d02b" (UID: "268ed38d-d02d-4539-be5c-f461fde5d02b"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:28:14 crc kubenswrapper[5039]: E0130 13:28:14.221087 5039 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=<
Jan 30 13:28:14 crc kubenswrapper[5039]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh
Jan 30 13:28:14 crc kubenswrapper[5039]: + source /usr/local/bin/container-scripts/functions
Jan 30 13:28:14 crc kubenswrapper[5039]: ++ OVNBridge=br-int
Jan 30 13:28:14 crc kubenswrapper[5039]: ++ OVNRemote=tcp:localhost:6642
Jan 30 13:28:14 crc kubenswrapper[5039]: ++ OVNEncapType=geneve
Jan 30 13:28:14 crc kubenswrapper[5039]: ++ OVNAvailabilityZones=
Jan 30 13:28:14 crc kubenswrapper[5039]: ++ EnableChassisAsGateway=true
Jan 30 13:28:14 crc kubenswrapper[5039]: ++ PhysicalNetworks=
Jan 30 13:28:14 crc kubenswrapper[5039]: ++ OVNHostName=
Jan 30 13:28:14 crc kubenswrapper[5039]: ++ DB_FILE=/etc/openvswitch/conf.db
Jan 30 13:28:14 crc kubenswrapper[5039]: ++ ovs_dir=/var/lib/openvswitch
Jan 30 13:28:14 crc kubenswrapper[5039]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script
Jan 30 13:28:14 crc kubenswrapper[5039]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows
Jan 30 13:28:14 crc kubenswrapper[5039]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server
Jan 30 13:28:14 crc kubenswrapper[5039]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']'
Jan 30 13:28:14 crc kubenswrapper[5039]: + sleep 0.5
Jan 30 13:28:14 crc kubenswrapper[5039]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']'
Jan 30 13:28:14 crc kubenswrapper[5039]: + sleep 0.5
Jan 30 13:28:14 crc kubenswrapper[5039]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']'
Jan 30 13:28:14 crc kubenswrapper[5039]: + cleanup_ovsdb_server_semaphore
Jan 30 13:28:14 crc kubenswrapper[5039]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server
Jan 30 13:28:14 crc kubenswrapper[5039]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd
Jan 30 13:28:14 crc kubenswrapper[5039]: > execCommand=["/usr/local/bin/container-scripts/stop-ovsdb-server.sh"] containerName="ovsdb-server" pod="openstack/ovn-controller-ovs-z6nkm" message=<
Jan 30 13:28:14 crc kubenswrapper[5039]: Exiting ovsdb-server (5) [ OK ]
Jan 30 13:28:14 crc kubenswrapper[5039]: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh
Jan 30 13:28:14 crc kubenswrapper[5039]: + source /usr/local/bin/container-scripts/functions
Jan 30 13:28:14 crc kubenswrapper[5039]: ++ OVNBridge=br-int
Jan 30 13:28:14 crc kubenswrapper[5039]: ++ OVNRemote=tcp:localhost:6642
Jan 30 13:28:14 crc kubenswrapper[5039]: ++ OVNEncapType=geneve
Jan 30 13:28:14 crc kubenswrapper[5039]: ++ OVNAvailabilityZones=
Jan 30 13:28:14 crc kubenswrapper[5039]: ++ EnableChassisAsGateway=true
Jan 30 13:28:14 crc kubenswrapper[5039]: ++ PhysicalNetworks=
Jan 30 13:28:14 crc kubenswrapper[5039]: ++ OVNHostName=
Jan 30 13:28:14 crc kubenswrapper[5039]: ++ DB_FILE=/etc/openvswitch/conf.db
Jan 30 13:28:14 crc kubenswrapper[5039]: ++ ovs_dir=/var/lib/openvswitch
Jan 30 13:28:14 crc kubenswrapper[5039]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script
Jan 30 13:28:14 crc kubenswrapper[5039]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows
Jan 30 13:28:14 crc kubenswrapper[5039]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server
Jan 30 13:28:14 crc kubenswrapper[5039]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']'
Jan 30 13:28:14 crc kubenswrapper[5039]: + sleep 0.5
Jan 30 13:28:14 crc kubenswrapper[5039]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']'
Jan 30 13:28:14 crc kubenswrapper[5039]: + sleep 0.5
Jan 30 13:28:14 crc kubenswrapper[5039]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']'
Jan 30 13:28:14 crc kubenswrapper[5039]: + cleanup_ovsdb_server_semaphore
Jan 30 13:28:14 crc kubenswrapper[5039]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server
Jan 30 13:28:14 crc kubenswrapper[5039]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd
Jan 30 13:28:14 crc kubenswrapper[5039]: >
Jan 30 13:28:14 crc kubenswrapper[5039]: E0130 13:28:14.221120 5039 kuberuntime_container.go:691] "PreStop hook failed" err=<
Jan 30 13:28:14 crc kubenswrapper[5039]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh
Jan 30 13:28:14 crc kubenswrapper[5039]: + source /usr/local/bin/container-scripts/functions
Jan 30 13:28:14 crc kubenswrapper[5039]: ++ OVNBridge=br-int
Jan 30 13:28:14 crc kubenswrapper[5039]: ++ OVNRemote=tcp:localhost:6642
Jan 30 13:28:14 crc kubenswrapper[5039]: ++ OVNEncapType=geneve
Jan 30 13:28:14 crc kubenswrapper[5039]: ++ OVNAvailabilityZones=
Jan 30 13:28:14 crc kubenswrapper[5039]: ++ EnableChassisAsGateway=true
Jan 30 13:28:14 crc kubenswrapper[5039]: ++ PhysicalNetworks=
Jan 30 13:28:14 crc kubenswrapper[5039]: ++ OVNHostName=
Jan 30 13:28:14 crc kubenswrapper[5039]: ++ DB_FILE=/etc/openvswitch/conf.db
Jan 30 13:28:14 crc kubenswrapper[5039]: ++ ovs_dir=/var/lib/openvswitch
Jan 30 13:28:14 crc kubenswrapper[5039]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script
Jan 30 13:28:14 crc kubenswrapper[5039]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows
Jan 30 13:28:14 crc kubenswrapper[5039]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server
Jan 30 13:28:14 crc kubenswrapper[5039]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']'
Jan 30 13:28:14 crc kubenswrapper[5039]: + sleep 0.5
Jan 30 13:28:14 crc kubenswrapper[5039]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']'
Jan 30 13:28:14 crc kubenswrapper[5039]: + sleep 0.5
Jan 30 13:28:14 crc kubenswrapper[5039]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']'
Jan 30 13:28:14 crc kubenswrapper[5039]: + cleanup_ovsdb_server_semaphore
Jan 30 13:28:14 crc kubenswrapper[5039]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server
Jan 30 13:28:14 crc kubenswrapper[5039]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd
Jan 30 13:28:14 crc kubenswrapper[5039]: > pod="openstack/ovn-controller-ovs-z6nkm" podUID="953eeac5-b943-4036-be33-58eb347c04ef" containerName="ovsdb-server" containerID="cri-o://1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8"
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.221151 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-z6nkm" podUID="953eeac5-b943-4036-be33-58eb347c04ef" containerName="ovsdb-server" containerID="cri-o://1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8" gracePeriod=29
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.221482 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_a4f02ddf-62c8-49b8-8e86-d6b87c61172b/ovsdbserver-sb/0.log"
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.221538 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.226592 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc1a05aa-7803-43a1-9525-fd135af4323a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bc1a05aa-7803-43a1-9525-fd135af4323a" (UID: "bc1a05aa-7803-43a1-9525-fd135af4323a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.229442 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60e67b31-eb88-4ca5-a4b8-960fe900d68a" path="/var/lib/kubelet/pods/60e67b31-eb88-4ca5-a4b8-960fe900d68a/volumes"
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.229954 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f66d95ec-ff37-4cc2-a076-e53cc7713582-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "f66d95ec-ff37-4cc2-a076-e53cc7713582" (UID: "f66d95ec-ff37-4cc2-a076-e53cc7713582"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.230261 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68dc52c3-d455-4a3d-b9fd-8aae22e9e7de" path="/var/lib/kubelet/pods/68dc52c3-d455-4a3d-b9fd-8aae22e9e7de/volumes"
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.239679 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a51040a-32e7-43d3-8fd2-8ce22ac5dde6" path="/var/lib/kubelet/pods/7a51040a-32e7-43d3-8fd2-8ce22ac5dde6/volumes"
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.240780 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bd23757-95cb-4596-a9ff-f448576ffd8e" path="/var/lib/kubelet/pods/7bd23757-95cb-4596-a9ff-f448576ffd8e/volumes"
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.241326 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="916b8cef-080b-4ec9-98c6-ce13bfdcdd20" path="/var/lib/kubelet/pods/916b8cef-080b-4ec9-98c6-ce13bfdcdd20/volumes"
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.247576 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91bf7602-3edd-424d-a6a0-a5a1097fd3ba" path="/var/lib/kubelet/pods/91bf7602-3edd-424d-a6a0-a5a1097fd3ba/volumes"
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.248373 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2ed7c55-cfa8-44fe-94d1-3bc6232c6686" path="/var/lib/kubelet/pods/b2ed7c55-cfa8-44fe-94d1-3bc6232c6686/volumes"
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.249102 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c63ad167-cbf8-4da9-83c2-0c66566d7105" path="/var/lib/kubelet/pods/c63ad167-cbf8-4da9-83c2-0c66566d7105/volumes"
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.250312 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7db6f42-583a-450d-b142-ec7c5ae4eee0" path="/var/lib/kubelet/pods/c7db6f42-583a-450d-b142-ec7c5ae4eee0/volumes"
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.251468 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cde91080-bc38-44b5-986f-6712c73de0ec" path="/var/lib/kubelet/pods/cde91080-bc38-44b5-986f-6712c73de0ec/volumes"
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.251982 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f73f9b07-439c-418f-a04a-bc0aae17e21a" path="/var/lib/kubelet/pods/f73f9b07-439c-418f-a04a-bc0aae17e21a/volumes"
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.252840 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zctpf"]
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.264351 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc1a05aa-7803-43a1-9525-fd135af4323a-metrics-certs-tls-certs\") pod \"bc1a05aa-7803-43a1-9525-fd135af4323a\" (UID: \"bc1a05aa-7803-43a1-9525-fd135af4323a\") "
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.264391 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"bc1a05aa-7803-43a1-9525-fd135af4323a\" (UID: \"bc1a05aa-7803-43a1-9525-fd135af4323a\") "
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.264425 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc1a05aa-7803-43a1-9525-fd135af4323a-ovsdbserver-nb-tls-certs\") pod \"bc1a05aa-7803-43a1-9525-fd135af4323a\" (UID: \"bc1a05aa-7803-43a1-9525-fd135af4323a\") "
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.264448 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kb5mr\" (UniqueName: \"kubernetes.io/projected/bc1a05aa-7803-43a1-9525-fd135af4323a-kube-api-access-kb5mr\") pod \"bc1a05aa-7803-43a1-9525-fd135af4323a\" (UID: \"bc1a05aa-7803-43a1-9525-fd135af4323a\") "
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.264480 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bc1a05aa-7803-43a1-9525-fd135af4323a-ovsdb-rundir\") pod \"bc1a05aa-7803-43a1-9525-fd135af4323a\" (UID: \"bc1a05aa-7803-43a1-9525-fd135af4323a\") "
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.264899 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc1a05aa-7803-43a1-9525-fd135af4323a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.264910 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bc1a05aa-7803-43a1-9525-fd135af4323a-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.264918 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc1a05aa-7803-43a1-9525-fd135af4323a-config\") on node \"crc\" DevicePath \"\""
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.264927 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/268ed38d-d02d-4539-be5c-f461fde5d02b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.264935 5039 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f66d95ec-ff37-4cc2-a076-e53cc7713582-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.264946 5039 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/268ed38d-d02d-4539-be5c-f461fde5d02b-openstack-config-secret\") on node \"crc\" DevicePath \"\""
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.265237 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc1a05aa-7803-43a1-9525-fd135af4323a-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "bc1a05aa-7803-43a1-9525-fd135af4323a" (UID: "bc1a05aa-7803-43a1-9525-fd135af4323a"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.268083 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.268339 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="798d080c-2565-4410-9cda-220d1154b8de" containerName="nova-cell1-conductor-conductor" containerID="cri-o://c83d874abcdd3095947980187589ffbe8240a795dbfa1c7950d492e49c52b14e" gracePeriod=30
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.271748 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "bc1a05aa-7803-43a1-9525-fd135af4323a" (UID: "bc1a05aa-7803-43a1-9525-fd135af4323a"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.278227 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc1a05aa-7803-43a1-9525-fd135af4323a-kube-api-access-kb5mr" (OuterVolumeSpecName: "kube-api-access-kb5mr") pod "bc1a05aa-7803-43a1-9525-fd135af4323a" (UID: "bc1a05aa-7803-43a1-9525-fd135af4323a"). InnerVolumeSpecName "kube-api-access-kb5mr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.281518 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zctpf"]
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.291143 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-fz5fp"]
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.305307 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-fz5fp"]
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.316046 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.316291 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="4f7023ce-3b22-4301-8535-b51dae5ffc85" containerName="nova-cell0-conductor-conductor" containerID="cri-o://15bfff3ce4374ea438fd8412513de2bef71681376d184c1777dc610cbcab758f" gracePeriod=30
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.323269 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.323442 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="266dbee0-3c74-4820-8165-1955c6ca832a" containerName="nova-scheduler-scheduler" containerID="cri-o://edeb03fc7b1f7c78ab64ce18b567934eb7d265834e26ab22d317bef24cbcb1e7" gracePeriod=30
Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.345992 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5666-account-create-update-zr44j"]
Jan 30 13:28:14 crc kubenswrapper[5039]: E0130 13:28:14.353752 5039 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Jan 30 13:28:14 crc kubenswrapper[5039]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash
Jan 30 13:28:14 crc
kubenswrapper[5039]: Jan 30 13:28:14 crc kubenswrapper[5039]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 30 13:28:14 crc kubenswrapper[5039]: Jan 30 13:28:14 crc kubenswrapper[5039]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 30 13:28:14 crc kubenswrapper[5039]: Jan 30 13:28:14 crc kubenswrapper[5039]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 30 13:28:14 crc kubenswrapper[5039]: Jan 30 13:28:14 crc kubenswrapper[5039]: if [ -n "placement" ]; then Jan 30 13:28:14 crc kubenswrapper[5039]: GRANT_DATABASE="placement" Jan 30 13:28:14 crc kubenswrapper[5039]: else Jan 30 13:28:14 crc kubenswrapper[5039]: GRANT_DATABASE="*" Jan 30 13:28:14 crc kubenswrapper[5039]: fi Jan 30 13:28:14 crc kubenswrapper[5039]: Jan 30 13:28:14 crc kubenswrapper[5039]: # going for maximum compatibility here: Jan 30 13:28:14 crc kubenswrapper[5039]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 30 13:28:14 crc kubenswrapper[5039]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 30 13:28:14 crc kubenswrapper[5039]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 30 13:28:14 crc kubenswrapper[5039]: # support updates Jan 30 13:28:14 crc kubenswrapper[5039]: Jan 30 13:28:14 crc kubenswrapper[5039]: $MYSQL_CMD < logger="UnhandledError" Jan 30 13:28:14 crc kubenswrapper[5039]: E0130 13:28:14.356704 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"placement-db-secret\\\" not found\"" pod="openstack/placement-5666-account-create-update-zr44j" podUID="9c8f6794-a2c1-4d54-a048-71db0a14213e" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.366096 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6g78\" (UniqueName: \"kubernetes.io/projected/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-kube-api-access-v6g78\") pod \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\" (UID: \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\") " Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.366514 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-ovsdb-rundir\") pod \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\" (UID: \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\") " Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.366588 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-metrics-certs-tls-certs\") pod \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\" (UID: \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\") " Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.367191 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-ovsdbserver-sb-tls-certs\") pod \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\" (UID: \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\") " Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.367272 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "a4f02ddf-62c8-49b8-8e86-d6b87c61172b" (UID: 
"a4f02ddf-62c8-49b8-8e86-d6b87c61172b"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.367422 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-config\") pod \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\" (UID: \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\") " Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.367478 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-combined-ca-bundle\") pod \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\" (UID: \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\") " Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.367531 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-scripts\") pod \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\" (UID: \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\") " Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.367581 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\" (UID: \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\") " Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.367966 5039 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.367983 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.367998 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kb5mr\" (UniqueName: \"kubernetes.io/projected/bc1a05aa-7803-43a1-9525-fd135af4323a-kube-api-access-kb5mr\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.368017 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bc1a05aa-7803-43a1-9525-fd135af4323a-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.369342 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-config" (OuterVolumeSpecName: "config") pod "a4f02ddf-62c8-49b8-8e86-d6b87c61172b" (UID: "a4f02ddf-62c8-49b8-8e86-d6b87c61172b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.369358 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-scripts" (OuterVolumeSpecName: "scripts") pod "a4f02ddf-62c8-49b8-8e86-d6b87c61172b" (UID: "a4f02ddf-62c8-49b8-8e86-d6b87c61172b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.380673 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "a4f02ddf-62c8-49b8-8e86-d6b87c61172b" (UID: "a4f02ddf-62c8-49b8-8e86-d6b87c61172b"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.393408 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-kube-api-access-v6g78" (OuterVolumeSpecName: "kube-api-access-v6g78") pod "a4f02ddf-62c8-49b8-8e86-d6b87c61172b" (UID: "a4f02ddf-62c8-49b8-8e86-d6b87c61172b"). InnerVolumeSpecName "kube-api-access-v6g78". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.394816 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc1a05aa-7803-43a1-9525-fd135af4323a-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "bc1a05aa-7803-43a1-9525-fd135af4323a" (UID: "bc1a05aa-7803-43a1-9525-fd135af4323a"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.395956 5039 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.424211 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a4f02ddf-62c8-49b8-8e86-d6b87c61172b" (UID: "a4f02ddf-62c8-49b8-8e86-d6b87c61172b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.471302 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.471334 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.471364 5039 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.471374 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6g78\" (UniqueName: \"kubernetes.io/projected/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-kube-api-access-v6g78\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.471383 5039 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.471392 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc1a05aa-7803-43a1-9525-fd135af4323a-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.471400 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.474302 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc1a05aa-7803-43a1-9525-fd135af4323a-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "bc1a05aa-7803-43a1-9525-fd135af4323a" (UID: "bc1a05aa-7803-43a1-9525-fd135af4323a"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.489470 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "a4f02ddf-62c8-49b8-8e86-d6b87c61172b" (UID: "a4f02ddf-62c8-49b8-8e86-d6b87c61172b"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.501186 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "a4f02ddf-62c8-49b8-8e86-d6b87c61172b" (UID: "a4f02ddf-62c8-49b8-8e86-d6b87c61172b"). InnerVolumeSpecName "ovsdbserver-sb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.518202 5039 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.572964 5039 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.573029 5039 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc1a05aa-7803-43a1-9525-fd135af4323a-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.573046 5039 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.573058 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.646173 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-t2n6t" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.778235 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f702130-7802-4f11-96ff-b51a7edf7740-dns-svc\") pod \"3f702130-7802-4f11-96ff-b51a7edf7740\" (UID: \"3f702130-7802-4f11-96ff-b51a7edf7740\") " Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.778344 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f702130-7802-4f11-96ff-b51a7edf7740-config\") pod \"3f702130-7802-4f11-96ff-b51a7edf7740\" (UID: \"3f702130-7802-4f11-96ff-b51a7edf7740\") " Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.778460 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3f702130-7802-4f11-96ff-b51a7edf7740-ovsdbserver-nb\") pod \"3f702130-7802-4f11-96ff-b51a7edf7740\" (UID: \"3f702130-7802-4f11-96ff-b51a7edf7740\") " Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.778532 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3f702130-7802-4f11-96ff-b51a7edf7740-dns-swift-storage-0\") pod \"3f702130-7802-4f11-96ff-b51a7edf7740\" (UID: \"3f702130-7802-4f11-96ff-b51a7edf7740\") " Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.778633 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjxv7\" (UniqueName: \"kubernetes.io/projected/3f702130-7802-4f11-96ff-b51a7edf7740-kube-api-access-cjxv7\") pod \"3f702130-7802-4f11-96ff-b51a7edf7740\" (UID: \"3f702130-7802-4f11-96ff-b51a7edf7740\") " Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.778677 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/3f702130-7802-4f11-96ff-b51a7edf7740-ovsdbserver-sb\") pod \"3f702130-7802-4f11-96ff-b51a7edf7740\" (UID: \"3f702130-7802-4f11-96ff-b51a7edf7740\") " Jan 30 13:28:14 crc kubenswrapper[5039]: E0130 13:28:14.779216 5039 projected.go:263] Couldn't get secret openstack/swift-conf: secret "swift-conf" not found Jan 30 13:28:14 crc kubenswrapper[5039]: E0130 13:28:14.779233 5039 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 13:28:14 crc kubenswrapper[5039]: E0130 13:28:14.779244 5039 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-proxy-757b86cf47-brmgg: [secret "swift-conf" not found, configmap "swift-ring-files" not found] Jan 30 13:28:14 crc kubenswrapper[5039]: E0130 13:28:14.779288 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/157fc077-2a87-4a57-b9a1-728b9acba2a1-etc-swift podName:157fc077-2a87-4a57-b9a1-728b9acba2a1 nodeName:}" failed. No retries permitted until 2026-01-30 13:28:18.779272253 +0000 UTC m=+1463.439953480 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/157fc077-2a87-4a57-b9a1-728b9acba2a1-etc-swift") pod "swift-proxy-757b86cf47-brmgg" (UID: "157fc077-2a87-4a57-b9a1-728b9acba2a1") : [secret "swift-conf" not found, configmap "swift-ring-files" not found] Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.799382 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f702130-7802-4f11-96ff-b51a7edf7740-kube-api-access-cjxv7" (OuterVolumeSpecName: "kube-api-access-cjxv7") pod "3f702130-7802-4f11-96ff-b51a7edf7740" (UID: "3f702130-7802-4f11-96ff-b51a7edf7740"). InnerVolumeSpecName "kube-api-access-cjxv7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:14 crc kubenswrapper[5039]: E0130 13:28:14.820316 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c83d874abcdd3095947980187589ffbe8240a795dbfa1c7950d492e49c52b14e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 13:28:14 crc kubenswrapper[5039]: E0130 13:28:14.855775 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c83d874abcdd3095947980187589ffbe8240a795dbfa1c7950d492e49c52b14e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 13:28:14 crc kubenswrapper[5039]: E0130 13:28:14.858717 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c83d874abcdd3095947980187589ffbe8240a795dbfa1c7950d492e49c52b14e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 13:28:14 crc kubenswrapper[5039]: E0130 13:28:14.858790 5039 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="798d080c-2565-4410-9cda-220d1154b8de" containerName="nova-cell1-conductor-conductor" Jan 30 13:28:14 crc kubenswrapper[5039]: E0130 13:28:14.892900 5039 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 30 13:28:14 crc kubenswrapper[5039]: E0130 13:28:14.893485 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/106954f5-3ea7-4564-8479-407ef02320b7-config-data podName:106954f5-3ea7-4564-8479-407ef02320b7 nodeName:}" failed. No retries permitted until 2026-01-30 13:28:16.893462912 +0000 UTC m=+1461.554144139 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/106954f5-3ea7-4564-8479-407ef02320b7-config-data") pod "rabbitmq-cell1-server-0" (UID: "106954f5-3ea7-4564-8479-407ef02320b7") : configmap "rabbitmq-cell1-config-data" not found Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.899789 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjxv7\" (UniqueName: \"kubernetes.io/projected/3f702130-7802-4f11-96ff-b51a7edf7740-kube-api-access-cjxv7\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.901534 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f702130-7802-4f11-96ff-b51a7edf7740-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3f702130-7802-4f11-96ff-b51a7edf7740" (UID: "3f702130-7802-4f11-96ff-b51a7edf7740"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.919803 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f702130-7802-4f11-96ff-b51a7edf7740-config" (OuterVolumeSpecName: "config") pod "3f702130-7802-4f11-96ff-b51a7edf7740" (UID: "3f702130-7802-4f11-96ff-b51a7edf7740"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.932296 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f702130-7802-4f11-96ff-b51a7edf7740-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3f702130-7802-4f11-96ff-b51a7edf7740" (UID: "3f702130-7802-4f11-96ff-b51a7edf7740"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.939970 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f702130-7802-4f11-96ff-b51a7edf7740-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3f702130-7802-4f11-96ff-b51a7edf7740" (UID: "3f702130-7802-4f11-96ff-b51a7edf7740"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.947730 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f702130-7802-4f11-96ff-b51a7edf7740-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3f702130-7802-4f11-96ff-b51a7edf7740" (UID: "3f702130-7802-4f11-96ff-b51a7edf7740"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.987044 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.993388 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-58897c98f4-8gk2m" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.994613 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_a4f02ddf-62c8-49b8-8e86-d6b87c61172b/ovsdbserver-sb/0.log" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.994670 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a4f02ddf-62c8-49b8-8e86-d6b87c61172b","Type":"ContainerDied","Data":"fc7f5a8ae1e785456d0c0b6001e689d47f38500483f75060d38ae3fd5f0d8225"} Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.994702 5039 scope.go:117] "RemoveContainer" containerID="cdcdb331d3c60bbb406b32aef476ab5726a7b53b8ae0c9a927450b27c6dd5c71" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.994842 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.995890 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-r4p7m" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.001310 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22-combined-ca-bundle\") pod \"a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22\" (UID: \"a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22\") " Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.001410 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22-vencrypt-tls-certs\") pod \"a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22\" (UID: \"a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22\") " Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.001537 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22-config-data\") pod \"a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22\" (UID: \"a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22\") " Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.001756 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8glz\" (UniqueName: \"kubernetes.io/projected/a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22-kube-api-access-x8glz\") pod \"a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22\" (UID: \"a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22\") " Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.001827 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-t2n6t" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.002485 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-t2n6t" event={"ID":"3f702130-7802-4f11-96ff-b51a7edf7740","Type":"ContainerDied","Data":"ca9fcabf42f85a05549ab5541a00c51961935735c743bfeed166670f01017028"} Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.001830 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22-nova-novncproxy-tls-certs\") pod \"a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22\" (UID: \"a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22\") " Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.003689 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3f702130-7802-4f11-96ff-b51a7edf7740-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.003710 5039 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3f702130-7802-4f11-96ff-b51a7edf7740-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.003724 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3f702130-7802-4f11-96ff-b51a7edf7740-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.003738 5039 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f702130-7802-4f11-96ff-b51a7edf7740-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.003749 5039 reconciler_common.go:293] "Volume 
detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f702130-7802-4f11-96ff-b51a7edf7740-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.012155 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22-kube-api-access-x8glz" (OuterVolumeSpecName: "kube-api-access-x8glz") pod "a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22" (UID: "a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22"). InnerVolumeSpecName "kube-api-access-x8glz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.019029 5039 generic.go:334] "Generic (PLEG): container finished" podID="2090e8f7-2d03-4d3e-914b-6672655d35be" containerID="d11e43f07a403d758ee01061766af01b228378dcc7b6c86d6a066828863d2c31" exitCode=143 Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.019120 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2090e8f7-2d03-4d3e-914b-6672655d35be","Type":"ContainerDied","Data":"d11e43f07a403d758ee01061766af01b228378dcc7b6c86d6a066828863d2c31"} Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.041639 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-t7hh5_f66d95ec-ff37-4cc2-a076-e53cc7713582/openstack-network-exporter/0.log" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.041767 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-t7hh5" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.042092 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-t7hh5" event={"ID":"f66d95ec-ff37-4cc2-a076-e53cc7713582","Type":"ContainerDied","Data":"009b1ddfbb9556f3ab302c967ebd3c3cbaa1879091df6e6c24612e5e9b2895ac"} Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.054579 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.063813 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22-config-data" (OuterVolumeSpecName: "config-data") pod "a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22" (UID: "a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.084182 5039 generic.go:334] "Generic (PLEG): container finished" podID="2081f65c-c5b5-4486-bdb3-49acf4f9ae46" containerID="b8cc807d266e20c9a223ef3cd6da5c84789370a7b8ae7a8b58a98bf4f2033c9c" exitCode=0 Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.084219 5039 generic.go:334] "Generic (PLEG): container finished" podID="2081f65c-c5b5-4486-bdb3-49acf4f9ae46" containerID="bdbe03e58233ea3203b5cdcc7425ccca349ed21cb2718b0262b974919bb7bff9" exitCode=143 Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.084337 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-58897c98f4-8gk2m" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.084750 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-58897c98f4-8gk2m" event={"ID":"2081f65c-c5b5-4486-bdb3-49acf4f9ae46","Type":"ContainerDied","Data":"b8cc807d266e20c9a223ef3cd6da5c84789370a7b8ae7a8b58a98bf4f2033c9c"} Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.084809 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-58897c98f4-8gk2m" event={"ID":"2081f65c-c5b5-4486-bdb3-49acf4f9ae46","Type":"ContainerDied","Data":"bdbe03e58233ea3203b5cdcc7425ccca349ed21cb2718b0262b974919bb7bff9"} Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.084829 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-58897c98f4-8gk2m" event={"ID":"2081f65c-c5b5-4486-bdb3-49acf4f9ae46","Type":"ContainerDied","Data":"a29f6ea9bd7977d8b70d64e9d426eab9ebe7d5ef4cfd719a9169adb5452882d1"} Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.091302 5039 scope.go:117] "RemoveContainer" containerID="4a75aaf8ae30feba231405992fcbc38c506ed8999f2c135d64d71b1e43a1b981" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.105088 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aaf62f63-8fea-4671-8a36-21ca1d4fbc37-utilities\") pod \"aaf62f63-8fea-4671-8a36-21ca1d4fbc37\" (UID: \"aaf62f63-8fea-4671-8a36-21ca1d4fbc37\") " Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.105129 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2081f65c-c5b5-4486-bdb3-49acf4f9ae46-config-data\") pod \"2081f65c-c5b5-4486-bdb3-49acf4f9ae46\" (UID: \"2081f65c-c5b5-4486-bdb3-49acf4f9ae46\") " Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.105148 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2081f65c-c5b5-4486-bdb3-49acf4f9ae46-combined-ca-bundle\") pod \"2081f65c-c5b5-4486-bdb3-49acf4f9ae46\" (UID: \"2081f65c-c5b5-4486-bdb3-49acf4f9ae46\") " Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.105473 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2885x\" (UniqueName: \"kubernetes.io/projected/aaf62f63-8fea-4671-8a36-21ca1d4fbc37-kube-api-access-2885x\") pod \"aaf62f63-8fea-4671-8a36-21ca1d4fbc37\" (UID: \"aaf62f63-8fea-4671-8a36-21ca1d4fbc37\") " Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.105559 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqrc7\" (UniqueName: \"kubernetes.io/projected/2081f65c-c5b5-4486-bdb3-49acf4f9ae46-kube-api-access-cqrc7\") pod \"2081f65c-c5b5-4486-bdb3-49acf4f9ae46\" (UID: \"2081f65c-c5b5-4486-bdb3-49acf4f9ae46\") " Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.105582 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2081f65c-c5b5-4486-bdb3-49acf4f9ae46-logs\") pod \"2081f65c-c5b5-4486-bdb3-49acf4f9ae46\" (UID: \"2081f65c-c5b5-4486-bdb3-49acf4f9ae46\") " Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.105610 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/2081f65c-c5b5-4486-bdb3-49acf4f9ae46-config-data-custom\") pod \"2081f65c-c5b5-4486-bdb3-49acf4f9ae46\" (UID: \"2081f65c-c5b5-4486-bdb3-49acf4f9ae46\") " Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.105642 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aaf62f63-8fea-4671-8a36-21ca1d4fbc37-catalog-content\") pod \"aaf62f63-8fea-4671-8a36-21ca1d4fbc37\" (UID: \"aaf62f63-8fea-4671-8a36-21ca1d4fbc37\") " Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.105943 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8glz\" (UniqueName: \"kubernetes.io/projected/a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22-kube-api-access-x8glz\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.105977 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.108445 5039 generic.go:334] "Generic (PLEG): container finished" podID="2125aae4-cb1a-4329-ba0a-68cc3661427b" containerID="20774dc7b8e4c0dc174586131c171b6d7af1959fda8becdffd9b6c9f4c9f2acb" exitCode=143 Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.108540 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d68bccdc4-krd48" event={"ID":"2125aae4-cb1a-4329-ba0a-68cc3661427b","Type":"ContainerDied","Data":"20774dc7b8e4c0dc174586131c171b6d7af1959fda8becdffd9b6c9f4c9f2acb"} Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.109109 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22" (UID: "a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22"). InnerVolumeSpecName "vencrypt-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.159295 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aaf62f63-8fea-4671-8a36-21ca1d4fbc37-utilities" (OuterVolumeSpecName: "utilities") pod "aaf62f63-8fea-4671-8a36-21ca1d4fbc37" (UID: "aaf62f63-8fea-4671-8a36-21ca1d4fbc37"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.159450 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2081f65c-c5b5-4486-bdb3-49acf4f9ae46-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2081f65c-c5b5-4486-bdb3-49acf4f9ae46" (UID: "2081f65c-c5b5-4486-bdb3-49acf4f9ae46"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.159578 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22" (UID: "a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.159734 5039 scope.go:117] "RemoveContainer" containerID="73992dc376899a4ce7d89189a450ce8eda00367cf2dc729e0d07d2f986e8c831" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.167335 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22" (UID: "a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22"). InnerVolumeSpecName "nova-novncproxy-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.174138 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2081f65c-c5b5-4486-bdb3-49acf4f9ae46-logs" (OuterVolumeSpecName: "logs") pod "2081f65c-c5b5-4486-bdb3-49acf4f9ae46" (UID: "2081f65c-c5b5-4486-bdb3-49acf4f9ae46"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.174905 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.174762 5039 generic.go:334] "Generic (PLEG): container finished" podID="a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22" containerID="e70715356317daab9e16b76bf1e62776721c504096ef71db981c1eb98acb8ef8" exitCode=0 Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.177220 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22","Type":"ContainerDied","Data":"e70715356317daab9e16b76bf1e62776721c504096ef71db981c1eb98acb8ef8"} Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.177248 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22","Type":"ContainerDied","Data":"c8546343d44020f12aa855ac05ab8a9543bb3d9f88991b1f497d0bbf8b9309dc"} Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.177720 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aaf62f63-8fea-4671-8a36-21ca1d4fbc37-kube-api-access-2885x" (OuterVolumeSpecName: "kube-api-access-2885x") pod "aaf62f63-8fea-4671-8a36-21ca1d4fbc37" (UID: "aaf62f63-8fea-4671-8a36-21ca1d4fbc37"). InnerVolumeSpecName "kube-api-access-2885x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.178499 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2081f65c-c5b5-4486-bdb3-49acf4f9ae46-kube-api-access-cqrc7" (OuterVolumeSpecName: "kube-api-access-cqrc7") pod "2081f65c-c5b5-4486-bdb3-49acf4f9ae46" (UID: "2081f65c-c5b5-4486-bdb3-49acf4f9ae46"). InnerVolumeSpecName "kube-api-access-cqrc7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.183110 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.194825 5039 generic.go:334] "Generic (PLEG): container finished" podID="aaf62f63-8fea-4671-8a36-21ca1d4fbc37" containerID="46f5e847cf0740cecaf800a2f64157f64b7846af9869032f1313947adca280c5" exitCode=0 Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.194997 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r4p7m" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.195653 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r4p7m" event={"ID":"aaf62f63-8fea-4671-8a36-21ca1d4fbc37","Type":"ContainerDied","Data":"46f5e847cf0740cecaf800a2f64157f64b7846af9869032f1313947adca280c5"} Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.195689 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r4p7m" event={"ID":"aaf62f63-8fea-4671-8a36-21ca1d4fbc37","Type":"ContainerDied","Data":"04e17ffc019138be17500261beb1e8e91ab8a584a535c22c57cb0fca04b081b0"} Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.203113 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5666-account-create-update-zr44j" event={"ID":"9c8f6794-a2c1-4d54-a048-71db0a14213e","Type":"ContainerStarted","Data":"51f62d64c11b2f8e97e81e05d2c7367910468d8f8b8206ae9ad4cf991e1bb34e"} Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.211725 5039 scope.go:117] "RemoveContainer" containerID="5ff92e6092248fd570ac7f11757434ceaf09f5d1da5a640571b0aff347c54242" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.213506 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-config-data-generated\") pod \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\" (UID: \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\") " Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.213569 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8lh9\" (UniqueName: \"kubernetes.io/projected/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-kube-api-access-n8lh9\") pod \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\" (UID: \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\") " Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.213705 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-galera-tls-certs\") pod \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\" (UID: \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\") " Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.213755 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-combined-ca-bundle\") pod \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\" (UID: \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\") " Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.213805 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\" (UID: \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\") " Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.214069 5039 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-operator-scripts\") pod \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\" (UID: \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\") " Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.214149 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-config-data-default\") pod \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\" (UID: \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\") " Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.215033 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-kolla-config\") pod \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\" (UID: \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\") " Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.217928 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9c2f32a2-792f-4f23-b2a5-fd50a1e1373a" (UID: "9c2f32a2-792f-4f23-b2a5-fd50a1e1373a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.218907 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "9c2f32a2-792f-4f23-b2a5-fd50a1e1373a" (UID: "9c2f32a2-792f-4f23-b2a5-fd50a1e1373a"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.219176 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "9c2f32a2-792f-4f23-b2a5-fd50a1e1373a" (UID: "9c2f32a2-792f-4f23-b2a5-fd50a1e1373a"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.219258 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-757b86cf47-brmgg"] Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.219307 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "9c2f32a2-792f-4f23-b2a5-fd50a1e1373a" (UID: "9c2f32a2-792f-4f23-b2a5-fd50a1e1373a"). InnerVolumeSpecName "config-data-generated". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.221478 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-757b86cf47-brmgg" podUID="157fc077-2a87-4a57-b9a1-728b9acba2a1" containerName="proxy-httpd" containerID="cri-o://84d19c63702524f48c72032f314689ed3ffad0e9b5241a6bf0ee9148cae27b33" gracePeriod=30 Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.221602 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-757b86cf47-brmgg" podUID="157fc077-2a87-4a57-b9a1-728b9acba2a1" containerName="proxy-server" containerID="cri-o://094a807571387ff4805693309488834e6f3f5cad2c362f2ee53edc66d902cec6" gracePeriod=30 Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.221791 5039 generic.go:334] "Generic (PLEG): container finished" podID="bc1469b7-cba0-47a5-b2cb-02e374f749da" containerID="a89bb4f19be7f7518ba29b131abd27b114102b0ebb9ed30752ce73702acdfcf2" exitCode=0 Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.221876 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75df786d6f-7k65j" event={"ID":"bc1469b7-cba0-47a5-b2cb-02e374f749da","Type":"ContainerDied","Data":"a89bb4f19be7f7518ba29b131abd27b114102b0ebb9ed30752ce73702acdfcf2"} Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.223520 5039 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.228778 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2885x\" (UniqueName: \"kubernetes.io/projected/aaf62f63-8fea-4671-8a36-21ca1d4fbc37-kube-api-access-2885x\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.228802 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.228812 5039 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.228827 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqrc7\" (UniqueName: \"kubernetes.io/projected/2081f65c-c5b5-4486-bdb3-49acf4f9ae46-kube-api-access-cqrc7\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.228836 5039 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2081f65c-c5b5-4486-bdb3-49acf4f9ae46-logs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.228846 5039 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2081f65c-c5b5-4486-bdb3-49acf4f9ae46-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.228856 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aaf62f63-8fea-4671-8a36-21ca1d4fbc37-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:15 crc 
kubenswrapper[5039]: I0130 13:28:15.234169 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2081f65c-c5b5-4486-bdb3-49acf4f9ae46-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2081f65c-c5b5-4486-bdb3-49acf4f9ae46" (UID: "2081f65c-c5b5-4486-bdb3-49acf4f9ae46"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.236271 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-kube-api-access-n8lh9" (OuterVolumeSpecName: "kube-api-access-n8lh9") pod "9c2f32a2-792f-4f23-b2a5-fd50a1e1373a" (UID: "9c2f32a2-792f-4f23-b2a5-fd50a1e1373a"). InnerVolumeSpecName "kube-api-access-n8lh9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.261458 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.283377 5039 generic.go:334] "Generic (PLEG): container finished" podID="9c2f32a2-792f-4f23-b2a5-fd50a1e1373a" containerID="d3e1de70ee6fccf94c178c436b16b841fb062895d65d5c25af3308a7fa503673" exitCode=0 Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.283791 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.289582 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a","Type":"ContainerDied","Data":"d3e1de70ee6fccf94c178c436b16b841fb062895d65d5c25af3308a7fa503673"} Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.292208 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9c2f32a2-792f-4f23-b2a5-fd50a1e1373a" (UID: "9c2f32a2-792f-4f23-b2a5-fd50a1e1373a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.292515 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "mysql-db") pod "9c2f32a2-792f-4f23-b2a5-fd50a1e1373a" (UID: "9c2f32a2-792f-4f23-b2a5-fd50a1e1373a"). InnerVolumeSpecName "local-storage08-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 13:28:15 crc kubenswrapper[5039]: E0130 13:28:15.300029 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="edeb03fc7b1f7c78ab64ce18b567934eb7d265834e26ab22d317bef24cbcb1e7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 13:28:15 crc kubenswrapper[5039]: E0130 13:28:15.305102 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="edeb03fc7b1f7c78ab64ce18b567934eb7d265834e26ab22d317bef24cbcb1e7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.305690 5039 generic.go:334] "Generic (PLEG): container finished" podID="953eeac5-b943-4036-be33-58eb347c04ef" containerID="1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8" exitCode=0 Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.305913 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-z6nkm" event={"ID":"953eeac5-b943-4036-be33-58eb347c04ef","Type":"ContainerDied","Data":"1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8"} Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.306377 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2081f65c-c5b5-4486-bdb3-49acf4f9ae46-config-data" (OuterVolumeSpecName: "config-data") pod "2081f65c-c5b5-4486-bdb3-49acf4f9ae46" (UID: "2081f65c-c5b5-4486-bdb3-49acf4f9ae46"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:15 crc kubenswrapper[5039]: E0130 13:28:15.306495 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="edeb03fc7b1f7c78ab64ce18b567934eb7d265834e26ab22d317bef24cbcb1e7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 13:28:15 crc kubenswrapper[5039]: E0130 13:28:15.306560 5039 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="266dbee0-3c74-4820-8165-1955c6ca832a" containerName="nova-scheduler-scheduler" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.315658 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "9c2f32a2-792f-4f23-b2a5-fd50a1e1373a" (UID: "9c2f32a2-792f-4f23-b2a5-fd50a1e1373a"). InnerVolumeSpecName "galera-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.342230 5039 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.342256 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2081f65c-c5b5-4486-bdb3-49acf4f9ae46-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.342265 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2081f65c-c5b5-4486-bdb3-49acf4f9ae46-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.342273 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.342282 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.342303 5039 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.342313 5039 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.342321 5039 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.342329 5039 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.342338 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8lh9\" (UniqueName: \"kubernetes.io/projected/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-kube-api-access-n8lh9\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.345260 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.347043 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_bc1a05aa-7803-43a1-9525-fd135af4323a/ovsdbserver-nb/0.log" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.347116 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"bc1a05aa-7803-43a1-9525-fd135af4323a","Type":"ContainerDied","Data":"414bac68c45351f838e0a511be6c7599d1e6e148cb6534c66df26f8dabdc82e1"} Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.347200 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.353227 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-b755c4586-qglmf"] Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.353252 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="c29afae4-9445-4472-b93b-5a111a886b9a" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.162:8776/healthcheck\": read tcp 10.217.0.2:43680->10.217.0.162:8776: read: connection reset by peer" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.354639 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="31674257-f143-40ab-97b9-dbf3153277c3" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.100:5671: connect: connection refused" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.358812 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-84b866898f-5xs7l"] Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.371302 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-t2n6t"] Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.382864 5039 generic.go:334] "Generic (PLEG): container finished" podID="8ada089a-5096-4658-829e-46ed96867c7e" containerID="f2d984c92bde9d5613eeb38621a8af92136193a55538f05717915d1bde3264df" exitCode=0 Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.382957 5039 generic.go:334] "Generic (PLEG): container finished" podID="8ada089a-5096-4658-829e-46ed96867c7e" containerID="154eaf7906ffca8c1b0afe8de8ea1d908782a67ddbbd3939ea4855866e582d9e" exitCode=0 Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.383023 5039 generic.go:334] "Generic (PLEG): container finished" podID="8ada089a-5096-4658-829e-46ed96867c7e" containerID="29f3a517359c4166dbc7caad96c4a4e2cb91f850e2c881a59372b19e9eedcf08" exitCode=0 Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.383181 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ada089a-5096-4658-829e-46ed96867c7e","Type":"ContainerDied","Data":"f2d984c92bde9d5613eeb38621a8af92136193a55538f05717915d1bde3264df"} Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.383260 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ada089a-5096-4658-829e-46ed96867c7e","Type":"ContainerDied","Data":"154eaf7906ffca8c1b0afe8de8ea1d908782a67ddbbd3939ea4855866e582d9e"} Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.383314 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ada089a-5096-4658-829e-46ed96867c7e","Type":"ContainerDied","Data":"29f3a517359c4166dbc7caad96c4a4e2cb91f850e2c881a59372b19e9eedcf08"} Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.406501 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-t2n6t"] Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.411277 5039 generic.go:334] "Generic (PLEG): container finished" podID="48be0b7f-4cb1-4c00-851a-7078ed9ccab0" containerID="999630fe82687672ff916af3c657da39f3cbb4c167e3ae06b0d1c3d7c3e75615" exitCode=143 Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.411320 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7df987bf59-vgqrf" 
event={"ID":"48be0b7f-4cb1-4c00-851a-7078ed9ccab0","Type":"ContainerDied","Data":"999630fe82687672ff916af3c657da39f3cbb4c167e3ae06b0d1c3d7c3e75615"} Jan 30 13:28:15 crc kubenswrapper[5039]: W0130 13:28:15.429089 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71c58c2f_0d3f_4008_8fdd_fcc50307cc31.slice/crio-bfd561d3d0569d36bf638f49e4c6d24b83366270a0a0532efb928a6fbfcc7e59 WatchSource:0}: Error finding container bfd561d3d0569d36bf638f49e4c6d24b83366270a0a0532efb928a6fbfcc7e59: Status 404 returned error can't find the container with id bfd561d3d0569d36bf638f49e4c6d24b83366270a0a0532efb928a6fbfcc7e59 Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.434977 5039 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.437435 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-t7hh5"] Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.445156 5039 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.456407 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aaf62f63-8fea-4671-8a36-21ca1d4fbc37-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aaf62f63-8fea-4671-8a36-21ca1d4fbc37" (UID: "aaf62f63-8fea-4671-8a36-21ca1d4fbc37"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.458940 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-metrics-t7hh5"] Jan 30 13:28:15 crc kubenswrapper[5039]: E0130 13:28:15.463225 5039 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 30 13:28:15 crc kubenswrapper[5039]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 30 13:28:15 crc kubenswrapper[5039]: Jan 30 13:28:15 crc kubenswrapper[5039]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 30 13:28:15 crc kubenswrapper[5039]: Jan 30 13:28:15 crc kubenswrapper[5039]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 30 13:28:15 crc kubenswrapper[5039]: Jan 30 13:28:15 crc kubenswrapper[5039]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 30 13:28:15 crc kubenswrapper[5039]: Jan 30 13:28:15 crc kubenswrapper[5039]: if [ -n "glance" ]; then Jan 30 13:28:15 crc kubenswrapper[5039]: GRANT_DATABASE="glance" Jan 30 13:28:15 crc kubenswrapper[5039]: else Jan 30 13:28:15 crc kubenswrapper[5039]: GRANT_DATABASE="*" Jan 30 13:28:15 crc kubenswrapper[5039]: fi Jan 30 13:28:15 crc kubenswrapper[5039]: Jan 30 13:28:15 crc kubenswrapper[5039]: # going for maximum compatibility here: Jan 30 13:28:15 crc kubenswrapper[5039]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 30 13:28:15 crc kubenswrapper[5039]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 30 13:28:15 crc kubenswrapper[5039]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 30 13:28:15 crc kubenswrapper[5039]: # support updates Jan 30 13:28:15 crc kubenswrapper[5039]: Jan 30 13:28:15 crc kubenswrapper[5039]: $MYSQL_CMD < logger="UnhandledError" Jan 30 13:28:15 crc kubenswrapper[5039]: E0130 13:28:15.463249 5039 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 30 13:28:15 crc kubenswrapper[5039]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 30 13:28:15 crc kubenswrapper[5039]: Jan 30 13:28:15 crc kubenswrapper[5039]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 30 13:28:15 crc kubenswrapper[5039]: Jan 30 13:28:15 crc kubenswrapper[5039]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 30 13:28:15 crc kubenswrapper[5039]: Jan 30 13:28:15 crc kubenswrapper[5039]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 30 13:28:15 crc kubenswrapper[5039]: Jan 30 13:28:15 crc kubenswrapper[5039]: if [ -n "neutron" ]; then Jan 30 13:28:15 crc kubenswrapper[5039]: GRANT_DATABASE="neutron" Jan 30 13:28:15 crc kubenswrapper[5039]: else Jan 30 13:28:15 crc kubenswrapper[5039]: GRANT_DATABASE="*" Jan 30 13:28:15 crc kubenswrapper[5039]: fi Jan 30 13:28:15 crc kubenswrapper[5039]: Jan 30 13:28:15 crc kubenswrapper[5039]: # going for maximum compatibility here: Jan 30 13:28:15 crc kubenswrapper[5039]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 30 13:28:15 crc kubenswrapper[5039]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 30 13:28:15 crc kubenswrapper[5039]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 30 13:28:15 crc kubenswrapper[5039]: # support updates Jan 30 13:28:15 crc kubenswrapper[5039]: Jan 30 13:28:15 crc kubenswrapper[5039]: $MYSQL_CMD < logger="UnhandledError" Jan 30 13:28:15 crc kubenswrapper[5039]: E0130 13:28:15.463623 5039 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 30 13:28:15 crc kubenswrapper[5039]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 30 13:28:15 crc kubenswrapper[5039]: Jan 30 13:28:15 crc kubenswrapper[5039]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 30 13:28:15 crc kubenswrapper[5039]: Jan 30 13:28:15 crc kubenswrapper[5039]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 30 13:28:15 crc kubenswrapper[5039]: Jan 30 13:28:15 crc kubenswrapper[5039]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 30 13:28:15 crc kubenswrapper[5039]: Jan 30 13:28:15 crc kubenswrapper[5039]: if [ -n "nova_api" ]; then Jan 30 13:28:15 crc kubenswrapper[5039]: GRANT_DATABASE="nova_api" Jan 30 13:28:15 crc kubenswrapper[5039]: else Jan 30 13:28:15 crc kubenswrapper[5039]: GRANT_DATABASE="*" Jan 30 13:28:15 crc kubenswrapper[5039]: fi Jan 30 13:28:15 crc kubenswrapper[5039]: Jan 30 13:28:15 crc kubenswrapper[5039]: # going for maximum compatibility here: Jan 30 13:28:15 crc kubenswrapper[5039]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 30 13:28:15 crc kubenswrapper[5039]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 30 13:28:15 crc kubenswrapper[5039]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 30 13:28:15 crc kubenswrapper[5039]: # support updates Jan 30 13:28:15 crc kubenswrapper[5039]: Jan 30 13:28:15 crc kubenswrapper[5039]: $MYSQL_CMD < logger="UnhandledError" Jan 30 13:28:15 crc kubenswrapper[5039]: E0130 13:28:15.464296 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"glance-db-secret\\\" not found\"" pod="openstack/glance-286b-account-create-update-dm7tt" podUID="71c58c2f-0d3f-4008-8fdd-fcc50307cc31" Jan 30 13:28:15 crc kubenswrapper[5039]: E0130 13:28:15.464313 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"neutron-db-secret\\\" not found\"" pod="openstack/neutron-fae2-account-create-update-hhbtz" podUID="a8ed9c2d-3b4a-4202-a2aa-f2e59de5b294" Jan 30 13:28:15 crc kubenswrapper[5039]: E0130 13:28:15.464842 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-api-db-secret\\\" not found\"" pod="openstack/nova-api-4e5c-account-create-update-q94vs" podUID="f26bcd91-af44-4f1f-afca-6db6c3fe5362" Jan 30 13:28:15 crc kubenswrapper[5039]: E0130 13:28:15.469583 5039 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 30 13:28:15 crc kubenswrapper[5039]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 30 13:28:15 crc kubenswrapper[5039]: Jan 30 13:28:15 crc kubenswrapper[5039]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 30 13:28:15 crc kubenswrapper[5039]: Jan 30 13:28:15 crc kubenswrapper[5039]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 30 13:28:15 crc kubenswrapper[5039]: Jan 30 13:28:15 crc kubenswrapper[5039]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 30 13:28:15 crc kubenswrapper[5039]: Jan 30 13:28:15 crc kubenswrapper[5039]: if [ -n "barbican" ]; then Jan 30 13:28:15 crc kubenswrapper[5039]: GRANT_DATABASE="barbican" Jan 30 13:28:15 crc kubenswrapper[5039]: else Jan 30 13:28:15 crc kubenswrapper[5039]: GRANT_DATABASE="*" Jan 30 13:28:15 crc kubenswrapper[5039]: fi Jan 30 13:28:15 crc kubenswrapper[5039]: Jan 30 13:28:15 crc kubenswrapper[5039]: # going for maximum compatibility here: Jan 30 13:28:15 crc kubenswrapper[5039]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 30 13:28:15 crc kubenswrapper[5039]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 30 13:28:15 crc kubenswrapper[5039]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 30 13:28:15 crc kubenswrapper[5039]: # support updates Jan 30 13:28:15 crc kubenswrapper[5039]: Jan 30 13:28:15 crc kubenswrapper[5039]: $MYSQL_CMD < logger="UnhandledError" Jan 30 13:28:15 crc kubenswrapper[5039]: E0130 13:28:15.472996 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"barbican-db-secret\\\" not found\"" pod="openstack/barbican-6646-account-create-update-rjc76" podUID="860591fe-67b6-4a2e-b8f1-29556c8ef320" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.493273 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7dc966f764-886wt"] Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.521516 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-6646-account-create-update-rjc76"] Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.549234 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aaf62f63-8fea-4671-8a36-21ca1d4fbc37-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:15 crc kubenswrapper[5039]: E0130 13:28:15.549281 5039 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 30 13:28:15 crc kubenswrapper[5039]: E0130 13:28:15.549679 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/31674257-f143-40ab-97b9-dbf3153277c3-config-data podName:31674257-f143-40ab-97b9-dbf3153277c3 nodeName:}" failed. No retries permitted until 2026-01-30 13:28:19.549654118 +0000 UTC m=+1464.210335365 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/31674257-f143-40ab-97b9-dbf3153277c3-config-data") pod "rabbitmq-server-0" (UID: "31674257-f143-40ab-97b9-dbf3153277c3") : configmap "rabbitmq-config-data" not found Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.574533 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-4e5c-account-create-update-q94vs"] Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.605580 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-286b-account-create-update-dm7tt"] Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.621970 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-fae2-account-create-update-hhbtz"] Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.628750 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-q9wmm"] Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.635719 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="106954f5-3ea7-4564-8479-407ef02320b7" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.101:5671: connect: connection refused" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.665530 5039 scope.go:117] "RemoveContainer" containerID="c834681d05c14e7ff690cbb1acfa640e617aaf24a5dbda9da270fdba7ac94fdb" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.823091 5039 scope.go:117] "RemoveContainer" containerID="116d072bb48e4b065b5de330f7fd6107bd5b783a4981e9f40677abb9caf3a0b9" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.992574 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5666-account-create-update-zr44j" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.008060 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-58897c98f4-8gk2m"] Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.030096 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-58897c98f4-8gk2m"] Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.044854 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.053205 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.065266 5039 scope.go:117] "RemoveContainer" containerID="b8cc807d266e20c9a223ef3cd6da5c84789370a7b8ae7a8b58a98bf4f2033c9c" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.074369 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c8f6794-a2c1-4d54-a048-71db0a14213e-operator-scripts\") pod \"9c8f6794-a2c1-4d54-a048-71db0a14213e\" (UID: \"9c8f6794-a2c1-4d54-a048-71db0a14213e\") " Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.075557 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfpxg\" (UniqueName: \"kubernetes.io/projected/9c8f6794-a2c1-4d54-a048-71db0a14213e-kube-api-access-dfpxg\") pod \"9c8f6794-a2c1-4d54-a048-71db0a14213e\" (UID: \"9c8f6794-a2c1-4d54-a048-71db0a14213e\") " Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.078892 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c8f6794-a2c1-4d54-a048-71db0a14213e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9c8f6794-a2c1-4d54-a048-71db0a14213e" (UID: "9c8f6794-a2c1-4d54-a048-71db0a14213e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.085058 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.092048 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c8f6794-a2c1-4d54-a048-71db0a14213e-kube-api-access-dfpxg" (OuterVolumeSpecName: "kube-api-access-dfpxg") pod "9c8f6794-a2c1-4d54-a048-71db0a14213e" (UID: "9c8f6794-a2c1-4d54-a048-71db0a14213e"). InnerVolumeSpecName "kube-api-access-dfpxg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.120329 5039 scope.go:117] "RemoveContainer" containerID="bdbe03e58233ea3203b5cdcc7425ccca349ed21cb2718b0262b974919bb7bff9" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.161999 5039 scope.go:117] "RemoveContainer" containerID="b8cc807d266e20c9a223ef3cd6da5c84789370a7b8ae7a8b58a98bf4f2033c9c" Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.162567 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8cc807d266e20c9a223ef3cd6da5c84789370a7b8ae7a8b58a98bf4f2033c9c\": container with ID starting with b8cc807d266e20c9a223ef3cd6da5c84789370a7b8ae7a8b58a98bf4f2033c9c not found: ID does not exist" containerID="b8cc807d266e20c9a223ef3cd6da5c84789370a7b8ae7a8b58a98bf4f2033c9c" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.162605 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8cc807d266e20c9a223ef3cd6da5c84789370a7b8ae7a8b58a98bf4f2033c9c"} err="failed to get container status \"b8cc807d266e20c9a223ef3cd6da5c84789370a7b8ae7a8b58a98bf4f2033c9c\": rpc error: code = NotFound desc = could not find container \"b8cc807d266e20c9a223ef3cd6da5c84789370a7b8ae7a8b58a98bf4f2033c9c\": container with ID starting with b8cc807d266e20c9a223ef3cd6da5c84789370a7b8ae7a8b58a98bf4f2033c9c not found: ID does not exist" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.162628 5039 scope.go:117] "RemoveContainer" containerID="bdbe03e58233ea3203b5cdcc7425ccca349ed21cb2718b0262b974919bb7bff9" Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.163314 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bdbe03e58233ea3203b5cdcc7425ccca349ed21cb2718b0262b974919bb7bff9\": container with ID starting with bdbe03e58233ea3203b5cdcc7425ccca349ed21cb2718b0262b974919bb7bff9 not found: ID does not exist" containerID="bdbe03e58233ea3203b5cdcc7425ccca349ed21cb2718b0262b974919bb7bff9" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.163332 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdbe03e58233ea3203b5cdcc7425ccca349ed21cb2718b0262b974919bb7bff9"} err="failed to get container status \"bdbe03e58233ea3203b5cdcc7425ccca349ed21cb2718b0262b974919bb7bff9\": rpc error: code = NotFound desc = could not find container \"bdbe03e58233ea3203b5cdcc7425ccca349ed21cb2718b0262b974919bb7bff9\": container with ID starting with bdbe03e58233ea3203b5cdcc7425ccca349ed21cb2718b0262b974919bb7bff9 not found: ID does not exist" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.163345 5039 scope.go:117] "RemoveContainer" containerID="b8cc807d266e20c9a223ef3cd6da5c84789370a7b8ae7a8b58a98bf4f2033c9c" Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.163340 5039 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 30 13:28:16 crc kubenswrapper[5039]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 30 13:28:16 crc kubenswrapper[5039]: Jan 30 13:28:16 crc kubenswrapper[5039]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 30 13:28:16 crc kubenswrapper[5039]: Jan 30 13:28:16 crc kubenswrapper[5039]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 30 
13:28:16 crc kubenswrapper[5039]: Jan 30 13:28:16 crc kubenswrapper[5039]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 30 13:28:16 crc kubenswrapper[5039]: Jan 30 13:28:16 crc kubenswrapper[5039]: if [ -n "cinder" ]; then Jan 30 13:28:16 crc kubenswrapper[5039]: GRANT_DATABASE="cinder" Jan 30 13:28:16 crc kubenswrapper[5039]: else Jan 30 13:28:16 crc kubenswrapper[5039]: GRANT_DATABASE="*" Jan 30 13:28:16 crc kubenswrapper[5039]: fi Jan 30 13:28:16 crc kubenswrapper[5039]: Jan 30 13:28:16 crc kubenswrapper[5039]: # going for maximum compatibility here: Jan 30 13:28:16 crc kubenswrapper[5039]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 30 13:28:16 crc kubenswrapper[5039]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 30 13:28:16 crc kubenswrapper[5039]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 30 13:28:16 crc kubenswrapper[5039]: # support updates Jan 30 13:28:16 crc kubenswrapper[5039]: Jan 30 13:28:16 crc kubenswrapper[5039]: $MYSQL_CMD < logger="UnhandledError" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.163726 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8cc807d266e20c9a223ef3cd6da5c84789370a7b8ae7a8b58a98bf4f2033c9c"} err="failed to get container status \"b8cc807d266e20c9a223ef3cd6da5c84789370a7b8ae7a8b58a98bf4f2033c9c\": rpc error: code = NotFound desc = could not find container \"b8cc807d266e20c9a223ef3cd6da5c84789370a7b8ae7a8b58a98bf4f2033c9c\": container with ID starting with b8cc807d266e20c9a223ef3cd6da5c84789370a7b8ae7a8b58a98bf4f2033c9c not found: ID does not exist" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.163773 5039 scope.go:117] "RemoveContainer" containerID="bdbe03e58233ea3203b5cdcc7425ccca349ed21cb2718b0262b974919bb7bff9" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.164400 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdbe03e58233ea3203b5cdcc7425ccca349ed21cb2718b0262b974919bb7bff9"} err="failed to get container status \"bdbe03e58233ea3203b5cdcc7425ccca349ed21cb2718b0262b974919bb7bff9\": rpc error: code = NotFound desc = could not find container \"bdbe03e58233ea3203b5cdcc7425ccca349ed21cb2718b0262b974919bb7bff9\": container with ID starting with bdbe03e58233ea3203b5cdcc7425ccca349ed21cb2718b0262b974919bb7bff9 not found: ID does not exist" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.164417 5039 scope.go:117] "RemoveContainer" containerID="e70715356317daab9e16b76bf1e62776721c504096ef71db981c1eb98acb8ef8" Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.164524 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"cinder-db-secret\\\" not found\"" pod="openstack/cinder-0596-account-create-update-2qxp2" podUID="bc51df5b-e54d-457e-af37-671db12ee0bd" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.165346 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2081f65c-c5b5-4486-bdb3-49acf4f9ae46" path="/var/lib/kubelet/pods/2081f65c-c5b5-4486-bdb3-49acf4f9ae46/volumes" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.166383 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="268ed38d-d02d-4539-be5c-f461fde5d02b" path="/var/lib/kubelet/pods/268ed38d-d02d-4539-be5c-f461fde5d02b/volumes" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.166928 5039 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f702130-7802-4f11-96ff-b51a7edf7740" path="/var/lib/kubelet/pods/3f702130-7802-4f11-96ff-b51a7edf7740/volumes" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.168420 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b85bd45-6f76-4ac8-8df6-cdbb93636b44" path="/var/lib/kubelet/pods/5b85bd45-6f76-4ac8-8df6-cdbb93636b44/volumes" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.169564 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22" path="/var/lib/kubelet/pods/a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22/volumes" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.170439 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4f02ddf-62c8-49b8-8e86-d6b87c61172b" path="/var/lib/kubelet/pods/a4f02ddf-62c8-49b8-8e86-d6b87c61172b/volumes" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.171891 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b33729af-9ada-4dd3-bc99-4444fbe1b3d8" path="/var/lib/kubelet/pods/b33729af-9ada-4dd3-bc99-4444fbe1b3d8/volumes" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.172866 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f66d95ec-ff37-4cc2-a076-e53cc7713582" path="/var/lib/kubelet/pods/f66d95ec-ff37-4cc2-a076-e53cc7713582/volumes" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.180218 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.180447 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.180515 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.180679 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.180769 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-0596-account-create-update-2qxp2"] Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.180880 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r4p7m"] Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.180956 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-r4p7m"] Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.180829 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-config-data\") pod \"c29afae4-9445-4472-b93b-5a111a886b9a\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.181384 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-public-tls-certs\") pod \"c29afae4-9445-4472-b93b-5a111a886b9a\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.181528 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c29afae4-9445-4472-b93b-5a111a886b9a-etc-machine-id\") pod 
\"c29afae4-9445-4472-b93b-5a111a886b9a\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.181850 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-internal-tls-certs\") pod \"c29afae4-9445-4472-b93b-5a111a886b9a\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.181941 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptj88\" (UniqueName: \"kubernetes.io/projected/c29afae4-9445-4472-b93b-5a111a886b9a-kube-api-access-ptj88\") pod \"c29afae4-9445-4472-b93b-5a111a886b9a\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.182112 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-combined-ca-bundle\") pod \"c29afae4-9445-4472-b93b-5a111a886b9a\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.182242 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c29afae4-9445-4472-b93b-5a111a886b9a-logs\") pod \"c29afae4-9445-4472-b93b-5a111a886b9a\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.182381 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-scripts\") pod \"c29afae4-9445-4472-b93b-5a111a886b9a\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.182516 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-config-data-custom\") pod \"c29afae4-9445-4472-b93b-5a111a886b9a\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.183245 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfpxg\" (UniqueName: \"kubernetes.io/projected/9c8f6794-a2c1-4d54-a048-71db0a14213e-kube-api-access-dfpxg\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.185163 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c8f6794-a2c1-4d54-a048-71db0a14213e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.186239 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c29afae4-9445-4472-b93b-5a111a886b9a-kube-api-access-ptj88" (OuterVolumeSpecName: "kube-api-access-ptj88") pod "c29afae4-9445-4472-b93b-5a111a886b9a" (UID: "c29afae4-9445-4472-b93b-5a111a886b9a"). InnerVolumeSpecName "kube-api-access-ptj88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.186300 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c29afae4-9445-4472-b93b-5a111a886b9a-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c29afae4-9445-4472-b93b-5a111a886b9a" (UID: "c29afae4-9445-4472-b93b-5a111a886b9a"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.198768 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c29afae4-9445-4472-b93b-5a111a886b9a-logs" (OuterVolumeSpecName: "logs") pod "c29afae4-9445-4472-b93b-5a111a886b9a" (UID: "c29afae4-9445-4472-b93b-5a111a886b9a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.204834 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-scripts" (OuterVolumeSpecName: "scripts") pod "c29afae4-9445-4472-b93b-5a111a886b9a" (UID: "c29afae4-9445-4472-b93b-5a111a886b9a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.207111 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c29afae4-9445-4472-b93b-5a111a886b9a" (UID: "c29afae4-9445-4472-b93b-5a111a886b9a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.210930 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8 is running failed: container process not found" containerID="1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.213813 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8 is running failed: container process not found" containerID="1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.214547 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.214772 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2f6644cf-01f6-44cf-95d6-3626f4fa57da" containerName="ceilometer-central-agent" containerID="cri-o://031ec639038378c5b3f539daaac07ec3e116c86eab5c397a4daa509a5370c453" gracePeriod=30 Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.214870 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2f6644cf-01f6-44cf-95d6-3626f4fa57da" containerName="proxy-httpd" 
containerID="cri-o://a73101ab09711a570267173488a9c5b6f2eeccafb5e3dc305c7de9c7690d9570" gracePeriod=30 Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.214902 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2f6644cf-01f6-44cf-95d6-3626f4fa57da" containerName="sg-core" containerID="cri-o://caf5b33ea1a3e30f796411e0c081ae3e8abc92fb4b810718314aafc7b901622e" gracePeriod=30 Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.214931 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2f6644cf-01f6-44cf-95d6-3626f4fa57da" containerName="ceilometer-notification-agent" containerID="cri-o://29878841c067a4c2e77d77c0c1e579cd21f99def5165c1d94a042435a87f2dd7" gracePeriod=30 Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.223846 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8 is running failed: container process not found" containerID="1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.223916 5039 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-z6nkm" podUID="953eeac5-b943-4036-be33-58eb347c04ef" containerName="ovsdb-server" Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.233921 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="664d5ee50096a705bfe00ba284ecf23de58063a3e74a3c5f1b12d176c74177c9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.246921 5039 scope.go:117] "RemoveContainer" containerID="e70715356317daab9e16b76bf1e62776721c504096ef71db981c1eb98acb8ef8" Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.255182 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e70715356317daab9e16b76bf1e62776721c504096ef71db981c1eb98acb8ef8\": container with ID starting with e70715356317daab9e16b76bf1e62776721c504096ef71db981c1eb98acb8ef8 not found: ID does not exist" containerID="e70715356317daab9e16b76bf1e62776721c504096ef71db981c1eb98acb8ef8" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.255221 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e70715356317daab9e16b76bf1e62776721c504096ef71db981c1eb98acb8ef8"} err="failed to get container status \"e70715356317daab9e16b76bf1e62776721c504096ef71db981c1eb98acb8ef8\": rpc error: code = NotFound desc = could not find container \"e70715356317daab9e16b76bf1e62776721c504096ef71db981c1eb98acb8ef8\": container with ID starting with e70715356317daab9e16b76bf1e62776721c504096ef71db981c1eb98acb8ef8 not found: ID does not exist" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.255246 5039 scope.go:117] "RemoveContainer" containerID="46f5e847cf0740cecaf800a2f64157f64b7846af9869032f1313947adca280c5" Jan 30 
13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.259500 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="664d5ee50096a705bfe00ba284ecf23de58063a3e74a3c5f1b12d176c74177c9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.282818 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="664d5ee50096a705bfe00ba284ecf23de58063a3e74a3c5f1b12d176c74177c9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.282868 5039 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-z6nkm" podUID="953eeac5-b943-4036-be33-58eb347c04ef" containerName="ovs-vswitchd" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.291211 5039 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c29afae4-9445-4472-b93b-5a111a886b9a-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.291239 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptj88\" (UniqueName: \"kubernetes.io/projected/c29afae4-9445-4472-b93b-5a111a886b9a-kube-api-access-ptj88\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.291248 5039 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c29afae4-9445-4472-b93b-5a111a886b9a-logs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.291256 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.291264 5039 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.318159 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.318403 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="f4f0006e-6034-4c12-a12e-f2d7767a77cb" containerName="kube-state-metrics" containerID="cri-o://cb976258e7161169831d5d8b357475bdf359afceac9694de1a48d3c8091e19de" gracePeriod=30 Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.446470 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.446740 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/memcached-0" podUID="c304bfee-961f-403c-a998-de879eedf9c9" containerName="memcached" containerID="cri-o://ac7be433e1fc4581e7c85dceffa68e2d11ac386c99f3b775ad7b9bfea986c120" gracePeriod=30 Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 
13:28:16.467548 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7dc966f764-886wt" event={"ID":"3db29a95-0ed6-4366-8036-388eea4d00b6","Type":"ContainerStarted","Data":"12f42853e550e82839e38760bfb6ad35f880aa90125efe3fcabf6d6b83cdc399"} Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.467580 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7dc966f764-886wt" event={"ID":"3db29a95-0ed6-4366-8036-388eea4d00b6","Type":"ContainerStarted","Data":"22d19fd19c4fbae481b8aa497c81ec911e059d516140cc0916d71ede4707f6ac"} Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.489744 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-e7d3-account-create-update-2tgv7"] Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.493816 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-fae2-account-create-update-hhbtz" event={"ID":"a8ed9c2d-3b4a-4202-a2aa-f2e59de5b294","Type":"ContainerStarted","Data":"5e6b7c1c23597685c30862172b2e0bfe79efb0b4e15c67f1e6cf3fe468124db4"} Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.504346 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-e7d3-account-create-update-2tgv7"] Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.510928 5039 generic.go:334] "Generic (PLEG): container finished" podID="157fc077-2a87-4a57-b9a1-728b9acba2a1" containerID="094a807571387ff4805693309488834e6f3f5cad2c362f2ee53edc66d902cec6" exitCode=0 Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.510955 5039 generic.go:334] "Generic (PLEG): container finished" podID="157fc077-2a87-4a57-b9a1-728b9acba2a1" containerID="84d19c63702524f48c72032f314689ed3ffad0e9b5241a6bf0ee9148cae27b33" exitCode=0 Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.511037 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-757b86cf47-brmgg" event={"ID":"157fc077-2a87-4a57-b9a1-728b9acba2a1","Type":"ContainerDied","Data":"094a807571387ff4805693309488834e6f3f5cad2c362f2ee53edc66d902cec6"} Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.511073 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-757b86cf47-brmgg" event={"ID":"157fc077-2a87-4a57-b9a1-728b9acba2a1","Type":"ContainerDied","Data":"84d19c63702524f48c72032f314689ed3ffad0e9b5241a6bf0ee9148cae27b33"} Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.513238 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5666-account-create-update-zr44j" event={"ID":"9c8f6794-a2c1-4d54-a048-71db0a14213e","Type":"ContainerDied","Data":"51f62d64c11b2f8e97e81e05d2c7367910468d8f8b8206ae9ad4cf991e1bb34e"} Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.513439 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5666-account-create-update-zr44j" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.553649 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-4e5c-account-create-update-q94vs" event={"ID":"f26bcd91-af44-4f1f-afca-6db6c3fe5362","Type":"ContainerStarted","Data":"b9e46d47fc7cb33743a3a7be7232ee18604f27320374e195e352b10f3c3c1239"} Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.556679 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-286b-account-create-update-dm7tt" event={"ID":"71c58c2f-0d3f-4008-8fdd-fcc50307cc31","Type":"ContainerStarted","Data":"bfd561d3d0569d36bf638f49e4c6d24b83366270a0a0532efb928a6fbfcc7e59"} Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.563844 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-b755c4586-qglmf" event={"ID":"749976f6-833a-4563-992a-f639cb1552e0","Type":"ContainerStarted","Data":"3020cc9e4acad53ed9c6f1145cd86d42ffb6ee4fe0b6bc05ad658ca921124eb4"} Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.563873 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-b755c4586-qglmf" event={"ID":"749976f6-833a-4563-992a-f639cb1552e0","Type":"ContainerStarted","Data":"ff576c7005d28c132146f8d7622e9c25699568a19d4a068a4347fcd5993b44d5"} Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.569139 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-e7d3-account-create-update-pslcx"] Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.569566 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc1a05aa-7803-43a1-9525-fd135af4323a" containerName="openstack-network-exporter" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.569583 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc1a05aa-7803-43a1-9525-fd135af4323a" containerName="openstack-network-exporter" Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.569604 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaf62f63-8fea-4671-8a36-21ca1d4fbc37" containerName="extract-content" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.569610 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaf62f63-8fea-4671-8a36-21ca1d4fbc37" containerName="extract-content" Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.569619 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaf62f63-8fea-4671-8a36-21ca1d4fbc37" containerName="registry-server" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.569625 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaf62f63-8fea-4671-8a36-21ca1d4fbc37" containerName="registry-server" Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.569633 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f702130-7802-4f11-96ff-b51a7edf7740" containerName="dnsmasq-dns" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.569639 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f702130-7802-4f11-96ff-b51a7edf7740" containerName="dnsmasq-dns" Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.569650 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4f02ddf-62c8-49b8-8e86-d6b87c61172b" containerName="openstack-network-exporter" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.569656 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4f02ddf-62c8-49b8-8e86-d6b87c61172b" 
containerName="openstack-network-exporter" Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.569667 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c2f32a2-792f-4f23-b2a5-fd50a1e1373a" containerName="mysql-bootstrap" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.569673 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c2f32a2-792f-4f23-b2a5-fd50a1e1373a" containerName="mysql-bootstrap" Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.569688 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaf62f63-8fea-4671-8a36-21ca1d4fbc37" containerName="extract-utilities" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.569694 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaf62f63-8fea-4671-8a36-21ca1d4fbc37" containerName="extract-utilities" Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.569704 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4f02ddf-62c8-49b8-8e86-d6b87c61172b" containerName="ovsdbserver-sb" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.569709 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4f02ddf-62c8-49b8-8e86-d6b87c61172b" containerName="ovsdbserver-sb" Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.569721 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2081f65c-c5b5-4486-bdb3-49acf4f9ae46" containerName="barbican-keystone-listener" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.569726 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="2081f65c-c5b5-4486-bdb3-49acf4f9ae46" containerName="barbican-keystone-listener" Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.569738 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2081f65c-c5b5-4486-bdb3-49acf4f9ae46" containerName="barbican-keystone-listener-log" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.569744 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="2081f65c-c5b5-4486-bdb3-49acf4f9ae46" containerName="barbican-keystone-listener-log" Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.569755 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c29afae4-9445-4472-b93b-5a111a886b9a" containerName="cinder-api-log" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.569760 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="c29afae4-9445-4472-b93b-5a111a886b9a" containerName="cinder-api-log" Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.569767 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c2f32a2-792f-4f23-b2a5-fd50a1e1373a" containerName="galera" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.569772 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c2f32a2-792f-4f23-b2a5-fd50a1e1373a" containerName="galera" Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.569785 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f702130-7802-4f11-96ff-b51a7edf7740" containerName="init" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.569790 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f702130-7802-4f11-96ff-b51a7edf7740" containerName="init" Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.569798 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc1a05aa-7803-43a1-9525-fd135af4323a" containerName="ovsdbserver-nb" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.569805 5039 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="bc1a05aa-7803-43a1-9525-fd135af4323a" containerName="ovsdbserver-nb" Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.569814 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f66d95ec-ff37-4cc2-a076-e53cc7713582" containerName="openstack-network-exporter" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.569820 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="f66d95ec-ff37-4cc2-a076-e53cc7713582" containerName="openstack-network-exporter" Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.569832 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c29afae4-9445-4472-b93b-5a111a886b9a" containerName="cinder-api" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.569837 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="c29afae4-9445-4472-b93b-5a111a886b9a" containerName="cinder-api" Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.569847 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.569853 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.570030 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4f02ddf-62c8-49b8-8e86-d6b87c61172b" containerName="ovsdbserver-sb" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.570042 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="2081f65c-c5b5-4486-bdb3-49acf4f9ae46" containerName="barbican-keystone-listener" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.570050 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc1a05aa-7803-43a1-9525-fd135af4323a" containerName="openstack-network-exporter" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.570058 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f702130-7802-4f11-96ff-b51a7edf7740" containerName="dnsmasq-dns" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.570071 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc1a05aa-7803-43a1-9525-fd135af4323a" containerName="ovsdbserver-nb" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.570079 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="aaf62f63-8fea-4671-8a36-21ca1d4fbc37" containerName="registry-server" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.570106 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="c29afae4-9445-4472-b93b-5a111a886b9a" containerName="cinder-api" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.570119 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="f66d95ec-ff37-4cc2-a076-e53cc7713582" containerName="openstack-network-exporter" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.570128 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c2f32a2-792f-4f23-b2a5-fd50a1e1373a" containerName="galera" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.570137 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4f02ddf-62c8-49b8-8e86-d6b87c61172b" containerName="openstack-network-exporter" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.570272 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="c29afae4-9445-4472-b93b-5a111a886b9a" 
containerName="cinder-api-log" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.570284 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.570294 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="2081f65c-c5b5-4486-bdb3-49acf4f9ae46" containerName="barbican-keystone-listener-log" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.572270 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e7d3-account-create-update-pslcx" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.576955 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-84b866898f-5xs7l" event={"ID":"fcd8c24d-b3db-41a0-ac70-d13cd3f2d663","Type":"ContainerStarted","Data":"1d442f2088c550f47ce279b79f9eda2a191a7cfb5fd4e8fd913099eb4e065b03"} Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.577135 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-84b866898f-5xs7l" event={"ID":"fcd8c24d-b3db-41a0-ac70-d13cd3f2d663","Type":"ContainerStarted","Data":"3f4d71f301631a43e021da03302a7c0831792fa18e92bc206ad16b4f64e076bf"} Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.579413 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-e7d3-account-create-update-pslcx"] Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.583233 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.590194 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-bf848"] Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.596385 5039 generic.go:334] "Generic (PLEG): container finished" podID="2f6644cf-01f6-44cf-95d6-3626f4fa57da" containerID="caf5b33ea1a3e30f796411e0c081ae3e8abc92fb4b810718314aafc7b901622e" exitCode=2 Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.596459 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f6644cf-01f6-44cf-95d6-3626f4fa57da","Type":"ContainerDied","Data":"caf5b33ea1a3e30f796411e0c081ae3e8abc92fb4b810718314aafc7b901622e"} Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.601695 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-rdj8j"] Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.610193 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-rdj8j"] Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.611034 5039 generic.go:334] "Generic (PLEG): container finished" podID="c29afae4-9445-4472-b93b-5a111a886b9a" containerID="46c7c1dd8a4c8df99e1dd7edf28c41b4137267eeafa3248a2c0d8c73a663531a" exitCode=0 Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.611084 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c29afae4-9445-4472-b93b-5a111a886b9a","Type":"ContainerDied","Data":"46c7c1dd8a4c8df99e1dd7edf28c41b4137267eeafa3248a2c0d8c73a663531a"} Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.611105 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c29afae4-9445-4472-b93b-5a111a886b9a","Type":"ContainerDied","Data":"690883ae8a994ffd96caf77a50054a169cab6a25a2f983c92bfa6a0937104bb5"} Jan 30 13:28:16 crc 
kubenswrapper[5039]: I0130 13:28:16.611177 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.614837 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-bf848"] Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.615595 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0596-account-create-update-2qxp2" event={"ID":"bc51df5b-e54d-457e-af37-671db12ee0bd","Type":"ContainerStarted","Data":"b9235364d719c0d7b11bf0eb72eff7f8465efb480c66dd3a5f2bb0f0add2e806"} Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.632211 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-6646-account-create-update-rjc76" event={"ID":"860591fe-67b6-4a2e-b8f1-29556c8ef320","Type":"ContainerStarted","Data":"75e52b821afdc151570bfa7f4e6beca939bfd3947cabe6d49f3e6588e89b25e9"} Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.634945 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.642422 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-7467d89c49-kfwss"] Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.642677 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-7467d89c49-kfwss" podUID="60ae3d16-d381-4891-901f-e2d07d3a7720" containerName="keystone-api" containerID="cri-o://fee4947e039be1852ec1750b666abb15bd505a2ddedb60f212da5d331a111150" gracePeriod=30 Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.649721 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33b02367-9855-4316-a76b-613d3b6f4946-operator-scripts\") pod \"keystone-e7d3-account-create-update-pslcx\" (UID: \"33b02367-9855-4316-a76b-613d3b6f4946\") " pod="openstack/keystone-e7d3-account-create-update-pslcx" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.649775 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh4d2\" (UniqueName: \"kubernetes.io/projected/33b02367-9855-4316-a76b-613d3b6f4946-kube-api-access-kh4d2\") pod \"keystone-e7d3-account-create-update-pslcx\" (UID: \"33b02367-9855-4316-a76b-613d3b6f4946\") " pod="openstack/keystone-e7d3-account-create-update-pslcx" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.652636 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-e7d3-account-create-update-pslcx"] Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.666449 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-frc4f"] Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.667818 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-frc4f"] Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.670593 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-q9wmm" event={"ID":"fc88f91b-e82d-4937-ad42-d94c3d464b55","Type":"ContainerStarted","Data":"c130ab6298f33377ec6fb5dd8075724653dd2f898c3e8e2cc6a650308e453105"} Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.686709 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="03ea6fff-3bc2-4830-b1f5-53d20cd2a801" 
containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.204:8775/\": read tcp 10.217.0.2:46756->10.217.0.204:8775: read: connection reset by peer" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.686709 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="03ea6fff-3bc2-4830-b1f5-53d20cd2a801" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.204:8775/\": read tcp 10.217.0.2:46744->10.217.0.204:8775: read: connection reset by peer" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.692168 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-q9wmm"] Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.705585 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c29afae4-9445-4472-b93b-5a111a886b9a" (UID: "c29afae4-9445-4472-b93b-5a111a886b9a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.751305 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33b02367-9855-4316-a76b-613d3b6f4946-operator-scripts\") pod \"keystone-e7d3-account-create-update-pslcx\" (UID: \"33b02367-9855-4316-a76b-613d3b6f4946\") " pod="openstack/keystone-e7d3-account-create-update-pslcx" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.751362 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kh4d2\" (UniqueName: \"kubernetes.io/projected/33b02367-9855-4316-a76b-613d3b6f4946-kube-api-access-kh4d2\") pod \"keystone-e7d3-account-create-update-pslcx\" (UID: \"33b02367-9855-4316-a76b-613d3b6f4946\") " pod="openstack/keystone-e7d3-account-create-update-pslcx" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.751505 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.751832 5039 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.751876 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/33b02367-9855-4316-a76b-613d3b6f4946-operator-scripts podName:33b02367-9855-4316-a76b-613d3b6f4946 nodeName:}" failed. No retries permitted until 2026-01-30 13:28:17.251863012 +0000 UTC m=+1461.912544229 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/33b02367-9855-4316-a76b-613d3b6f4946-operator-scripts") pod "keystone-e7d3-account-create-update-pslcx" (UID: "33b02367-9855-4316-a76b-613d3b6f4946") : configmap "openstack-scripts" not found Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.757385 5039 projected.go:194] Error preparing data for projected volume kube-api-access-kh4d2 for pod openstack/keystone-e7d3-account-create-update-pslcx: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.757461 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/33b02367-9855-4316-a76b-613d3b6f4946-kube-api-access-kh4d2 podName:33b02367-9855-4316-a76b-613d3b6f4946 nodeName:}" failed. No retries permitted until 2026-01-30 13:28:17.257441052 +0000 UTC m=+1461.918122279 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-kh4d2" (UniqueName: "kubernetes.io/projected/33b02367-9855-4316-a76b-613d3b6f4946-kube-api-access-kh4d2") pod "keystone-e7d3-account-create-update-pslcx" (UID: "33b02367-9855-4316-a76b-613d3b6f4946") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.770621 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-sqvrc" podUID="d4aa0600-fb12-4641-96a3-26cb56853bd3" containerName="ovn-controller" probeResult="failure" output=< Jan 30 13:28:16 crc kubenswrapper[5039]: ERROR - Failed to get connection status from ovn-controller, ovn-appctl exit status: 0 Jan 30 13:28:16 crc kubenswrapper[5039]: > Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.868260 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c29afae4-9445-4472-b93b-5a111a886b9a" (UID: "c29afae4-9445-4472-b93b-5a111a886b9a"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.877381 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c29afae4-9445-4472-b93b-5a111a886b9a" (UID: "c29afae4-9445-4472-b93b-5a111a886b9a"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.922965 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-config-data" (OuterVolumeSpecName: "config-data") pod "c29afae4-9445-4472-b93b-5a111a886b9a" (UID: "c29afae4-9445-4472-b93b-5a111a886b9a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.955870 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.955898 5039 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.956081 5039 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.956164 5039 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.956213 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/106954f5-3ea7-4564-8479-407ef02320b7-config-data podName:106954f5-3ea7-4564-8479-407ef02320b7 nodeName:}" failed. No retries permitted until 2026-01-30 13:28:20.956198186 +0000 UTC m=+1465.616879413 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/106954f5-3ea7-4564-8479-407ef02320b7-config-data") pod "rabbitmq-cell1-server-0" (UID: "106954f5-3ea7-4564-8479-407ef02320b7") : configmap "rabbitmq-cell1-config-data" not found Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.264923 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33b02367-9855-4316-a76b-613d3b6f4946-operator-scripts\") pod \"keystone-e7d3-account-create-update-pslcx\" (UID: \"33b02367-9855-4316-a76b-613d3b6f4946\") " pod="openstack/keystone-e7d3-account-create-update-pslcx" Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.264984 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kh4d2\" (UniqueName: \"kubernetes.io/projected/33b02367-9855-4316-a76b-613d3b6f4946-kube-api-access-kh4d2\") pod \"keystone-e7d3-account-create-update-pslcx\" (UID: \"33b02367-9855-4316-a76b-613d3b6f4946\") " pod="openstack/keystone-e7d3-account-create-update-pslcx" Jan 30 13:28:17 crc kubenswrapper[5039]: E0130 13:28:17.265386 5039 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 30 13:28:17 crc kubenswrapper[5039]: E0130 13:28:17.265433 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/33b02367-9855-4316-a76b-613d3b6f4946-operator-scripts podName:33b02367-9855-4316-a76b-613d3b6f4946 nodeName:}" failed. No retries permitted until 2026-01-30 13:28:18.265419389 +0000 UTC m=+1462.926100616 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/33b02367-9855-4316-a76b-613d3b6f4946-operator-scripts") pod "keystone-e7d3-account-create-update-pslcx" (UID: "33b02367-9855-4316-a76b-613d3b6f4946") : configmap "openstack-scripts" not found Jan 30 13:28:17 crc kubenswrapper[5039]: E0130 13:28:17.277450 5039 projected.go:194] Error preparing data for projected volume kube-api-access-kh4d2 for pod openstack/keystone-e7d3-account-create-update-pslcx: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 30 13:28:17 crc kubenswrapper[5039]: E0130 13:28:17.277753 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/33b02367-9855-4316-a76b-613d3b6f4946-kube-api-access-kh4d2 podName:33b02367-9855-4316-a76b-613d3b6f4946 nodeName:}" failed. No retries permitted until 2026-01-30 13:28:18.277732949 +0000 UTC m=+1462.938414176 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-kh4d2" (UniqueName: "kubernetes.io/projected/33b02367-9855-4316-a76b-613d3b6f4946-kube-api-access-kh4d2") pod "keystone-e7d3-account-create-update-pslcx" (UID: "33b02367-9855-4316-a76b-613d3b6f4946") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.317157 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="ffe59186-82c9-4825-98af-a345318afc40" containerName="galera" containerID="cri-o://318ec0d48627de3296e163bd9e901ae032d9e692981c9e7373ce827d836b847f" gracePeriod=30 Jan 30 13:28:17 crc kubenswrapper[5039]: E0130 13:28:17.407347 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="15bfff3ce4374ea438fd8412513de2bef71681376d184c1777dc610cbcab758f" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 13:28:17 crc kubenswrapper[5039]: E0130 13:28:17.423464 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="15bfff3ce4374ea438fd8412513de2bef71681376d184c1777dc610cbcab758f" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 13:28:17 crc kubenswrapper[5039]: E0130 13:28:17.433868 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="15bfff3ce4374ea438fd8412513de2bef71681376d184c1777dc610cbcab758f" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 13:28:17 crc kubenswrapper[5039]: E0130 13:28:17.433978 5039 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="4f7023ce-3b22-4301-8535-b51dae5ffc85" containerName="nova-cell0-conductor-conductor" Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.685264 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-286b-account-create-update-dm7tt" event={"ID":"71c58c2f-0d3f-4008-8fdd-fcc50307cc31","Type":"ContainerDied","Data":"bfd561d3d0569d36bf638f49e4c6d24b83366270a0a0532efb928a6fbfcc7e59"} Jan 30 13:28:17 crc 
kubenswrapper[5039]: I0130 13:28:17.685301 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfd561d3d0569d36bf638f49e4c6d24b83366270a0a0532efb928a6fbfcc7e59" Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.687197 5039 generic.go:334] "Generic (PLEG): container finished" podID="fc88f91b-e82d-4937-ad42-d94c3d464b55" containerID="b3d4dfe245ae57f1d9f0d67891d6512f23e27517be9a359a96e86d4a328d5ace" exitCode=1 Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.687304 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-q9wmm" event={"ID":"fc88f91b-e82d-4937-ad42-d94c3d464b55","Type":"ContainerDied","Data":"b3d4dfe245ae57f1d9f0d67891d6512f23e27517be9a359a96e86d4a328d5ace"} Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.710151 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-757b86cf47-brmgg" event={"ID":"157fc077-2a87-4a57-b9a1-728b9acba2a1","Type":"ContainerDied","Data":"1a2f3b92f7dbd05a8584f495ea2d9a54290b966f57c172d4802d9d992e87df0f"} Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.710194 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a2f3b92f7dbd05a8584f495ea2d9a54290b966f57c172d4802d9d992e87df0f" Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.717487 5039 generic.go:334] "Generic (PLEG): container finished" podID="2125aae4-cb1a-4329-ba0a-68cc3661427b" containerID="e15c323864de83a51ac376f7f5979fb834dbfcc5fa3c9479affae05a54142583" exitCode=0 Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.717587 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d68bccdc4-krd48" event={"ID":"2125aae4-cb1a-4329-ba0a-68cc3661427b","Type":"ContainerDied","Data":"e15c323864de83a51ac376f7f5979fb834dbfcc5fa3c9479affae05a54142583"} Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.725806 5039 generic.go:334] "Generic (PLEG): container finished" podID="c304bfee-961f-403c-a998-de879eedf9c9" containerID="ac7be433e1fc4581e7c85dceffa68e2d11ac386c99f3b775ad7b9bfea986c120" exitCode=0 Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.725911 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"c304bfee-961f-403c-a998-de879eedf9c9","Type":"ContainerDied","Data":"ac7be433e1fc4581e7c85dceffa68e2d11ac386c99f3b775ad7b9bfea986c120"} Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.730368 5039 generic.go:334] "Generic (PLEG): container finished" podID="75292c04-e484-4def-a16f-2d703409e49e" containerID="74a546f04020952f012eaaf8e2c1204925de78633cc29e8909d63b15b2d2fa22" exitCode=0 Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.730478 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"75292c04-e484-4def-a16f-2d703409e49e","Type":"ContainerDied","Data":"74a546f04020952f012eaaf8e2c1204925de78633cc29e8909d63b15b2d2fa22"} Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.732850 5039 generic.go:334] "Generic (PLEG): container finished" podID="f6a7de18-5bf6-4275-b6db-f19701d07001" containerID="257994bea3aa4d461d8ec0930db0b9b8b1ca22fbebd2eeed081b5830cad35d88" exitCode=0 Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.732912 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f6a7de18-5bf6-4275-b6db-f19701d07001","Type":"ContainerDied","Data":"257994bea3aa4d461d8ec0930db0b9b8b1ca22fbebd2eeed081b5830cad35d88"} Jan 30 
13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.736400 5039 generic.go:334] "Generic (PLEG): container finished" podID="498ddd50-96b8-491c-92e9-8c98bc7fa123" containerID="1da688d2a2bc28ab6de19b1657530aefb8ba12959416725f5817672407aec6f7" exitCode=0 Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.736428 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-68f47564b6-tbx7d" event={"ID":"498ddd50-96b8-491c-92e9-8c98bc7fa123","Type":"ContainerDied","Data":"1da688d2a2bc28ab6de19b1657530aefb8ba12959416725f5817672407aec6f7"} Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.736477 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-68f47564b6-tbx7d" event={"ID":"498ddd50-96b8-491c-92e9-8c98bc7fa123","Type":"ContainerDied","Data":"10a53e3c7d44e9145b49dbc3ca985fb0989041dae48cbf9bcfe1e23dd7c1fd43"} Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.736493 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10a53e3c7d44e9145b49dbc3ca985fb0989041dae48cbf9bcfe1e23dd7c1fd43" Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.738362 5039 generic.go:334] "Generic (PLEG): container finished" podID="4f7023ce-3b22-4301-8535-b51dae5ffc85" containerID="15bfff3ce4374ea438fd8412513de2bef71681376d184c1777dc610cbcab758f" exitCode=0 Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.738411 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"4f7023ce-3b22-4301-8535-b51dae5ffc85","Type":"ContainerDied","Data":"15bfff3ce4374ea438fd8412513de2bef71681376d184c1777dc610cbcab758f"} Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.739798 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-fae2-account-create-update-hhbtz" event={"ID":"a8ed9c2d-3b4a-4202-a2aa-f2e59de5b294","Type":"ContainerDied","Data":"5e6b7c1c23597685c30862172b2e0bfe79efb0b4e15c67f1e6cf3fe468124db4"} Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.739827 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e6b7c1c23597685c30862172b2e0bfe79efb0b4e15c67f1e6cf3fe468124db4" Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.745066 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-4e5c-account-create-update-q94vs" event={"ID":"f26bcd91-af44-4f1f-afca-6db6c3fe5362","Type":"ContainerDied","Data":"b9e46d47fc7cb33743a3a7be7232ee18604f27320374e195e352b10f3c3c1239"} Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.745123 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9e46d47fc7cb33743a3a7be7232ee18604f27320374e195e352b10f3c3c1239" Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.746751 5039 generic.go:334] "Generic (PLEG): container finished" podID="f4f0006e-6034-4c12-a12e-f2d7767a77cb" containerID="cb976258e7161169831d5d8b357475bdf359afceac9694de1a48d3c8091e19de" exitCode=2 Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.746834 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f4f0006e-6034-4c12-a12e-f2d7767a77cb","Type":"ContainerDied","Data":"cb976258e7161169831d5d8b357475bdf359afceac9694de1a48d3c8091e19de"} Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.746867 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"f4f0006e-6034-4c12-a12e-f2d7767a77cb","Type":"ContainerDied","Data":"e989d2b5a1fe11041f174a1b51fc6d351241adc3941972f823b605ba10c1de32"} Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.746883 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e989d2b5a1fe11041f174a1b51fc6d351241adc3941972f823b605ba10c1de32" Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.749278 5039 generic.go:334] "Generic (PLEG): container finished" podID="89cd9fbd-ac74-45c9-bdd8-fe3268a9147e" containerID="c86d1c6db2f7db93b58130cab22d63eb2bc4b467426977a92df6b81dc9e34ac1" exitCode=0 Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.749327 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e","Type":"ContainerDied","Data":"c86d1c6db2f7db93b58130cab22d63eb2bc4b467426977a92df6b81dc9e34ac1"} Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.750304 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0596-account-create-update-2qxp2" event={"ID":"bc51df5b-e54d-457e-af37-671db12ee0bd","Type":"ContainerDied","Data":"b9235364d719c0d7b11bf0eb72eff7f8465efb480c66dd3a5f2bb0f0add2e806"} Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.750326 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9235364d719c0d7b11bf0eb72eff7f8465efb480c66dd3a5f2bb0f0add2e806" Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.752734 5039 generic.go:334] "Generic (PLEG): container finished" podID="03ea6fff-3bc2-4830-b1f5-53d20cd2a801" containerID="ec276d758e8b1629fbc47814ca11f272acbab2233d4e31135f118cd217e481cf" exitCode=0 Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.752777 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"03ea6fff-3bc2-4830-b1f5-53d20cd2a801","Type":"ContainerDied","Data":"ec276d758e8b1629fbc47814ca11f272acbab2233d4e31135f118cd217e481cf"} Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.755367 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-6646-account-create-update-rjc76" event={"ID":"860591fe-67b6-4a2e-b8f1-29556c8ef320","Type":"ContainerDied","Data":"75e52b821afdc151570bfa7f4e6beca939bfd3947cabe6d49f3e6588e89b25e9"} Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.755413 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75e52b821afdc151570bfa7f4e6beca939bfd3947cabe6d49f3e6588e89b25e9" Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.758750 5039 generic.go:334] "Generic (PLEG): container finished" podID="2f6644cf-01f6-44cf-95d6-3626f4fa57da" containerID="a73101ab09711a570267173488a9c5b6f2eeccafb5e3dc305c7de9c7690d9570" exitCode=0 Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.758779 5039 generic.go:334] "Generic (PLEG): container finished" podID="2f6644cf-01f6-44cf-95d6-3626f4fa57da" containerID="29878841c067a4c2e77d77c0c1e579cd21f99def5165c1d94a042435a87f2dd7" exitCode=0 Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.758788 5039 generic.go:334] "Generic (PLEG): container finished" podID="2f6644cf-01f6-44cf-95d6-3626f4fa57da" containerID="031ec639038378c5b3f539daaac07ec3e116c86eab5c397a4daa509a5370c453" exitCode=0 Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.758851 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"2f6644cf-01f6-44cf-95d6-3626f4fa57da","Type":"ContainerDied","Data":"a73101ab09711a570267173488a9c5b6f2eeccafb5e3dc305c7de9c7690d9570"} Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.758870 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f6644cf-01f6-44cf-95d6-3626f4fa57da","Type":"ContainerDied","Data":"29878841c067a4c2e77d77c0c1e579cd21f99def5165c1d94a042435a87f2dd7"} Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.758907 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f6644cf-01f6-44cf-95d6-3626f4fa57da","Type":"ContainerDied","Data":"031ec639038378c5b3f539daaac07ec3e116c86eab5c397a4daa509a5370c453"} Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.761502 5039 generic.go:334] "Generic (PLEG): container finished" podID="2090e8f7-2d03-4d3e-914b-6672655d35be" containerID="5da3b6bf1f3c105594b3fd7fb80dc64462fc055bc8ad723c3ee5ff31777202c5" exitCode=0 Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.761561 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2090e8f7-2d03-4d3e-914b-6672655d35be","Type":"ContainerDied","Data":"5da3b6bf1f3c105594b3fd7fb80dc64462fc055bc8ad723c3ee5ff31777202c5"} Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.953955 5039 scope.go:117] "RemoveContainer" containerID="eb799511447ac70b669ed7cc136585617e1d0dbb85cec2bf34236bdd7a2983ae" Jan 30 13:28:17 crc kubenswrapper[5039]: E0130 13:28:17.991766 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-kh4d2 operator-scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/keystone-e7d3-account-create-update-pslcx" podUID="33b02367-9855-4316-a76b-613d3b6f4946" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.007546 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-757b86cf47-brmgg" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.012759 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.021138 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.060870 5039 scope.go:117] "RemoveContainer" containerID="7610ffbf7ecb40a6a1f4630fe1b480fd8962b9eb294182b49fb847e520d5e359" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.064753 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-68f47564b6-tbx7d" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.068530 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-286b-account-create-update-dm7tt" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.074767 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-fae2-account-create-update-hhbtz" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.082437 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-6646-account-create-update-rjc76" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.093137 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5666-account-create-update-zr44j"] Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.108567 5039 scope.go:117] "RemoveContainer" containerID="46f5e847cf0740cecaf800a2f64157f64b7846af9869032f1313947adca280c5" Jan 30 13:28:18 crc kubenswrapper[5039]: E0130 13:28:18.110666 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46f5e847cf0740cecaf800a2f64157f64b7846af9869032f1313947adca280c5\": container with ID starting with 46f5e847cf0740cecaf800a2f64157f64b7846af9869032f1313947adca280c5 not found: ID does not exist" containerID="46f5e847cf0740cecaf800a2f64157f64b7846af9869032f1313947adca280c5" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.110769 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46f5e847cf0740cecaf800a2f64157f64b7846af9869032f1313947adca280c5"} err="failed to get container status \"46f5e847cf0740cecaf800a2f64157f64b7846af9869032f1313947adca280c5\": rpc error: code = NotFound desc = could not find container \"46f5e847cf0740cecaf800a2f64157f64b7846af9869032f1313947adca280c5\": container with ID starting with 46f5e847cf0740cecaf800a2f64157f64b7846af9869032f1313947adca280c5 not found: ID does not exist" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.110841 5039 scope.go:117] "RemoveContainer" containerID="eb799511447ac70b669ed7cc136585617e1d0dbb85cec2bf34236bdd7a2983ae" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.111237 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-4e5c-account-create-update-q94vs" Jan 30 13:28:18 crc kubenswrapper[5039]: E0130 13:28:18.117694 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb799511447ac70b669ed7cc136585617e1d0dbb85cec2bf34236bdd7a2983ae\": container with ID starting with eb799511447ac70b669ed7cc136585617e1d0dbb85cec2bf34236bdd7a2983ae not found: ID does not exist" containerID="eb799511447ac70b669ed7cc136585617e1d0dbb85cec2bf34236bdd7a2983ae" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.117738 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb799511447ac70b669ed7cc136585617e1d0dbb85cec2bf34236bdd7a2983ae"} err="failed to get container status \"eb799511447ac70b669ed7cc136585617e1d0dbb85cec2bf34236bdd7a2983ae\": rpc error: code = NotFound desc = could not find container \"eb799511447ac70b669ed7cc136585617e1d0dbb85cec2bf34236bdd7a2983ae\": container with ID starting with eb799511447ac70b669ed7cc136585617e1d0dbb85cec2bf34236bdd7a2983ae not found: ID does not exist" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.117762 5039 scope.go:117] "RemoveContainer" containerID="7610ffbf7ecb40a6a1f4630fe1b480fd8962b9eb294182b49fb847e520d5e359" Jan 30 13:28:18 crc kubenswrapper[5039]: E0130 13:28:18.120640 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7610ffbf7ecb40a6a1f4630fe1b480fd8962b9eb294182b49fb847e520d5e359\": container with ID starting with 7610ffbf7ecb40a6a1f4630fe1b480fd8962b9eb294182b49fb847e520d5e359 not found: ID does not exist" containerID="7610ffbf7ecb40a6a1f4630fe1b480fd8962b9eb294182b49fb847e520d5e359" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.120687 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7610ffbf7ecb40a6a1f4630fe1b480fd8962b9eb294182b49fb847e520d5e359"} err="failed to get container status \"7610ffbf7ecb40a6a1f4630fe1b480fd8962b9eb294182b49fb847e520d5e359\": rpc error: code = NotFound desc = could not find container \"7610ffbf7ecb40a6a1f4630fe1b480fd8962b9eb294182b49fb847e520d5e359\": container with ID starting with 7610ffbf7ecb40a6a1f4630fe1b480fd8962b9eb294182b49fb847e520d5e359 not found: ID does not exist" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.120724 5039 scope.go:117] "RemoveContainer" containerID="d3e1de70ee6fccf94c178c436b16b841fb062895d65d5c25af3308a7fa503673" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.124696 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-0596-account-create-update-2qxp2" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.129801 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.132532 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.142205 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4461ebd9-1119-41a1-94c8-cc453e06c2f3" path="/var/lib/kubelet/pods/4461ebd9-1119-41a1-94c8-cc453e06c2f3/volumes" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.145861 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ce80998-c4b6-49af-b37b-5ed6a510b704" path="/var/lib/kubelet/pods/6ce80998-c4b6-49af-b37b-5ed6a510b704/volumes" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.146886 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c2f32a2-792f-4f23-b2a5-fd50a1e1373a" path="/var/lib/kubelet/pods/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a/volumes" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.147573 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aaf62f63-8fea-4671-8a36-21ca1d4fbc37" path="/var/lib/kubelet/pods/aaf62f63-8fea-4671-8a36-21ca1d4fbc37/volumes" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.149916 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc1a05aa-7803-43a1-9525-fd135af4323a" path="/var/lib/kubelet/pods/bc1a05aa-7803-43a1-9525-fd135af4323a/volumes" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.150119 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.151616 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c29afae4-9445-4472-b93b-5a111a886b9a" path="/var/lib/kubelet/pods/c29afae4-9445-4472-b93b-5a111a886b9a/volumes" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.154287 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d14a598e-e058-4b9d-8d57-6f0db418de2c" path="/var/lib/kubelet/pods/d14a598e-e058-4b9d-8d57-6f0db418de2c/volumes" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.155600 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8475d70-6235-43b5-9a15-b4a8bfbab19d" path="/var/lib/kubelet/pods/d8475d70-6235-43b5-9a15-b4a8bfbab19d/volumes" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.172460 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.185428 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/157fc077-2a87-4a57-b9a1-728b9acba2a1-config-data\") pod \"157fc077-2a87-4a57-b9a1-728b9acba2a1\" (UID: \"157fc077-2a87-4a57-b9a1-728b9acba2a1\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.185468 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/498ddd50-96b8-491c-92e9-8c98bc7fa123-logs\") pod \"498ddd50-96b8-491c-92e9-8c98bc7fa123\" (UID: \"498ddd50-96b8-491c-92e9-8c98bc7fa123\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.185522 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/157fc077-2a87-4a57-b9a1-728b9acba2a1-combined-ca-bundle\") pod \"157fc077-2a87-4a57-b9a1-728b9acba2a1\" (UID: \"157fc077-2a87-4a57-b9a1-728b9acba2a1\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.185548 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtxnx\" (UniqueName: \"kubernetes.io/projected/f26bcd91-af44-4f1f-afca-6db6c3fe5362-kube-api-access-vtxnx\") pod \"f26bcd91-af44-4f1f-afca-6db6c3fe5362\" (UID: \"f26bcd91-af44-4f1f-afca-6db6c3fe5362\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.185578 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/157fc077-2a87-4a57-b9a1-728b9acba2a1-internal-tls-certs\") pod \"157fc077-2a87-4a57-b9a1-728b9acba2a1\" (UID: \"157fc077-2a87-4a57-b9a1-728b9acba2a1\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.185616 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrrdv\" (UniqueName: \"kubernetes.io/projected/498ddd50-96b8-491c-92e9-8c98bc7fa123-kube-api-access-qrrdv\") pod \"498ddd50-96b8-491c-92e9-8c98bc7fa123\" (UID: \"498ddd50-96b8-491c-92e9-8c98bc7fa123\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.185637 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/498ddd50-96b8-491c-92e9-8c98bc7fa123-scripts\") pod \"498ddd50-96b8-491c-92e9-8c98bc7fa123\" (UID: \"498ddd50-96b8-491c-92e9-8c98bc7fa123\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.185659 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/860591fe-67b6-4a2e-b8f1-29556c8ef320-operator-scripts\") pod \"860591fe-67b6-4a2e-b8f1-29556c8ef320\" (UID: \"860591fe-67b6-4a2e-b8f1-29556c8ef320\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.185686 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/498ddd50-96b8-491c-92e9-8c98bc7fa123-public-tls-certs\") pod \"498ddd50-96b8-491c-92e9-8c98bc7fa123\" (UID: \"498ddd50-96b8-491c-92e9-8c98bc7fa123\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.185722 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/157fc077-2a87-4a57-b9a1-728b9acba2a1-run-httpd\") pod \"157fc077-2a87-4a57-b9a1-728b9acba2a1\" (UID: 
\"157fc077-2a87-4a57-b9a1-728b9acba2a1\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.185739 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/157fc077-2a87-4a57-b9a1-728b9acba2a1-public-tls-certs\") pod \"157fc077-2a87-4a57-b9a1-728b9acba2a1\" (UID: \"157fc077-2a87-4a57-b9a1-728b9acba2a1\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.185763 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxkr2\" (UniqueName: \"kubernetes.io/projected/a8ed9c2d-3b4a-4202-a2aa-f2e59de5b294-kube-api-access-pxkr2\") pod \"a8ed9c2d-3b4a-4202-a2aa-f2e59de5b294\" (UID: \"a8ed9c2d-3b4a-4202-a2aa-f2e59de5b294\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.185792 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2rvv\" (UniqueName: \"kubernetes.io/projected/157fc077-2a87-4a57-b9a1-728b9acba2a1-kube-api-access-w2rvv\") pod \"157fc077-2a87-4a57-b9a1-728b9acba2a1\" (UID: \"157fc077-2a87-4a57-b9a1-728b9acba2a1\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.185813 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc51df5b-e54d-457e-af37-671db12ee0bd-operator-scripts\") pod \"bc51df5b-e54d-457e-af37-671db12ee0bd\" (UID: \"bc51df5b-e54d-457e-af37-671db12ee0bd\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.185831 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71c58c2f-0d3f-4008-8fdd-fcc50307cc31-operator-scripts\") pod \"71c58c2f-0d3f-4008-8fdd-fcc50307cc31\" (UID: \"71c58c2f-0d3f-4008-8fdd-fcc50307cc31\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.185848 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8ed9c2d-3b4a-4202-a2aa-f2e59de5b294-operator-scripts\") pod \"a8ed9c2d-3b4a-4202-a2aa-f2e59de5b294\" (UID: \"a8ed9c2d-3b4a-4202-a2aa-f2e59de5b294\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.185876 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/498ddd50-96b8-491c-92e9-8c98bc7fa123-config-data\") pod \"498ddd50-96b8-491c-92e9-8c98bc7fa123\" (UID: \"498ddd50-96b8-491c-92e9-8c98bc7fa123\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.185910 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/498ddd50-96b8-491c-92e9-8c98bc7fa123-internal-tls-certs\") pod \"498ddd50-96b8-491c-92e9-8c98bc7fa123\" (UID: \"498ddd50-96b8-491c-92e9-8c98bc7fa123\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.185928 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/157fc077-2a87-4a57-b9a1-728b9acba2a1-log-httpd\") pod \"157fc077-2a87-4a57-b9a1-728b9acba2a1\" (UID: \"157fc077-2a87-4a57-b9a1-728b9acba2a1\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.185982 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/157fc077-2a87-4a57-b9a1-728b9acba2a1-etc-swift\") pod \"157fc077-2a87-4a57-b9a1-728b9acba2a1\" 
(UID: \"157fc077-2a87-4a57-b9a1-728b9acba2a1\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.186000 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txt7x\" (UniqueName: \"kubernetes.io/projected/860591fe-67b6-4a2e-b8f1-29556c8ef320-kube-api-access-txt7x\") pod \"860591fe-67b6-4a2e-b8f1-29556c8ef320\" (UID: \"860591fe-67b6-4a2e-b8f1-29556c8ef320\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.187666 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/498ddd50-96b8-491c-92e9-8c98bc7fa123-logs" (OuterVolumeSpecName: "logs") pod "498ddd50-96b8-491c-92e9-8c98bc7fa123" (UID: "498ddd50-96b8-491c-92e9-8c98bc7fa123"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.187899 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/157fc077-2a87-4a57-b9a1-728b9acba2a1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "157fc077-2a87-4a57-b9a1-728b9acba2a1" (UID: "157fc077-2a87-4a57-b9a1-728b9acba2a1"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.188106 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/498ddd50-96b8-491c-92e9-8c98bc7fa123-combined-ca-bundle\") pod \"498ddd50-96b8-491c-92e9-8c98bc7fa123\" (UID: \"498ddd50-96b8-491c-92e9-8c98bc7fa123\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.188144 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f26bcd91-af44-4f1f-afca-6db6c3fe5362-operator-scripts\") pod \"f26bcd91-af44-4f1f-afca-6db6c3fe5362\" (UID: \"f26bcd91-af44-4f1f-afca-6db6c3fe5362\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.188164 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjkb2\" (UniqueName: \"kubernetes.io/projected/71c58c2f-0d3f-4008-8fdd-fcc50307cc31-kube-api-access-rjkb2\") pod \"71c58c2f-0d3f-4008-8fdd-fcc50307cc31\" (UID: \"71c58c2f-0d3f-4008-8fdd-fcc50307cc31\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.188188 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bz9q4\" (UniqueName: \"kubernetes.io/projected/bc51df5b-e54d-457e-af37-671db12ee0bd-kube-api-access-bz9q4\") pod \"bc51df5b-e54d-457e-af37-671db12ee0bd\" (UID: \"bc51df5b-e54d-457e-af37-671db12ee0bd\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.188450 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/860591fe-67b6-4a2e-b8f1-29556c8ef320-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "860591fe-67b6-4a2e-b8f1-29556c8ef320" (UID: "860591fe-67b6-4a2e-b8f1-29556c8ef320"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.198636 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/498ddd50-96b8-491c-92e9-8c98bc7fa123-scripts" (OuterVolumeSpecName: "scripts") pod "498ddd50-96b8-491c-92e9-8c98bc7fa123" (UID: "498ddd50-96b8-491c-92e9-8c98bc7fa123"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.202248 5039 scope.go:117] "RemoveContainer" containerID="099271e408d36405bffd409c77b39945cf16bd33eb771b32e6c679068653606c" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.203175 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71c58c2f-0d3f-4008-8fdd-fcc50307cc31-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "71c58c2f-0d3f-4008-8fdd-fcc50307cc31" (UID: "71c58c2f-0d3f-4008-8fdd-fcc50307cc31"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.203526 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc51df5b-e54d-457e-af37-671db12ee0bd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bc51df5b-e54d-457e-af37-671db12ee0bd" (UID: "bc51df5b-e54d-457e-af37-671db12ee0bd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.203861 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8ed9c2d-3b4a-4202-a2aa-f2e59de5b294-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a8ed9c2d-3b4a-4202-a2aa-f2e59de5b294" (UID: "a8ed9c2d-3b4a-4202-a2aa-f2e59de5b294"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.204377 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f26bcd91-af44-4f1f-afca-6db6c3fe5362-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f26bcd91-af44-4f1f-afca-6db6c3fe5362" (UID: "f26bcd91-af44-4f1f-afca-6db6c3fe5362"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.206831 5039 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/498ddd50-96b8-491c-92e9-8c98bc7fa123-logs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.206859 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/498ddd50-96b8-491c-92e9-8c98bc7fa123-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.206869 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/860591fe-67b6-4a2e-b8f1-29556c8ef320-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.206880 5039 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/157fc077-2a87-4a57-b9a1-728b9acba2a1-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.206889 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc51df5b-e54d-457e-af37-671db12ee0bd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.206898 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71c58c2f-0d3f-4008-8fdd-fcc50307cc31-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.206906 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8ed9c2d-3b4a-4202-a2aa-f2e59de5b294-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.206915 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f26bcd91-af44-4f1f-afca-6db6c3fe5362-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.210400 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/157fc077-2a87-4a57-b9a1-728b9acba2a1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "157fc077-2a87-4a57-b9a1-728b9acba2a1" (UID: "157fc077-2a87-4a57-b9a1-728b9acba2a1"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.224561 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc51df5b-e54d-457e-af37-671db12ee0bd-kube-api-access-bz9q4" (OuterVolumeSpecName: "kube-api-access-bz9q4") pod "bc51df5b-e54d-457e-af37-671db12ee0bd" (UID: "bc51df5b-e54d-457e-af37-671db12ee0bd"). InnerVolumeSpecName "kube-api-access-bz9q4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.224736 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f26bcd91-af44-4f1f-afca-6db6c3fe5362-kube-api-access-vtxnx" (OuterVolumeSpecName: "kube-api-access-vtxnx") pod "f26bcd91-af44-4f1f-afca-6db6c3fe5362" (UID: "f26bcd91-af44-4f1f-afca-6db6c3fe5362"). InnerVolumeSpecName "kube-api-access-vtxnx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.224789 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8ed9c2d-3b4a-4202-a2aa-f2e59de5b294-kube-api-access-pxkr2" (OuterVolumeSpecName: "kube-api-access-pxkr2") pod "a8ed9c2d-3b4a-4202-a2aa-f2e59de5b294" (UID: "a8ed9c2d-3b4a-4202-a2aa-f2e59de5b294"). InnerVolumeSpecName "kube-api-access-pxkr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.234503 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/157fc077-2a87-4a57-b9a1-728b9acba2a1-kube-api-access-w2rvv" (OuterVolumeSpecName: "kube-api-access-w2rvv") pod "157fc077-2a87-4a57-b9a1-728b9acba2a1" (UID: "157fc077-2a87-4a57-b9a1-728b9acba2a1"). InnerVolumeSpecName "kube-api-access-w2rvv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.235752 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/157fc077-2a87-4a57-b9a1-728b9acba2a1-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "157fc077-2a87-4a57-b9a1-728b9acba2a1" (UID: "157fc077-2a87-4a57-b9a1-728b9acba2a1"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.235874 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/498ddd50-96b8-491c-92e9-8c98bc7fa123-kube-api-access-qrrdv" (OuterVolumeSpecName: "kube-api-access-qrrdv") pod "498ddd50-96b8-491c-92e9-8c98bc7fa123" (UID: "498ddd50-96b8-491c-92e9-8c98bc7fa123"). InnerVolumeSpecName "kube-api-access-qrrdv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.248004 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c58c2f-0d3f-4008-8fdd-fcc50307cc31-kube-api-access-rjkb2" (OuterVolumeSpecName: "kube-api-access-rjkb2") pod "71c58c2f-0d3f-4008-8fdd-fcc50307cc31" (UID: "71c58c2f-0d3f-4008-8fdd-fcc50307cc31"). InnerVolumeSpecName "kube-api-access-rjkb2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.256277 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/860591fe-67b6-4a2e-b8f1-29556c8ef320-kube-api-access-txt7x" (OuterVolumeSpecName: "kube-api-access-txt7x") pod "860591fe-67b6-4a2e-b8f1-29556c8ef320" (UID: "860591fe-67b6-4a2e-b8f1-29556c8ef320"). InnerVolumeSpecName "kube-api-access-txt7x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.308265 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03ea6fff-3bc2-4830-b1f5-53d20cd2a801-combined-ca-bundle\") pod \"03ea6fff-3bc2-4830-b1f5-53d20cd2a801\" (UID: \"03ea6fff-3bc2-4830-b1f5-53d20cd2a801\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.308661 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwr65\" (UniqueName: \"kubernetes.io/projected/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-kube-api-access-hwr65\") pod \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\" (UID: \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.308701 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgmfg\" (UniqueName: \"kubernetes.io/projected/75292c04-e484-4def-a16f-2d703409e49e-kube-api-access-sgmfg\") pod \"75292c04-e484-4def-a16f-2d703409e49e\" (UID: \"75292c04-e484-4def-a16f-2d703409e49e\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.308731 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/03ea6fff-3bc2-4830-b1f5-53d20cd2a801-nova-metadata-tls-certs\") pod \"03ea6fff-3bc2-4830-b1f5-53d20cd2a801\" (UID: \"03ea6fff-3bc2-4830-b1f5-53d20cd2a801\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.308758 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75292c04-e484-4def-a16f-2d703409e49e-scripts\") pod \"75292c04-e484-4def-a16f-2d703409e49e\" (UID: \"75292c04-e484-4def-a16f-2d703409e49e\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.308819 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-scripts\") pod \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\" (UID: \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.308852 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75292c04-e484-4def-a16f-2d703409e49e-config-data\") pod \"75292c04-e484-4def-a16f-2d703409e49e\" (UID: \"75292c04-e484-4def-a16f-2d703409e49e\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.308893 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-httpd-run\") pod \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\" (UID: \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.308914 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-config-data\") pod \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\" (UID: \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.308932 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\" (UID: \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\") " Jan 30 
13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.308960 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75292c04-e484-4def-a16f-2d703409e49e-logs\") pod \"75292c04-e484-4def-a16f-2d703409e49e\" (UID: \"75292c04-e484-4def-a16f-2d703409e49e\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.308992 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-combined-ca-bundle\") pod \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\" (UID: \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.309055 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/75292c04-e484-4def-a16f-2d703409e49e-httpd-run\") pod \"75292c04-e484-4def-a16f-2d703409e49e\" (UID: \"75292c04-e484-4def-a16f-2d703409e49e\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.309091 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03ea6fff-3bc2-4830-b1f5-53d20cd2a801-config-data\") pod \"03ea6fff-3bc2-4830-b1f5-53d20cd2a801\" (UID: \"03ea6fff-3bc2-4830-b1f5-53d20cd2a801\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.309117 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqcd9\" (UniqueName: \"kubernetes.io/projected/03ea6fff-3bc2-4830-b1f5-53d20cd2a801-kube-api-access-tqcd9\") pod \"03ea6fff-3bc2-4830-b1f5-53d20cd2a801\" (UID: \"03ea6fff-3bc2-4830-b1f5-53d20cd2a801\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.309142 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/f4f0006e-6034-4c12-a12e-f2d7767a77cb-kube-state-metrics-tls-config\") pod \"f4f0006e-6034-4c12-a12e-f2d7767a77cb\" (UID: \"f4f0006e-6034-4c12-a12e-f2d7767a77cb\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.309164 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/75292c04-e484-4def-a16f-2d703409e49e-public-tls-certs\") pod \"75292c04-e484-4def-a16f-2d703409e49e\" (UID: \"75292c04-e484-4def-a16f-2d703409e49e\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.309194 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-logs\") pod \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\" (UID: \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.309212 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-internal-tls-certs\") pod \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\" (UID: \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.309263 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4f0006e-6034-4c12-a12e-f2d7767a77cb-kube-state-metrics-tls-certs\") pod \"f4f0006e-6034-4c12-a12e-f2d7767a77cb\" (UID: \"f4f0006e-6034-4c12-a12e-f2d7767a77cb\") " Jan 
30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.309313 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75292c04-e484-4def-a16f-2d703409e49e-combined-ca-bundle\") pod \"75292c04-e484-4def-a16f-2d703409e49e\" (UID: \"75292c04-e484-4def-a16f-2d703409e49e\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.309339 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03ea6fff-3bc2-4830-b1f5-53d20cd2a801-logs\") pod \"03ea6fff-3bc2-4830-b1f5-53d20cd2a801\" (UID: \"03ea6fff-3bc2-4830-b1f5-53d20cd2a801\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.309366 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"75292c04-e484-4def-a16f-2d703409e49e\" (UID: \"75292c04-e484-4def-a16f-2d703409e49e\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.309422 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9fhv\" (UniqueName: \"kubernetes.io/projected/f4f0006e-6034-4c12-a12e-f2d7767a77cb-kube-api-access-m9fhv\") pod \"f4f0006e-6034-4c12-a12e-f2d7767a77cb\" (UID: \"f4f0006e-6034-4c12-a12e-f2d7767a77cb\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.309455 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4f0006e-6034-4c12-a12e-f2d7767a77cb-combined-ca-bundle\") pod \"f4f0006e-6034-4c12-a12e-f2d7767a77cb\" (UID: \"f4f0006e-6034-4c12-a12e-f2d7767a77cb\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.309980 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33b02367-9855-4316-a76b-613d3b6f4946-operator-scripts\") pod \"keystone-e7d3-account-create-update-pslcx\" (UID: \"33b02367-9855-4316-a76b-613d3b6f4946\") " pod="openstack/keystone-e7d3-account-create-update-pslcx" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.310049 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kh4d2\" (UniqueName: \"kubernetes.io/projected/33b02367-9855-4316-a76b-613d3b6f4946-kube-api-access-kh4d2\") pod \"keystone-e7d3-account-create-update-pslcx\" (UID: \"33b02367-9855-4316-a76b-613d3b6f4946\") " pod="openstack/keystone-e7d3-account-create-update-pslcx" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.310124 5039 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/157fc077-2a87-4a57-b9a1-728b9acba2a1-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.310138 5039 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/157fc077-2a87-4a57-b9a1-728b9acba2a1-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.310150 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txt7x\" (UniqueName: \"kubernetes.io/projected/860591fe-67b6-4a2e-b8f1-29556c8ef320-kube-api-access-txt7x\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.310162 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjkb2\" (UniqueName: 
\"kubernetes.io/projected/71c58c2f-0d3f-4008-8fdd-fcc50307cc31-kube-api-access-rjkb2\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.310174 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bz9q4\" (UniqueName: \"kubernetes.io/projected/bc51df5b-e54d-457e-af37-671db12ee0bd-kube-api-access-bz9q4\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.310186 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtxnx\" (UniqueName: \"kubernetes.io/projected/f26bcd91-af44-4f1f-afca-6db6c3fe5362-kube-api-access-vtxnx\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.310196 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrrdv\" (UniqueName: \"kubernetes.io/projected/498ddd50-96b8-491c-92e9-8c98bc7fa123-kube-api-access-qrrdv\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.310206 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxkr2\" (UniqueName: \"kubernetes.io/projected/a8ed9c2d-3b4a-4202-a2aa-f2e59de5b294-kube-api-access-pxkr2\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.310216 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2rvv\" (UniqueName: \"kubernetes.io/projected/157fc077-2a87-4a57-b9a1-728b9acba2a1-kube-api-access-w2rvv\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.312235 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03ea6fff-3bc2-4830-b1f5-53d20cd2a801-logs" (OuterVolumeSpecName: "logs") pod "03ea6fff-3bc2-4830-b1f5-53d20cd2a801" (UID: "03ea6fff-3bc2-4830-b1f5-53d20cd2a801"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.313125 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-logs" (OuterVolumeSpecName: "logs") pod "89cd9fbd-ac74-45c9-bdd8-fe3268a9147e" (UID: "89cd9fbd-ac74-45c9-bdd8-fe3268a9147e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: E0130 13:28:18.319475 5039 projected.go:194] Error preparing data for projected volume kube-api-access-kh4d2 for pod openstack/keystone-e7d3-account-create-update-pslcx: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 30 13:28:18 crc kubenswrapper[5039]: E0130 13:28:18.319550 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/33b02367-9855-4316-a76b-613d3b6f4946-kube-api-access-kh4d2 podName:33b02367-9855-4316-a76b-613d3b6f4946 nodeName:}" failed. No retries permitted until 2026-01-30 13:28:20.319521685 +0000 UTC m=+1464.980202912 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-kh4d2" (UniqueName: "kubernetes.io/projected/33b02367-9855-4316-a76b-613d3b6f4946-kube-api-access-kh4d2") pod "keystone-e7d3-account-create-update-pslcx" (UID: "33b02367-9855-4316-a76b-613d3b6f4946") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.323401 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75292c04-e484-4def-a16f-2d703409e49e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "75292c04-e484-4def-a16f-2d703409e49e" (UID: "75292c04-e484-4def-a16f-2d703409e49e"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.325891 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "89cd9fbd-ac74-45c9-bdd8-fe3268a9147e" (UID: "89cd9fbd-ac74-45c9-bdd8-fe3268a9147e"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: E0130 13:28:18.327382 5039 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 30 13:28:18 crc kubenswrapper[5039]: E0130 13:28:18.327527 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/33b02367-9855-4316-a76b-613d3b6f4946-operator-scripts podName:33b02367-9855-4316-a76b-613d3b6f4946 nodeName:}" failed. No retries permitted until 2026-01-30 13:28:20.327504899 +0000 UTC m=+1464.988186126 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/33b02367-9855-4316-a76b-613d3b6f4946-operator-scripts") pod "keystone-e7d3-account-create-update-pslcx" (UID: "33b02367-9855-4316-a76b-613d3b6f4946") : configmap "openstack-scripts" not found Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.328308 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75292c04-e484-4def-a16f-2d703409e49e-logs" (OuterVolumeSpecName: "logs") pod "75292c04-e484-4def-a16f-2d703409e49e" (UID: "75292c04-e484-4def-a16f-2d703409e49e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.332827 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-scripts" (OuterVolumeSpecName: "scripts") pod "89cd9fbd-ac74-45c9-bdd8-fe3268a9147e" (UID: "89cd9fbd-ac74-45c9-bdd8-fe3268a9147e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.334269 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-kube-api-access-hwr65" (OuterVolumeSpecName: "kube-api-access-hwr65") pod "89cd9fbd-ac74-45c9-bdd8-fe3268a9147e" (UID: "89cd9fbd-ac74-45c9-bdd8-fe3268a9147e"). InnerVolumeSpecName "kube-api-access-hwr65". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.336596 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4f0006e-6034-4c12-a12e-f2d7767a77cb-kube-api-access-m9fhv" (OuterVolumeSpecName: "kube-api-access-m9fhv") pod "f4f0006e-6034-4c12-a12e-f2d7767a77cb" (UID: "f4f0006e-6034-4c12-a12e-f2d7767a77cb"). InnerVolumeSpecName "kube-api-access-m9fhv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.349465 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "75292c04-e484-4def-a16f-2d703409e49e" (UID: "75292c04-e484-4def-a16f-2d703409e49e"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.361209 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75292c04-e484-4def-a16f-2d703409e49e-scripts" (OuterVolumeSpecName: "scripts") pod "75292c04-e484-4def-a16f-2d703409e49e" (UID: "75292c04-e484-4def-a16f-2d703409e49e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.361355 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "89cd9fbd-ac74-45c9-bdd8-fe3268a9147e" (UID: "89cd9fbd-ac74-45c9-bdd8-fe3268a9147e"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.361380 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75292c04-e484-4def-a16f-2d703409e49e-kube-api-access-sgmfg" (OuterVolumeSpecName: "kube-api-access-sgmfg") pod "75292c04-e484-4def-a16f-2d703409e49e" (UID: "75292c04-e484-4def-a16f-2d703409e49e"). InnerVolumeSpecName "kube-api-access-sgmfg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.361476 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03ea6fff-3bc2-4830-b1f5-53d20cd2a801-kube-api-access-tqcd9" (OuterVolumeSpecName: "kube-api-access-tqcd9") pod "03ea6fff-3bc2-4830-b1f5-53d20cd2a801" (UID: "03ea6fff-3bc2-4830-b1f5-53d20cd2a801"). InnerVolumeSpecName "kube-api-access-tqcd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.401199 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03ea6fff-3bc2-4830-b1f5-53d20cd2a801-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "03ea6fff-3bc2-4830-b1f5-53d20cd2a801" (UID: "03ea6fff-3bc2-4830-b1f5-53d20cd2a801"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.411920 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwr65\" (UniqueName: \"kubernetes.io/projected/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-kube-api-access-hwr65\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.411952 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgmfg\" (UniqueName: \"kubernetes.io/projected/75292c04-e484-4def-a16f-2d703409e49e-kube-api-access-sgmfg\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.411963 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75292c04-e484-4def-a16f-2d703409e49e-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.411973 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.411984 5039 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.412090 5039 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.412101 5039 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75292c04-e484-4def-a16f-2d703409e49e-logs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.412110 5039 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/75292c04-e484-4def-a16f-2d703409e49e-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.412119 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqcd9\" (UniqueName: \"kubernetes.io/projected/03ea6fff-3bc2-4830-b1f5-53d20cd2a801-kube-api-access-tqcd9\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.412127 5039 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-logs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.412135 5039 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03ea6fff-3bc2-4830-b1f5-53d20cd2a801-logs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.412155 5039 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.412163 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9fhv\" (UniqueName: \"kubernetes.io/projected/f4f0006e-6034-4c12-a12e-f2d7767a77cb-kube-api-access-m9fhv\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.412172 5039 reconciler_common.go:293] "Volume 
detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03ea6fff-3bc2-4830-b1f5-53d20cd2a801-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.430473 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/157fc077-2a87-4a57-b9a1-728b9acba2a1-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "157fc077-2a87-4a57-b9a1-728b9acba2a1" (UID: "157fc077-2a87-4a57-b9a1-728b9acba2a1"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.432941 5039 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.448419 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75292c04-e484-4def-a16f-2d703409e49e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "75292c04-e484-4def-a16f-2d703409e49e" (UID: "75292c04-e484-4def-a16f-2d703409e49e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.459635 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/498ddd50-96b8-491c-92e9-8c98bc7fa123-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "498ddd50-96b8-491c-92e9-8c98bc7fa123" (UID: "498ddd50-96b8-491c-92e9-8c98bc7fa123"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.481798 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/157fc077-2a87-4a57-b9a1-728b9acba2a1-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "157fc077-2a87-4a57-b9a1-728b9acba2a1" (UID: "157fc077-2a87-4a57-b9a1-728b9acba2a1"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.514434 5039 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/157fc077-2a87-4a57-b9a1-728b9acba2a1-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.514471 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75292c04-e484-4def-a16f-2d703409e49e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.514480 5039 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.514490 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/498ddd50-96b8-491c-92e9-8c98bc7fa123-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.514501 5039 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/157fc077-2a87-4a57-b9a1-728b9acba2a1-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.535434 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "89cd9fbd-ac74-45c9-bdd8-fe3268a9147e" (UID: "89cd9fbd-ac74-45c9-bdd8-fe3268a9147e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.587780 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/157fc077-2a87-4a57-b9a1-728b9acba2a1-config-data" (OuterVolumeSpecName: "config-data") pod "157fc077-2a87-4a57-b9a1-728b9acba2a1" (UID: "157fc077-2a87-4a57-b9a1-728b9acba2a1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.592127 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/157fc077-2a87-4a57-b9a1-728b9acba2a1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "157fc077-2a87-4a57-b9a1-728b9acba2a1" (UID: "157fc077-2a87-4a57-b9a1-728b9acba2a1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.604324 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/498ddd50-96b8-491c-92e9-8c98bc7fa123-config-data" (OuterVolumeSpecName: "config-data") pod "498ddd50-96b8-491c-92e9-8c98bc7fa123" (UID: "498ddd50-96b8-491c-92e9-8c98bc7fa123"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.613661 5039 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.619419 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/157fc077-2a87-4a57-b9a1-728b9acba2a1-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.619922 5039 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.620057 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/157fc077-2a87-4a57-b9a1-728b9acba2a1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.620158 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.620235 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/498ddd50-96b8-491c-92e9-8c98bc7fa123-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.633776 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03ea6fff-3bc2-4830-b1f5-53d20cd2a801-config-data" (OuterVolumeSpecName: "config-data") pod "03ea6fff-3bc2-4830-b1f5-53d20cd2a801" (UID: "03ea6fff-3bc2-4830-b1f5-53d20cd2a801"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.656811 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4f0006e-6034-4c12-a12e-f2d7767a77cb-kube-state-metrics-tls-config" (OuterVolumeSpecName: "kube-state-metrics-tls-config") pod "f4f0006e-6034-4c12-a12e-f2d7767a77cb" (UID: "f4f0006e-6034-4c12-a12e-f2d7767a77cb"). InnerVolumeSpecName "kube-state-metrics-tls-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.680249 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/498ddd50-96b8-491c-92e9-8c98bc7fa123-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "498ddd50-96b8-491c-92e9-8c98bc7fa123" (UID: "498ddd50-96b8-491c-92e9-8c98bc7fa123"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.680638 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75292c04-e484-4def-a16f-2d703409e49e-config-data" (OuterVolumeSpecName: "config-data") pod "75292c04-e484-4def-a16f-2d703409e49e" (UID: "75292c04-e484-4def-a16f-2d703409e49e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.696545 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75292c04-e484-4def-a16f-2d703409e49e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "75292c04-e484-4def-a16f-2d703409e49e" (UID: "75292c04-e484-4def-a16f-2d703409e49e"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.709677 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4f0006e-6034-4c12-a12e-f2d7767a77cb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f4f0006e-6034-4c12-a12e-f2d7767a77cb" (UID: "f4f0006e-6034-4c12-a12e-f2d7767a77cb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.727852 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4f0006e-6034-4c12-a12e-f2d7767a77cb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.728147 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75292c04-e484-4def-a16f-2d703409e49e-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.728159 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03ea6fff-3bc2-4830-b1f5-53d20cd2a801-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.728172 5039 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/f4f0006e-6034-4c12-a12e-f2d7767a77cb-kube-state-metrics-tls-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.728181 5039 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/75292c04-e484-4def-a16f-2d703409e49e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.728191 5039 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/498ddd50-96b8-491c-92e9-8c98bc7fa123-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.734803 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-config-data" (OuterVolumeSpecName: "config-data") pod "89cd9fbd-ac74-45c9-bdd8-fe3268a9147e" (UID: "89cd9fbd-ac74-45c9-bdd8-fe3268a9147e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.736208 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.739608 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "89cd9fbd-ac74-45c9-bdd8-fe3268a9147e" (UID: "89cd9fbd-ac74-45c9-bdd8-fe3268a9147e"). 
InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.756131 5039 scope.go:117] "RemoveContainer" containerID="4e3e47142906bded5aa0ccf1b7bb8bdc30cca633a277d81355ccb82c40518808" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.768982 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-5666-account-create-update-zr44j"] Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.774190 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03ea6fff-3bc2-4830-b1f5-53d20cd2a801-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "03ea6fff-3bc2-4830-b1f5-53d20cd2a801" (UID: "03ea6fff-3bc2-4830-b1f5-53d20cd2a801"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.776345 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4f0006e-6034-4c12-a12e-f2d7767a77cb-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "f4f0006e-6034-4c12-a12e-f2d7767a77cb" (UID: "f4f0006e-6034-4c12-a12e-f2d7767a77cb"). InnerVolumeSpecName "kube-state-metrics-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.778440 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-q9wmm" event={"ID":"fc88f91b-e82d-4937-ad42-d94c3d464b55","Type":"ContainerDied","Data":"c130ab6298f33377ec6fb5dd8075724653dd2f898c3e8e2cc6a650308e453105"} Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.778473 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c130ab6298f33377ec6fb5dd8075724653dd2f898c3e8e2cc6a650308e453105" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.782283 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-84b866898f-5xs7l" event={"ID":"fcd8c24d-b3db-41a0-ac70-d13cd3f2d663","Type":"ContainerStarted","Data":"efdca119d3c9dd7c2f3bbd147286c35f1dbba09a77a04383a7563932b124c58d"} Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.782405 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-84b866898f-5xs7l" podUID="fcd8c24d-b3db-41a0-ac70-d13cd3f2d663" containerName="barbican-worker-log" containerID="cri-o://1d442f2088c550f47ce279b79f9eda2a191a7cfb5fd4e8fd913099eb4e065b03" gracePeriod=30 Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.782861 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-84b866898f-5xs7l" podUID="fcd8c24d-b3db-41a0-ac70-d13cd3f2d663" containerName="barbican-worker" containerID="cri-o://efdca119d3c9dd7c2f3bbd147286c35f1dbba09a77a04383a7563932b124c58d" gracePeriod=30 Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.791577 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7dc966f764-886wt" event={"ID":"3db29a95-0ed6-4366-8036-388eea4d00b6","Type":"ContainerStarted","Data":"dc2720df3fa94f39b6208a510958d32a68d1fe1a2c7de705b28cce13bbfac66c"} Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.791633 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7dc966f764-886wt" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.791665 5039 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7dc966f764-886wt" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.792185 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7dc966f764-886wt" podUID="3db29a95-0ed6-4366-8036-388eea4d00b6" containerName="barbican-api-log" containerID="cri-o://12f42853e550e82839e38760bfb6ad35f880aa90125efe3fcabf6d6b83cdc399" gracePeriod=30 Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.792290 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7dc966f764-886wt" podUID="3db29a95-0ed6-4366-8036-388eea4d00b6" containerName="barbican-api" containerID="cri-o://dc2720df3fa94f39b6208a510958d32a68d1fe1a2c7de705b28cce13bbfac66c" gracePeriod=30 Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.792423 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.794446 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"75292c04-e484-4def-a16f-2d703409e49e","Type":"ContainerDied","Data":"1c6fd13f3a399a0d5f6d6688d6db64c2c6a162615a4a45932ae1660feceb9e0d"} Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.794536 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.802971 5039 scope.go:117] "RemoveContainer" containerID="b98aab825421aef11d5e89ff275916e782fc1065fcfef1cf798164f33a0d8aeb" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.812779 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-84b866898f-5xs7l" podStartSLOduration=7.812078659 podStartE2EDuration="7.812078659s" podCreationTimestamp="2026-01-30 13:28:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:28:18.809992843 +0000 UTC m=+1463.470674070" watchObservedRunningTime="2026-01-30 13:28:18.812078659 +0000 UTC m=+1463.472759886" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.828746 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2090e8f7-2d03-4d3e-914b-6672655d35be-internal-tls-certs\") pod \"2090e8f7-2d03-4d3e-914b-6672655d35be\" (UID: \"2090e8f7-2d03-4d3e-914b-6672655d35be\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.828825 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2090e8f7-2d03-4d3e-914b-6672655d35be-config-data\") pod \"2090e8f7-2d03-4d3e-914b-6672655d35be\" (UID: \"2090e8f7-2d03-4d3e-914b-6672655d35be\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.828919 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2090e8f7-2d03-4d3e-914b-6672655d35be-combined-ca-bundle\") pod \"2090e8f7-2d03-4d3e-914b-6672655d35be\" (UID: \"2090e8f7-2d03-4d3e-914b-6672655d35be\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.828951 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/2090e8f7-2d03-4d3e-914b-6672655d35be-logs\") pod \"2090e8f7-2d03-4d3e-914b-6672655d35be\" (UID: \"2090e8f7-2d03-4d3e-914b-6672655d35be\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.829032 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m45cp\" (UniqueName: \"kubernetes.io/projected/2090e8f7-2d03-4d3e-914b-6672655d35be-kube-api-access-m45cp\") pod \"2090e8f7-2d03-4d3e-914b-6672655d35be\" (UID: \"2090e8f7-2d03-4d3e-914b-6672655d35be\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.829060 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2090e8f7-2d03-4d3e-914b-6672655d35be-public-tls-certs\") pod \"2090e8f7-2d03-4d3e-914b-6672655d35be\" (UID: \"2090e8f7-2d03-4d3e-914b-6672655d35be\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.829387 5039 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/03ea6fff-3bc2-4830-b1f5-53d20cd2a801-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.829398 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.829407 5039 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.829417 5039 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4f0006e-6034-4c12-a12e-f2d7767a77cb-kube-state-metrics-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.829518 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2090e8f7-2d03-4d3e-914b-6672655d35be-logs" (OuterVolumeSpecName: "logs") pod "2090e8f7-2d03-4d3e-914b-6672655d35be" (UID: "2090e8f7-2d03-4d3e-914b-6672655d35be"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.841278 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.841304 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2090e8f7-2d03-4d3e-914b-6672655d35be","Type":"ContainerDied","Data":"21caa728b45d4cd46b72a58777a9f2bd19807862ed3d4ac1d9769af4fe89d6d4"} Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.860739 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2090e8f7-2d03-4d3e-914b-6672655d35be-kube-api-access-m45cp" (OuterVolumeSpecName: "kube-api-access-m45cp") pod "2090e8f7-2d03-4d3e-914b-6672655d35be" (UID: "2090e8f7-2d03-4d3e-914b-6672655d35be"). InnerVolumeSpecName "kube-api-access-m45cp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.861069 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.862328 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"4f7023ce-3b22-4301-8535-b51dae5ffc85","Type":"ContainerDied","Data":"08f3f892fdfbe83404807e07d0016928a585bfd6e498bd026ee61f33f77be0f0"} Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.862370 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08f3f892fdfbe83404807e07d0016928a585bfd6e498bd026ee61f33f77be0f0" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.866158 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/498ddd50-96b8-491c-92e9-8c98bc7fa123-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "498ddd50-96b8-491c-92e9-8c98bc7fa123" (UID: "498ddd50-96b8-491c-92e9-8c98bc7fa123"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.868509 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"03ea6fff-3bc2-4830-b1f5-53d20cd2a801","Type":"ContainerDied","Data":"5b5589cafdaafe198e4ef2e0231010c77ff3f334696c9a31b06df695ad105768"} Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.868715 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.872843 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7dc966f764-886wt" podStartSLOduration=7.872813655 podStartE2EDuration="7.872813655s" podCreationTimestamp="2026-01-30 13:28:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:28:18.862454788 +0000 UTC m=+1463.523136035" watchObservedRunningTime="2026-01-30 13:28:18.872813655 +0000 UTC m=+1463.533494882" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.875268 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d68bccdc4-krd48" event={"ID":"2125aae4-cb1a-4329-ba0a-68cc3661427b","Type":"ContainerDied","Data":"bc417053edbba7fb63512577ba542f0d20138993da626f44b46b6b4f36d44943"} Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.875556 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc417053edbba7fb63512577ba542f0d20138993da626f44b46b6b4f36d44943" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.877265 5039 scope.go:117] "RemoveContainer" containerID="46c7c1dd8a4c8df99e1dd7edf28c41b4137267eeafa3248a2c0d8c73a663531a" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.895432 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2090e8f7-2d03-4d3e-914b-6672655d35be-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2090e8f7-2d03-4d3e-914b-6672655d35be" (UID: "2090e8f7-2d03-4d3e-914b-6672655d35be"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.895719 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2090e8f7-2d03-4d3e-914b-6672655d35be-config-data" (OuterVolumeSpecName: "config-data") pod "2090e8f7-2d03-4d3e-914b-6672655d35be" (UID: "2090e8f7-2d03-4d3e-914b-6672655d35be"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.899329 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"c304bfee-961f-403c-a998-de879eedf9c9","Type":"ContainerDied","Data":"cfd62b194c55a1c0929aedfd3e56c356bb03ea700fba1fdfbe1bc6d8d0871746"} Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.899585 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.917806 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f6a7de18-5bf6-4275-b6db-f19701d07001","Type":"ContainerDied","Data":"8b3af9bb7a9ebad1ffd7ea8f4cc6051b5a4ce1bd449b1f818c855ceb287dbe17"} Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.917840 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b3af9bb7a9ebad1ffd7ea8f4cc6051b5a4ce1bd449b1f818c855ceb287dbe17" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.919537 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-d68bccdc4-krd48" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.924677 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-b755c4586-qglmf" event={"ID":"749976f6-833a-4563-992a-f639cb1552e0","Type":"ContainerStarted","Data":"9e9b7dc4c4eeb7c79acaa82914f2e667402c8191ab36c2ac35a7df3a32d5939f"} Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.924806 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-b755c4586-qglmf" podUID="749976f6-833a-4563-992a-f639cb1552e0" containerName="barbican-keystone-listener-log" containerID="cri-o://3020cc9e4acad53ed9c6f1145cd86d42ffb6ee4fe0b6bc05ad658ca921124eb4" gracePeriod=30 Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.924992 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-b755c4586-qglmf" podUID="749976f6-833a-4563-992a-f639cb1552e0" containerName="barbican-keystone-listener" containerID="cri-o://9e9b7dc4c4eeb7c79acaa82914f2e667402c8191ab36c2ac35a7df3a32d5939f" gracePeriod=30 Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.931089 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/c304bfee-961f-403c-a998-de879eedf9c9-memcached-tls-certs\") pod \"c304bfee-961f-403c-a998-de879eedf9c9\" (UID: \"c304bfee-961f-403c-a998-de879eedf9c9\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.931135 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c304bfee-961f-403c-a998-de879eedf9c9-config-data\") pod \"c304bfee-961f-403c-a998-de879eedf9c9\" (UID: \"c304bfee-961f-403c-a998-de879eedf9c9\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.931205 5039 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f6644cf-01f6-44cf-95d6-3626f4fa57da-scripts\") pod \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\" (UID: \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.931228 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c304bfee-961f-403c-a998-de879eedf9c9-combined-ca-bundle\") pod \"c304bfee-961f-403c-a998-de879eedf9c9\" (UID: \"c304bfee-961f-403c-a998-de879eedf9c9\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.931269 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f6644cf-01f6-44cf-95d6-3626f4fa57da-combined-ca-bundle\") pod \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\" (UID: \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.931304 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2f6644cf-01f6-44cf-95d6-3626f4fa57da-sg-core-conf-yaml\") pod \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\" (UID: \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.931365 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmt76\" (UniqueName: \"kubernetes.io/projected/c304bfee-961f-403c-a998-de879eedf9c9-kube-api-access-cmt76\") pod \"c304bfee-961f-403c-a998-de879eedf9c9\" (UID: \"c304bfee-961f-403c-a998-de879eedf9c9\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.931428 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c304bfee-961f-403c-a998-de879eedf9c9-kolla-config\") pod \"c304bfee-961f-403c-a998-de879eedf9c9\" (UID: \"c304bfee-961f-403c-a998-de879eedf9c9\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.931453 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f6644cf-01f6-44cf-95d6-3626f4fa57da-config-data\") pod \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\" (UID: \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.931515 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f6644cf-01f6-44cf-95d6-3626f4fa57da-ceilometer-tls-certs\") pod \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\" (UID: \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.931574 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztr2b\" (UniqueName: \"kubernetes.io/projected/2f6644cf-01f6-44cf-95d6-3626f4fa57da-kube-api-access-ztr2b\") pod \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\" (UID: \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.931614 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f6644cf-01f6-44cf-95d6-3626f4fa57da-run-httpd\") pod \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\" (UID: \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.931667 5039 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f6644cf-01f6-44cf-95d6-3626f4fa57da-log-httpd\") pod \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\" (UID: \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.932095 5039 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/498ddd50-96b8-491c-92e9-8c98bc7fa123-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.932106 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2090e8f7-2d03-4d3e-914b-6672655d35be-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.932115 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2090e8f7-2d03-4d3e-914b-6672655d35be-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.932123 5039 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2090e8f7-2d03-4d3e-914b-6672655d35be-logs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.932132 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m45cp\" (UniqueName: \"kubernetes.io/projected/2090e8f7-2d03-4d3e-914b-6672655d35be-kube-api-access-m45cp\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.932444 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f6644cf-01f6-44cf-95d6-3626f4fa57da-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2f6644cf-01f6-44cf-95d6-3626f4fa57da" (UID: "2f6644cf-01f6-44cf-95d6-3626f4fa57da"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.933719 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c304bfee-961f-403c-a998-de879eedf9c9-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "c304bfee-961f-403c-a998-de879eedf9c9" (UID: "c304bfee-961f-403c-a998-de879eedf9c9"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.937841 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c304bfee-961f-403c-a998-de879eedf9c9-kube-api-access-cmt76" (OuterVolumeSpecName: "kube-api-access-cmt76") pod "c304bfee-961f-403c-a998-de879eedf9c9" (UID: "c304bfee-961f-403c-a998-de879eedf9c9"). InnerVolumeSpecName "kube-api-access-cmt76". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.938216 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f6644cf-01f6-44cf-95d6-3626f4fa57da-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2f6644cf-01f6-44cf-95d6-3626f4fa57da" (UID: "2f6644cf-01f6-44cf-95d6-3626f4fa57da"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.939629 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c304bfee-961f-403c-a998-de879eedf9c9-config-data" (OuterVolumeSpecName: "config-data") pod "c304bfee-961f-403c-a998-de879eedf9c9" (UID: "c304bfee-961f-403c-a998-de879eedf9c9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.950374 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-q9wmm" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.950389 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.951301 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f6644cf-01f6-44cf-95d6-3626f4fa57da","Type":"ContainerDied","Data":"1307b1c8b415803c92e83e658a3c76a94c43fc6694143f8e8e5300a2c9fa435d"} Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.951367 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.955870 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.963501 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.969217 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f6644cf-01f6-44cf-95d6-3626f4fa57da-scripts" (OuterVolumeSpecName: "scripts") pod "2f6644cf-01f6-44cf-95d6-3626f4fa57da" (UID: "2f6644cf-01f6-44cf-95d6-3626f4fa57da"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.970091 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2090e8f7-2d03-4d3e-914b-6672655d35be-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2090e8f7-2d03-4d3e-914b-6672655d35be" (UID: "2090e8f7-2d03-4d3e-914b-6672655d35be"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.971224 5039 scope.go:117] "RemoveContainer" containerID="cbd478b60e8a62c03000eca9bac6af85c631c4b4d8428ddc09f53baeaa9ca2e9" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.980997 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-757b86cf47-brmgg" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.981093 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-6646-account-create-update-rjc76" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.981751 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e7d3-account-create-update-pslcx" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.982247 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.986091 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.986616 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e","Type":"ContainerDied","Data":"f072e99835b6d4f9a572ba752899b013189d367019b681c0e68600eb8b9d2692"} Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.986721 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-68f47564b6-tbx7d" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.987220 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-286b-account-create-update-dm7tt" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.987247 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-fae2-account-create-update-hhbtz" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.987268 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-4e5c-account-create-update-q94vs" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.987224 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-0596-account-create-update-2qxp2" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.017272 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f6644cf-01f6-44cf-95d6-3626f4fa57da-kube-api-access-ztr2b" (OuterVolumeSpecName: "kube-api-access-ztr2b") pod "2f6644cf-01f6-44cf-95d6-3626f4fa57da" (UID: "2f6644cf-01f6-44cf-95d6-3626f4fa57da"). InnerVolumeSpecName "kube-api-access-ztr2b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.020152 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.020255 5039 util.go:30] "No sandbox for pod can be found. 
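[editor's note] The "SyncLoop DELETE" / "SyncLoop REMOVE" pairs above come from the API-server update stream, while "SyncLoop (PLEG): event for pod" entries come from the runtime event stream; the kubelet multiplexes both in one loop. The following is a schematic Go sketch of that select-over-channels pattern, not kubelet source; the types and event names here are illustrative stand-ins.

```go
// syncloop_sketch.go: the shape behind the interleaved SyncLoop entries.
package main

import "fmt"

type podUpdate struct {
	op  string // "DELETE" (deletion requested) or "REMOVE" (object gone)
	pod string
}

type plegEvent struct {
	typ string // e.g. "ContainerDied"
	pod string
}

// Drain exactly n queued events; the real loop runs until kubelet shutdown.
func syncLoop(updates <-chan podUpdate, pleg <-chan plegEvent, n int) {
	for i := 0; i < n; i++ {
		select {
		case u := <-updates:
			fmt.Printf("SyncLoop %s source=api pods=[%s]\n", u.op, u.pod)
		case e := <-pleg:
			fmt.Printf("SyncLoop (PLEG): event for pod %s: %s\n", e.pod, e.typ)
		}
	}
}

func main() {
	updates := make(chan podUpdate, 2)
	pleg := make(chan plegEvent, 1)

	// Replay three of the events visible in the log above.
	updates <- podUpdate{"DELETE", "openstack/glance-default-external-api-0"}
	pleg <- plegEvent{"ContainerDied", "openstack/ceilometer-0"}
	updates <- podUpdate{"REMOVE", "openstack/glance-default-external-api-0"}

	syncLoop(updates, pleg, 3)
}
```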
Need to start a new one" pod="openstack/keystone-e7d3-account-create-update-pslcx" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.032633 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5brp\" (UniqueName: \"kubernetes.io/projected/f6a7de18-5bf6-4275-b6db-f19701d07001-kube-api-access-z5brp\") pod \"f6a7de18-5bf6-4275-b6db-f19701d07001\" (UID: \"f6a7de18-5bf6-4275-b6db-f19701d07001\") " Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.032722 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2125aae4-cb1a-4329-ba0a-68cc3661427b-config-data-custom\") pod \"2125aae4-cb1a-4329-ba0a-68cc3661427b\" (UID: \"2125aae4-cb1a-4329-ba0a-68cc3661427b\") " Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.032745 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6a7de18-5bf6-4275-b6db-f19701d07001-combined-ca-bundle\") pod \"f6a7de18-5bf6-4275-b6db-f19701d07001\" (UID: \"f6a7de18-5bf6-4275-b6db-f19701d07001\") " Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.032770 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2125aae4-cb1a-4329-ba0a-68cc3661427b-combined-ca-bundle\") pod \"2125aae4-cb1a-4329-ba0a-68cc3661427b\" (UID: \"2125aae4-cb1a-4329-ba0a-68cc3661427b\") " Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.032795 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8kp5\" (UniqueName: \"kubernetes.io/projected/fc88f91b-e82d-4937-ad42-d94c3d464b55-kube-api-access-t8kp5\") pod \"fc88f91b-e82d-4937-ad42-d94c3d464b55\" (UID: \"fc88f91b-e82d-4937-ad42-d94c3d464b55\") " Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.032947 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nznrt\" (UniqueName: \"kubernetes.io/projected/2125aae4-cb1a-4329-ba0a-68cc3661427b-kube-api-access-nznrt\") pod \"2125aae4-cb1a-4329-ba0a-68cc3661427b\" (UID: \"2125aae4-cb1a-4329-ba0a-68cc3661427b\") " Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.032979 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2125aae4-cb1a-4329-ba0a-68cc3661427b-public-tls-certs\") pod \"2125aae4-cb1a-4329-ba0a-68cc3661427b\" (UID: \"2125aae4-cb1a-4329-ba0a-68cc3661427b\") " Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.033044 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2125aae4-cb1a-4329-ba0a-68cc3661427b-config-data\") pod \"2125aae4-cb1a-4329-ba0a-68cc3661427b\" (UID: \"2125aae4-cb1a-4329-ba0a-68cc3661427b\") " Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.033063 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc88f91b-e82d-4937-ad42-d94c3d464b55-operator-scripts\") pod \"fc88f91b-e82d-4937-ad42-d94c3d464b55\" (UID: \"fc88f91b-e82d-4937-ad42-d94c3d464b55\") " Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.033089 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/f6a7de18-5bf6-4275-b6db-f19701d07001-etc-machine-id\") pod \"f6a7de18-5bf6-4275-b6db-f19701d07001\" (UID: \"f6a7de18-5bf6-4275-b6db-f19701d07001\") " Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.033112 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6a7de18-5bf6-4275-b6db-f19701d07001-config-data\") pod \"f6a7de18-5bf6-4275-b6db-f19701d07001\" (UID: \"f6a7de18-5bf6-4275-b6db-f19701d07001\") " Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.033144 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f6a7de18-5bf6-4275-b6db-f19701d07001-config-data-custom\") pod \"f6a7de18-5bf6-4275-b6db-f19701d07001\" (UID: \"f6a7de18-5bf6-4275-b6db-f19701d07001\") " Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.033178 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2125aae4-cb1a-4329-ba0a-68cc3661427b-logs\") pod \"2125aae4-cb1a-4329-ba0a-68cc3661427b\" (UID: \"2125aae4-cb1a-4329-ba0a-68cc3661427b\") " Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.033198 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2125aae4-cb1a-4329-ba0a-68cc3661427b-internal-tls-certs\") pod \"2125aae4-cb1a-4329-ba0a-68cc3661427b\" (UID: \"2125aae4-cb1a-4329-ba0a-68cc3661427b\") " Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.033229 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6a7de18-5bf6-4275-b6db-f19701d07001-scripts\") pod \"f6a7de18-5bf6-4275-b6db-f19701d07001\" (UID: \"f6a7de18-5bf6-4275-b6db-f19701d07001\") " Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.033593 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c304bfee-961f-403c-a998-de879eedf9c9-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.033604 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f6644cf-01f6-44cf-95d6-3626f4fa57da-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.033615 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmt76\" (UniqueName: \"kubernetes.io/projected/c304bfee-961f-403c-a998-de879eedf9c9-kube-api-access-cmt76\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.033624 5039 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c304bfee-961f-403c-a998-de879eedf9c9-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.033634 5039 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2090e8f7-2d03-4d3e-914b-6672655d35be-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.033643 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztr2b\" (UniqueName: \"kubernetes.io/projected/2f6644cf-01f6-44cf-95d6-3626f4fa57da-kube-api-access-ztr2b\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc 
kubenswrapper[5039]: I0130 13:28:19.033659 5039 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f6644cf-01f6-44cf-95d6-3626f4fa57da-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.033667 5039 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f6644cf-01f6-44cf-95d6-3626f4fa57da-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.049279 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6a7de18-5bf6-4275-b6db-f19701d07001-scripts" (OuterVolumeSpecName: "scripts") pod "f6a7de18-5bf6-4275-b6db-f19701d07001" (UID: "f6a7de18-5bf6-4275-b6db-f19701d07001"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.060269 5039 scope.go:117] "RemoveContainer" containerID="46c7c1dd8a4c8df99e1dd7edf28c41b4137267eeafa3248a2c0d8c73a663531a" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.062137 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c304bfee-961f-403c-a998-de879eedf9c9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c304bfee-961f-403c-a998-de879eedf9c9" (UID: "c304bfee-961f-403c-a998-de879eedf9c9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.062629 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc88f91b-e82d-4937-ad42-d94c3d464b55-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fc88f91b-e82d-4937-ad42-d94c3d464b55" (UID: "fc88f91b-e82d-4937-ad42-d94c3d464b55"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.062685 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6a7de18-5bf6-4275-b6db-f19701d07001-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f6a7de18-5bf6-4275-b6db-f19701d07001" (UID: "f6a7de18-5bf6-4275-b6db-f19701d07001"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.062754 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2125aae4-cb1a-4329-ba0a-68cc3661427b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2125aae4-cb1a-4329-ba0a-68cc3661427b" (UID: "2125aae4-cb1a-4329-ba0a-68cc3661427b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.062905 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc88f91b-e82d-4937-ad42-d94c3d464b55-kube-api-access-t8kp5" (OuterVolumeSpecName: "kube-api-access-t8kp5") pod "fc88f91b-e82d-4937-ad42-d94c3d464b55" (UID: "fc88f91b-e82d-4937-ad42-d94c3d464b55"). InnerVolumeSpecName "kube-api-access-t8kp5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.066993 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6a7de18-5bf6-4275-b6db-f19701d07001-kube-api-access-z5brp" (OuterVolumeSpecName: "kube-api-access-z5brp") pod "f6a7de18-5bf6-4275-b6db-f19701d07001" (UID: "f6a7de18-5bf6-4275-b6db-f19701d07001"). InnerVolumeSpecName "kube-api-access-z5brp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:19 crc kubenswrapper[5039]: E0130 13:28:19.067244 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46c7c1dd8a4c8df99e1dd7edf28c41b4137267eeafa3248a2c0d8c73a663531a\": container with ID starting with 46c7c1dd8a4c8df99e1dd7edf28c41b4137267eeafa3248a2c0d8c73a663531a not found: ID does not exist" containerID="46c7c1dd8a4c8df99e1dd7edf28c41b4137267eeafa3248a2c0d8c73a663531a" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.067301 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46c7c1dd8a4c8df99e1dd7edf28c41b4137267eeafa3248a2c0d8c73a663531a"} err="failed to get container status \"46c7c1dd8a4c8df99e1dd7edf28c41b4137267eeafa3248a2c0d8c73a663531a\": rpc error: code = NotFound desc = could not find container \"46c7c1dd8a4c8df99e1dd7edf28c41b4137267eeafa3248a2c0d8c73a663531a\": container with ID starting with 46c7c1dd8a4c8df99e1dd7edf28c41b4137267eeafa3248a2c0d8c73a663531a not found: ID does not exist" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.067326 5039 scope.go:117] "RemoveContainer" containerID="cbd478b60e8a62c03000eca9bac6af85c631c4b4d8428ddc09f53baeaa9ca2e9" Jan 30 13:28:19 crc kubenswrapper[5039]: E0130 13:28:19.069223 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbd478b60e8a62c03000eca9bac6af85c631c4b4d8428ddc09f53baeaa9ca2e9\": container with ID starting with cbd478b60e8a62c03000eca9bac6af85c631c4b4d8428ddc09f53baeaa9ca2e9 not found: ID does not exist" containerID="cbd478b60e8a62c03000eca9bac6af85c631c4b4d8428ddc09f53baeaa9ca2e9" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.069266 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbd478b60e8a62c03000eca9bac6af85c631c4b4d8428ddc09f53baeaa9ca2e9"} err="failed to get container status \"cbd478b60e8a62c03000eca9bac6af85c631c4b4d8428ddc09f53baeaa9ca2e9\": rpc error: code = NotFound desc = could not find container \"cbd478b60e8a62c03000eca9bac6af85c631c4b4d8428ddc09f53baeaa9ca2e9\": container with ID starting with cbd478b60e8a62c03000eca9bac6af85c631c4b4d8428ddc09f53baeaa9ca2e9 not found: ID does not exist" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.069292 5039 scope.go:117] "RemoveContainer" containerID="d3e1de70ee6fccf94c178c436b16b841fb062895d65d5c25af3308a7fa503673" Jan 30 13:28:19 crc kubenswrapper[5039]: E0130 13:28:19.070867 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3e1de70ee6fccf94c178c436b16b841fb062895d65d5c25af3308a7fa503673\": container with ID starting with d3e1de70ee6fccf94c178c436b16b841fb062895d65d5c25af3308a7fa503673 not found: ID does not exist" containerID="d3e1de70ee6fccf94c178c436b16b841fb062895d65d5c25af3308a7fa503673" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.070892 5039 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3e1de70ee6fccf94c178c436b16b841fb062895d65d5c25af3308a7fa503673"} err="failed to get container status \"d3e1de70ee6fccf94c178c436b16b841fb062895d65d5c25af3308a7fa503673\": rpc error: code = NotFound desc = could not find container \"d3e1de70ee6fccf94c178c436b16b841fb062895d65d5c25af3308a7fa503673\": container with ID starting with d3e1de70ee6fccf94c178c436b16b841fb062895d65d5c25af3308a7fa503673 not found: ID does not exist" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.070908 5039 scope.go:117] "RemoveContainer" containerID="099271e408d36405bffd409c77b39945cf16bd33eb771b32e6c679068653606c" Jan 30 13:28:19 crc kubenswrapper[5039]: E0130 13:28:19.072579 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"099271e408d36405bffd409c77b39945cf16bd33eb771b32e6c679068653606c\": container with ID starting with 099271e408d36405bffd409c77b39945cf16bd33eb771b32e6c679068653606c not found: ID does not exist" containerID="099271e408d36405bffd409c77b39945cf16bd33eb771b32e6c679068653606c" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.072625 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"099271e408d36405bffd409c77b39945cf16bd33eb771b32e6c679068653606c"} err="failed to get container status \"099271e408d36405bffd409c77b39945cf16bd33eb771b32e6c679068653606c\": rpc error: code = NotFound desc = could not find container \"099271e408d36405bffd409c77b39945cf16bd33eb771b32e6c679068653606c\": container with ID starting with 099271e408d36405bffd409c77b39945cf16bd33eb771b32e6c679068653606c not found: ID does not exist" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.072657 5039 scope.go:117] "RemoveContainer" containerID="74a546f04020952f012eaaf8e2c1204925de78633cc29e8909d63b15b2d2fa22" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.075802 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2125aae4-cb1a-4329-ba0a-68cc3661427b-logs" (OuterVolumeSpecName: "logs") pod "2125aae4-cb1a-4329-ba0a-68cc3661427b" (UID: "2125aae4-cb1a-4329-ba0a-68cc3661427b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.076270 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c304bfee-961f-403c-a998-de879eedf9c9-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "c304bfee-961f-403c-a998-de879eedf9c9" (UID: "c304bfee-961f-403c-a998-de879eedf9c9"). InnerVolumeSpecName "memcached-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.076726 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.090904 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.101251 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f6644cf-01f6-44cf-95d6-3626f4fa57da-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2f6644cf-01f6-44cf-95d6-3626f4fa57da" (UID: "2f6644cf-01f6-44cf-95d6-3626f4fa57da"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.110292 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2125aae4-cb1a-4329-ba0a-68cc3661427b-kube-api-access-nznrt" (OuterVolumeSpecName: "kube-api-access-nznrt") pod "2125aae4-cb1a-4329-ba0a-68cc3661427b" (UID: "2125aae4-cb1a-4329-ba0a-68cc3661427b"). InnerVolumeSpecName "kube-api-access-nznrt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.114890 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2090e8f7-2d03-4d3e-914b-6672655d35be-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "2090e8f7-2d03-4d3e-914b-6672655d35be" (UID: "2090e8f7-2d03-4d3e-914b-6672655d35be"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.122856 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-b755c4586-qglmf" podStartSLOduration=8.122834043 podStartE2EDuration="8.122834043s" podCreationTimestamp="2026-01-30 13:28:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:28:19.017778069 +0000 UTC m=+1463.678459296" watchObservedRunningTime="2026-01-30 13:28:19.122834043 +0000 UTC m=+1463.783515270" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.135915 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f7023ce-3b22-4301-8535-b51dae5ffc85-combined-ca-bundle\") pod \"4f7023ce-3b22-4301-8535-b51dae5ffc85\" (UID: \"4f7023ce-3b22-4301-8535-b51dae5ffc85\") " Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.136116 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f7023ce-3b22-4301-8535-b51dae5ffc85-config-data\") pod \"4f7023ce-3b22-4301-8535-b51dae5ffc85\" (UID: \"4f7023ce-3b22-4301-8535-b51dae5ffc85\") " Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.136229 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjn8h\" (UniqueName: \"kubernetes.io/projected/4f7023ce-3b22-4301-8535-b51dae5ffc85-kube-api-access-tjn8h\") pod \"4f7023ce-3b22-4301-8535-b51dae5ffc85\" (UID: \"4f7023ce-3b22-4301-8535-b51dae5ffc85\") " Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.139846 5039 scope.go:117] "RemoveContainer" containerID="25d56a857967dbfe850f8386703dbeacd9215dfb3f0bece9d24ab72061de1a36" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.151790 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6a7de18-5bf6-4275-b6db-f19701d07001-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f6a7de18-5bf6-4275-b6db-f19701d07001" (UID: "f6a7de18-5bf6-4275-b6db-f19701d07001"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.159285 5039 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f6a7de18-5bf6-4275-b6db-f19701d07001-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.159593 5039 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2125aae4-cb1a-4329-ba0a-68cc3661427b-logs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.159617 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6a7de18-5bf6-4275-b6db-f19701d07001-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.159632 5039 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/c304bfee-961f-403c-a998-de879eedf9c9-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.159647 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c304bfee-961f-403c-a998-de879eedf9c9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.159659 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5brp\" (UniqueName: \"kubernetes.io/projected/f6a7de18-5bf6-4275-b6db-f19701d07001-kube-api-access-z5brp\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.159672 5039 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2f6644cf-01f6-44cf-95d6-3626f4fa57da-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.159682 5039 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2125aae4-cb1a-4329-ba0a-68cc3661427b-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.159694 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8kp5\" (UniqueName: \"kubernetes.io/projected/fc88f91b-e82d-4937-ad42-d94c3d464b55-kube-api-access-t8kp5\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.159706 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nznrt\" (UniqueName: \"kubernetes.io/projected/2125aae4-cb1a-4329-ba0a-68cc3661427b-kube-api-access-nznrt\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.159717 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc88f91b-e82d-4937-ad42-d94c3d464b55-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.159727 5039 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f6a7de18-5bf6-4275-b6db-f19701d07001-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.159738 5039 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2090e8f7-2d03-4d3e-914b-6672655d35be-internal-tls-certs\") on node 
\"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.178047 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f7023ce-3b22-4301-8535-b51dae5ffc85-kube-api-access-tjn8h" (OuterVolumeSpecName: "kube-api-access-tjn8h") pod "4f7023ce-3b22-4301-8535-b51dae5ffc85" (UID: "4f7023ce-3b22-4301-8535-b51dae5ffc85"). InnerVolumeSpecName "kube-api-access-tjn8h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.194201 5039 scope.go:117] "RemoveContainer" containerID="5da3b6bf1f3c105594b3fd7fb80dc64462fc055bc8ad723c3ee5ff31777202c5" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.218133 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-757b86cf47-brmgg"] Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.256790 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-757b86cf47-brmgg"] Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.272491 5039 scope.go:117] "RemoveContainer" containerID="d11e43f07a403d758ee01061766af01b228378dcc7b6c86d6a066828863d2c31" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.275745 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjn8h\" (UniqueName: \"kubernetes.io/projected/4f7023ce-3b22-4301-8535-b51dae5ffc85-kube-api-access-tjn8h\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.279844 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-68f47564b6-tbx7d"] Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.282068 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2125aae4-cb1a-4329-ba0a-68cc3661427b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "2125aae4-cb1a-4329-ba0a-68cc3661427b" (UID: "2125aae4-cb1a-4329-ba0a-68cc3661427b"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.297335 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-68f47564b6-tbx7d"] Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.300156 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f7023ce-3b22-4301-8535-b51dae5ffc85-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4f7023ce-3b22-4301-8535-b51dae5ffc85" (UID: "4f7023ce-3b22-4301-8535-b51dae5ffc85"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.330193 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2125aae4-cb1a-4329-ba0a-68cc3661427b-config-data" (OuterVolumeSpecName: "config-data") pod "2125aae4-cb1a-4329-ba0a-68cc3661427b" (UID: "2125aae4-cb1a-4329-ba0a-68cc3661427b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.336007 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.359858 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.360509 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f7023ce-3b22-4301-8535-b51dae5ffc85-config-data" (OuterVolumeSpecName: "config-data") pod "4f7023ce-3b22-4301-8535-b51dae5ffc85" (UID: "4f7023ce-3b22-4301-8535-b51dae5ffc85"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.376818 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f7023ce-3b22-4301-8535-b51dae5ffc85-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.376852 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2125aae4-cb1a-4329-ba0a-68cc3661427b-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.376862 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f7023ce-3b22-4301-8535-b51dae5ffc85-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.376870 5039 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2125aae4-cb1a-4329-ba0a-68cc3661427b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.378379 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.385122 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.397973 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f6644cf-01f6-44cf-95d6-3626f4fa57da-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2f6644cf-01f6-44cf-95d6-3626f4fa57da" (UID: "2f6644cf-01f6-44cf-95d6-3626f4fa57da"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.399872 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-fae2-account-create-update-hhbtz"] Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.410516 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-fae2-account-create-update-hhbtz"] Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.411738 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f6644cf-01f6-44cf-95d6-3626f4fa57da-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "2f6644cf-01f6-44cf-95d6-3626f4fa57da" (UID: "2f6644cf-01f6-44cf-95d6-3626f4fa57da"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.424294 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-286b-account-create-update-dm7tt"] Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.441323 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2125aae4-cb1a-4329-ba0a-68cc3661427b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2125aae4-cb1a-4329-ba0a-68cc3661427b" (UID: "2125aae4-cb1a-4329-ba0a-68cc3661427b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.449504 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-286b-account-create-update-dm7tt"] Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.472451 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-0596-account-create-update-2qxp2"] Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.474481 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-0596-account-create-update-2qxp2"] Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.478449 5039 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2125aae4-cb1a-4329-ba0a-68cc3661427b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.478476 5039 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f6644cf-01f6-44cf-95d6-3626f4fa57da-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.478484 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f6644cf-01f6-44cf-95d6-3626f4fa57da-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.490395 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-6646-account-create-update-rjc76"] Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.493148 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2125aae4-cb1a-4329-ba0a-68cc3661427b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2125aae4-cb1a-4329-ba0a-68cc3661427b" (UID: "2125aae4-cb1a-4329-ba0a-68cc3661427b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.495846 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6a7de18-5bf6-4275-b6db-f19701d07001-config-data" (OuterVolumeSpecName: "config-data") pod "f6a7de18-5bf6-4275-b6db-f19701d07001" (UID: "f6a7de18-5bf6-4275-b6db-f19701d07001"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.502242 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-6646-account-create-update-rjc76"] Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.507150 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f6644cf-01f6-44cf-95d6-3626f4fa57da-config-data" (OuterVolumeSpecName: "config-data") pod "2f6644cf-01f6-44cf-95d6-3626f4fa57da" (UID: "2f6644cf-01f6-44cf-95d6-3626f4fa57da"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.526788 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-4e5c-account-create-update-q94vs"] Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.527808 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6a7de18-5bf6-4275-b6db-f19701d07001-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f6a7de18-5bf6-4275-b6db-f19701d07001" (UID: "f6a7de18-5bf6-4275-b6db-f19701d07001"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.532034 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-4e5c-account-create-update-q94vs"] Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.539343 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.545194 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/memcached-0"] Jan 30 13:28:19 crc kubenswrapper[5039]: E0130 13:28:19.580182 5039 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.580220 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6a7de18-5bf6-4275-b6db-f19701d07001-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: E0130 13:28:19.580247 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/31674257-f143-40ab-97b9-dbf3153277c3-config-data podName:31674257-f143-40ab-97b9-dbf3153277c3 nodeName:}" failed. No retries permitted until 2026-01-30 13:28:27.580228595 +0000 UTC m=+1472.240909822 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/31674257-f143-40ab-97b9-dbf3153277c3-config-data") pod "rabbitmq-server-0" (UID: "31674257-f143-40ab-97b9-dbf3153277c3") : configmap "rabbitmq-config-data" not found Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.580274 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6a7de18-5bf6-4275-b6db-f19701d07001-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.580288 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2125aae4-cb1a-4329-ba0a-68cc3661427b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.580298 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f6644cf-01f6-44cf-95d6-3626f4fa57da-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.682671 5039 scope.go:117] "RemoveContainer" containerID="ec276d758e8b1629fbc47814ca11f272acbab2233d4e31135f118cd217e481cf" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.692383 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.700066 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.700950 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.706206 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.710521 5039 scope.go:117] "RemoveContainer" containerID="3e63cef290b9c322a18fac31a7871a3b878e755d7e458a6ae9c29147b528c3fc" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.751095 5039 util.go:48] "No ready sandbox for pod can be found. 
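[editor's note] The nestedpendingoperations entry above defers the next MountVolume.SetUp attempt for rabbitmq-server-0 by 8s ("durationBeforeRetry 8s") because the rabbitmq-config-data ConfigMap does not exist yet; the kubelet backs failed volume operations off with a doubling schedule rather than retrying hot. A minimal Go sketch of such a schedule follows; the initial delay, factor, and cap are chosen here for illustration, not read from kubelet's configuration.

```go
// mount_backoff.go: an exponential backoff schedule with a cap.
package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		initial  = 500 * time.Millisecond
		factor   = 2
		maxDelay = 2 * time.Minute // stop doubling once the cap is reached
	)

	delay := initial
	for attempt := 1; attempt <= 10; attempt++ {
		// Analogue of "No retries permitted until ... (durationBeforeRetry Ns)".
		fmt.Printf("attempt %2d: no retries permitted for %v\n", attempt, delay)
		if next := delay * factor; next < maxDelay {
			delay = next
		} else {
			delay = maxDelay
		}
	}
}
```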
Need to start a new one" pod="openstack/openstack-galera-0" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.751599 5039 scope.go:117] "RemoveContainer" containerID="ac7be433e1fc4581e7c85dceffa68e2d11ac386c99f3b775ad7b9bfea986c120" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.785451 5039 scope.go:117] "RemoveContainer" containerID="a73101ab09711a570267173488a9c5b6f2eeccafb5e3dc305c7de9c7690d9570" Jan 30 13:28:19 crc kubenswrapper[5039]: E0130 13:28:19.800575 5039 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffe59186_82c9_4825_98af_a345318afc40.slice/crio-conmon-318ec0d48627de3296e163bd9e901ae032d9e692981c9e7373ce827d836b847f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffe59186_82c9_4825_98af_a345318afc40.slice/crio-318ec0d48627de3296e163bd9e901ae032d9e692981c9e7373ce827d836b847f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f6644cf_01f6_44cf_95d6_3626f4fa57da.slice/crio-1307b1c8b415803c92e83e658a3c76a94c43fc6694143f8e8e5300a2c9fa435d\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2090e8f7_2d03_4d3e_914b_6672655d35be.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2090e8f7_2d03_4d3e_914b_6672655d35be.slice/crio-21caa728b45d4cd46b72a58777a9f2bd19807862ed3d4ac1d9769af4fe89d6d4\": RecentStats: unable to find data in memory cache]" Jan 30 13:28:19 crc kubenswrapper[5039]: E0130 13:28:19.822328 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c83d874abcdd3095947980187589ffbe8240a795dbfa1c7950d492e49c52b14e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 13:28:19 crc kubenswrapper[5039]: E0130 13:28:19.835408 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c83d874abcdd3095947980187589ffbe8240a795dbfa1c7950d492e49c52b14e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 13:28:19 crc kubenswrapper[5039]: E0130 13:28:19.837604 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c83d874abcdd3095947980187589ffbe8240a795dbfa1c7950d492e49c52b14e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 13:28:19 crc kubenswrapper[5039]: E0130 13:28:19.837674 5039 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="798d080c-2565-4410-9cda-220d1154b8de" containerName="nova-cell1-conductor-conductor" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.862568 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/keystone-7467d89c49-kfwss" podUID="60ae3d16-d381-4891-901f-e2d07d3a7720" containerName="keystone-api" probeResult="failure" output="Get 
\"https://10.217.0.150:5000/v3\": read tcp 10.217.0.2:37960->10.217.0.150:5000: read: connection reset by peer" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.879407 5039 scope.go:117] "RemoveContainer" containerID="caf5b33ea1a3e30f796411e0c081ae3e8abc92fb4b810718314aafc7b901622e" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.883462 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ffe59186-82c9-4825-98af-a345318afc40\" (UID: \"ffe59186-82c9-4825-98af-a345318afc40\") " Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.883524 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmb2c\" (UniqueName: \"kubernetes.io/projected/ffe59186-82c9-4825-98af-a345318afc40-kube-api-access-kmb2c\") pod \"ffe59186-82c9-4825-98af-a345318afc40\" (UID: \"ffe59186-82c9-4825-98af-a345318afc40\") " Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.883553 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffe59186-82c9-4825-98af-a345318afc40-combined-ca-bundle\") pod \"ffe59186-82c9-4825-98af-a345318afc40\" (UID: \"ffe59186-82c9-4825-98af-a345318afc40\") " Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.883605 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ffe59186-82c9-4825-98af-a345318afc40-config-data-default\") pod \"ffe59186-82c9-4825-98af-a345318afc40\" (UID: \"ffe59186-82c9-4825-98af-a345318afc40\") " Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.883624 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffe59186-82c9-4825-98af-a345318afc40-galera-tls-certs\") pod \"ffe59186-82c9-4825-98af-a345318afc40\" (UID: \"ffe59186-82c9-4825-98af-a345318afc40\") " Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.883665 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ffe59186-82c9-4825-98af-a345318afc40-operator-scripts\") pod \"ffe59186-82c9-4825-98af-a345318afc40\" (UID: \"ffe59186-82c9-4825-98af-a345318afc40\") " Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.883722 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ffe59186-82c9-4825-98af-a345318afc40-config-data-generated\") pod \"ffe59186-82c9-4825-98af-a345318afc40\" (UID: \"ffe59186-82c9-4825-98af-a345318afc40\") " Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.883747 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ffe59186-82c9-4825-98af-a345318afc40-kolla-config\") pod \"ffe59186-82c9-4825-98af-a345318afc40\" (UID: \"ffe59186-82c9-4825-98af-a345318afc40\") " Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.884586 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffe59186-82c9-4825-98af-a345318afc40-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "ffe59186-82c9-4825-98af-a345318afc40" (UID: "ffe59186-82c9-4825-98af-a345318afc40"). InnerVolumeSpecName "kolla-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.886638 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffe59186-82c9-4825-98af-a345318afc40-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "ffe59186-82c9-4825-98af-a345318afc40" (UID: "ffe59186-82c9-4825-98af-a345318afc40"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.886836 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffe59186-82c9-4825-98af-a345318afc40-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "ffe59186-82c9-4825-98af-a345318afc40" (UID: "ffe59186-82c9-4825-98af-a345318afc40"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.887398 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffe59186-82c9-4825-98af-a345318afc40-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ffe59186-82c9-4825-98af-a345318afc40" (UID: "ffe59186-82c9-4825-98af-a345318afc40"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.894263 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffe59186-82c9-4825-98af-a345318afc40-kube-api-access-kmb2c" (OuterVolumeSpecName: "kube-api-access-kmb2c") pod "ffe59186-82c9-4825-98af-a345318afc40" (UID: "ffe59186-82c9-4825-98af-a345318afc40"). InnerVolumeSpecName "kube-api-access-kmb2c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.910261 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "mysql-db") pod "ffe59186-82c9-4825-98af-a345318afc40" (UID: "ffe59186-82c9-4825-98af-a345318afc40"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.928424 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffe59186-82c9-4825-98af-a345318afc40-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ffe59186-82c9-4825-98af-a345318afc40" (UID: "ffe59186-82c9-4825-98af-a345318afc40"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.940993 5039 scope.go:117] "RemoveContainer" containerID="29878841c067a4c2e77d77c0c1e579cd21f99def5165c1d94a042435a87f2dd7" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.963088 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffe59186-82c9-4825-98af-a345318afc40-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "ffe59186-82c9-4825-98af-a345318afc40" (UID: "ffe59186-82c9-4825-98af-a345318afc40"). InnerVolumeSpecName "galera-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.985448 5039 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ffe59186-82c9-4825-98af-a345318afc40-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.985475 5039 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffe59186-82c9-4825-98af-a345318afc40-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.985484 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ffe59186-82c9-4825-98af-a345318afc40-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.985492 5039 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ffe59186-82c9-4825-98af-a345318afc40-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.985502 5039 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ffe59186-82c9-4825-98af-a345318afc40-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.985529 5039 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.985538 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmb2c\" (UniqueName: \"kubernetes.io/projected/ffe59186-82c9-4825-98af-a345318afc40-kube-api-access-kmb2c\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.985546 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffe59186-82c9-4825-98af-a345318afc40-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.989684 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7dc966f764-886wt" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.996641 5039 generic.go:334] "Generic (PLEG): container finished" podID="ffe59186-82c9-4825-98af-a345318afc40" containerID="318ec0d48627de3296e163bd9e901ae032d9e692981c9e7373ce827d836b847f" exitCode=0 Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.996699 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ffe59186-82c9-4825-98af-a345318afc40","Type":"ContainerDied","Data":"318ec0d48627de3296e163bd9e901ae032d9e692981c9e7373ce827d836b847f"} Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.996719 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ffe59186-82c9-4825-98af-a345318afc40","Type":"ContainerDied","Data":"fc9e57a17f46c28bd4ab8c2bc3ffa3503691a12bb69fc56089bb8a446d4b34d5"} Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.996785 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.005615 5039 generic.go:334] "Generic (PLEG): container finished" podID="60ae3d16-d381-4891-901f-e2d07d3a7720" containerID="fee4947e039be1852ec1750b666abb15bd505a2ddedb60f212da5d331a111150" exitCode=0 Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.005672 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7467d89c49-kfwss" event={"ID":"60ae3d16-d381-4891-901f-e2d07d3a7720","Type":"ContainerDied","Data":"fee4947e039be1852ec1750b666abb15bd505a2ddedb60f212da5d331a111150"} Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.028769 5039 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.043310 5039 generic.go:334] "Generic (PLEG): container finished" podID="749976f6-833a-4563-992a-f639cb1552e0" containerID="3020cc9e4acad53ed9c6f1145cd86d42ffb6ee4fe0b6bc05ad658ca921124eb4" exitCode=143 Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.043397 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-b755c4586-qglmf" event={"ID":"749976f6-833a-4563-992a-f639cb1552e0","Type":"ContainerDied","Data":"3020cc9e4acad53ed9c6f1145cd86d42ffb6ee4fe0b6bc05ad658ca921124eb4"} Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.044879 5039 generic.go:334] "Generic (PLEG): container finished" podID="fcd8c24d-b3db-41a0-ac70-d13cd3f2d663" containerID="1d442f2088c550f47ce279b79f9eda2a191a7cfb5fd4e8fd913099eb4e065b03" exitCode=143 Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.044916 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-84b866898f-5xs7l" event={"ID":"fcd8c24d-b3db-41a0-ac70-d13cd3f2d663","Type":"ContainerDied","Data":"1d442f2088c550f47ce279b79f9eda2a191a7cfb5fd4e8fd913099eb4e065b03"} Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.048437 5039 generic.go:334] "Generic (PLEG): container finished" podID="3db29a95-0ed6-4366-8036-388eea4d00b6" containerID="dc2720df3fa94f39b6208a510958d32a68d1fe1a2c7de705b28cce13bbfac66c" exitCode=0 Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.048477 5039 generic.go:334] "Generic (PLEG): container finished" podID="3db29a95-0ed6-4366-8036-388eea4d00b6" containerID="12f42853e550e82839e38760bfb6ad35f880aa90125efe3fcabf6d6b83cdc399" exitCode=143 Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.048540 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7dc966f764-886wt" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.048567 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7dc966f764-886wt" event={"ID":"3db29a95-0ed6-4366-8036-388eea4d00b6","Type":"ContainerDied","Data":"dc2720df3fa94f39b6208a510958d32a68d1fe1a2c7de705b28cce13bbfac66c"} Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.048599 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7dc966f764-886wt" event={"ID":"3db29a95-0ed6-4366-8036-388eea4d00b6","Type":"ContainerDied","Data":"12f42853e550e82839e38760bfb6ad35f880aa90125efe3fcabf6d6b83cdc399"} Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.048608 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7dc966f764-886wt" event={"ID":"3db29a95-0ed6-4366-8036-388eea4d00b6","Type":"ContainerDied","Data":"22d19fd19c4fbae481b8aa497c81ec911e059d516140cc0916d71ede4707f6ac"} Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.048736 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.048552 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e7d3-account-create-update-pslcx" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.048807 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-d68bccdc4-krd48" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.048849 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-q9wmm" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.049000 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.077082 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.085370 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-galera-0"] Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.085196 5039 scope.go:117] "RemoveContainer" containerID="031ec639038378c5b3f539daaac07ec3e116c86eab5c397a4daa509a5370c453" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.086674 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3db29a95-0ed6-4366-8036-388eea4d00b6-config-data\") pod \"3db29a95-0ed6-4366-8036-388eea4d00b6\" (UID: \"3db29a95-0ed6-4366-8036-388eea4d00b6\") " Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.087699 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3db29a95-0ed6-4366-8036-388eea4d00b6-config-data-custom\") pod \"3db29a95-0ed6-4366-8036-388eea4d00b6\" (UID: \"3db29a95-0ed6-4366-8036-388eea4d00b6\") " Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.087857 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3db29a95-0ed6-4366-8036-388eea4d00b6-internal-tls-certs\") pod \"3db29a95-0ed6-4366-8036-388eea4d00b6\" (UID: \"3db29a95-0ed6-4366-8036-388eea4d00b6\") " Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.087906 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3db29a95-0ed6-4366-8036-388eea4d00b6-combined-ca-bundle\") pod \"3db29a95-0ed6-4366-8036-388eea4d00b6\" (UID: \"3db29a95-0ed6-4366-8036-388eea4d00b6\") " Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.087933 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4txlx\" (UniqueName: \"kubernetes.io/projected/3db29a95-0ed6-4366-8036-388eea4d00b6-kube-api-access-4txlx\") pod \"3db29a95-0ed6-4366-8036-388eea4d00b6\" (UID: \"3db29a95-0ed6-4366-8036-388eea4d00b6\") " Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.087966 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3db29a95-0ed6-4366-8036-388eea4d00b6-public-tls-certs\") pod \"3db29a95-0ed6-4366-8036-388eea4d00b6\" (UID: \"3db29a95-0ed6-4366-8036-388eea4d00b6\") " Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.088002 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3db29a95-0ed6-4366-8036-388eea4d00b6-logs\") pod \"3db29a95-0ed6-4366-8036-388eea4d00b6\" (UID: \"3db29a95-0ed6-4366-8036-388eea4d00b6\") " Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.088728 5039 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.090754 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3db29a95-0ed6-4366-8036-388eea4d00b6-logs" (OuterVolumeSpecName: "logs") pod "3db29a95-0ed6-4366-8036-388eea4d00b6" 
(UID: "3db29a95-0ed6-4366-8036-388eea4d00b6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.109166 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3db29a95-0ed6-4366-8036-388eea4d00b6-kube-api-access-4txlx" (OuterVolumeSpecName: "kube-api-access-4txlx") pod "3db29a95-0ed6-4366-8036-388eea4d00b6" (UID: "3db29a95-0ed6-4366-8036-388eea4d00b6"). InnerVolumeSpecName "kube-api-access-4txlx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.115723 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3db29a95-0ed6-4366-8036-388eea4d00b6-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3db29a95-0ed6-4366-8036-388eea4d00b6" (UID: "3db29a95-0ed6-4366-8036-388eea4d00b6"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.148191 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3db29a95-0ed6-4366-8036-388eea4d00b6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3db29a95-0ed6-4366-8036-388eea4d00b6" (UID: "3db29a95-0ed6-4366-8036-388eea4d00b6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.153252 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3db29a95-0ed6-4366-8036-388eea4d00b6-config-data" (OuterVolumeSpecName: "config-data") pod "3db29a95-0ed6-4366-8036-388eea4d00b6" (UID: "3db29a95-0ed6-4366-8036-388eea4d00b6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.168462 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3db29a95-0ed6-4366-8036-388eea4d00b6-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3db29a95-0ed6-4366-8036-388eea4d00b6" (UID: "3db29a95-0ed6-4366-8036-388eea4d00b6"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.187902 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03ea6fff-3bc2-4830-b1f5-53d20cd2a801" path="/var/lib/kubelet/pods/03ea6fff-3bc2-4830-b1f5-53d20cd2a801/volumes" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.188152 5039 scope.go:117] "RemoveContainer" containerID="c86d1c6db2f7db93b58130cab22d63eb2bc4b467426977a92df6b81dc9e34ac1" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.188914 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="157fc077-2a87-4a57-b9a1-728b9acba2a1" path="/var/lib/kubelet/pods/157fc077-2a87-4a57-b9a1-728b9acba2a1/volumes" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.196100 5039 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3db29a95-0ed6-4366-8036-388eea4d00b6-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.196307 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3db29a95-0ed6-4366-8036-388eea4d00b6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.196317 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4txlx\" (UniqueName: \"kubernetes.io/projected/3db29a95-0ed6-4366-8036-388eea4d00b6-kube-api-access-4txlx\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.196326 5039 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3db29a95-0ed6-4366-8036-388eea4d00b6-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.196334 5039 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3db29a95-0ed6-4366-8036-388eea4d00b6-logs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.196342 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3db29a95-0ed6-4366-8036-388eea4d00b6-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.196128 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3db29a95-0ed6-4366-8036-388eea4d00b6-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3db29a95-0ed6-4366-8036-388eea4d00b6" (UID: "3db29a95-0ed6-4366-8036-388eea4d00b6"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.201508 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2090e8f7-2d03-4d3e-914b-6672655d35be" path="/var/lib/kubelet/pods/2090e8f7-2d03-4d3e-914b-6672655d35be/volumes" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.203576 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f6644cf-01f6-44cf-95d6-3626f4fa57da" path="/var/lib/kubelet/pods/2f6644cf-01f6-44cf-95d6-3626f4fa57da/volumes" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.204391 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="498ddd50-96b8-491c-92e9-8c98bc7fa123" path="/var/lib/kubelet/pods/498ddd50-96b8-491c-92e9-8c98bc7fa123/volumes" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.204952 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c58c2f-0d3f-4008-8fdd-fcc50307cc31" path="/var/lib/kubelet/pods/71c58c2f-0d3f-4008-8fdd-fcc50307cc31/volumes" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.205905 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75292c04-e484-4def-a16f-2d703409e49e" path="/var/lib/kubelet/pods/75292c04-e484-4def-a16f-2d703409e49e/volumes" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.206613 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="860591fe-67b6-4a2e-b8f1-29556c8ef320" path="/var/lib/kubelet/pods/860591fe-67b6-4a2e-b8f1-29556c8ef320/volumes" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.207106 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89cd9fbd-ac74-45c9-bdd8-fe3268a9147e" path="/var/lib/kubelet/pods/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e/volumes" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.208161 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c8f6794-a2c1-4d54-a048-71db0a14213e" path="/var/lib/kubelet/pods/9c8f6794-a2c1-4d54-a048-71db0a14213e/volumes" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.208483 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8ed9c2d-3b4a-4202-a2aa-f2e59de5b294" path="/var/lib/kubelet/pods/a8ed9c2d-3b4a-4202-a2aa-f2e59de5b294/volumes" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.208832 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc51df5b-e54d-457e-af37-671db12ee0bd" path="/var/lib/kubelet/pods/bc51df5b-e54d-457e-af37-671db12ee0bd/volumes" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.209213 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c304bfee-961f-403c-a998-de879eedf9c9" path="/var/lib/kubelet/pods/c304bfee-961f-403c-a998-de879eedf9c9/volumes" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.210374 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f26bcd91-af44-4f1f-afca-6db6c3fe5362" path="/var/lib/kubelet/pods/f26bcd91-af44-4f1f-afca-6db6c3fe5362/volumes" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.210716 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4f0006e-6034-4c12-a12e-f2d7767a77cb" path="/var/lib/kubelet/pods/f4f0006e-6034-4c12-a12e-f2d7767a77cb/volumes" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.211312 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffe59186-82c9-4825-98af-a345318afc40" path="/var/lib/kubelet/pods/ffe59186-82c9-4825-98af-a345318afc40/volumes" 
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.212204 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-e7d3-account-create-update-pslcx"]
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.212223 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-e7d3-account-create-update-pslcx"]
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.212238 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-d68bccdc4-krd48"]
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.212249 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-d68bccdc4-krd48"]
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.216073 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-q9wmm"]
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.227856 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-q9wmm"]
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.233589 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.239020 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.248483 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.252988 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.266758 5039 scope.go:117] "RemoveContainer" containerID="8961bfa40ab4c931a7b9ba045e826229b875555f5526dd828650ba4cce1b720a"
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.297992 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33b02367-9855-4316-a76b-613d3b6f4946-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.298043 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kh4d2\" (UniqueName: \"kubernetes.io/projected/33b02367-9855-4316-a76b-613d3b6f4946-kube-api-access-kh4d2\") on node \"crc\" DevicePath \"\""
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.298053 5039 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3db29a95-0ed6-4366-8036-388eea4d00b6-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 30 13:28:20 crc kubenswrapper[5039]: E0130 13:28:20.298751 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="edeb03fc7b1f7c78ab64ce18b567934eb7d265834e26ab22d317bef24cbcb1e7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 30 13:28:20 crc kubenswrapper[5039]: E0130 13:28:20.300369 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="edeb03fc7b1f7c78ab64ce18b567934eb7d265834e26ab22d317bef24cbcb1e7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 30 13:28:20 crc kubenswrapper[5039]: E0130 13:28:20.301551 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="edeb03fc7b1f7c78ab64ce18b567934eb7d265834e26ab22d317bef24cbcb1e7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 30 13:28:20 crc kubenswrapper[5039]: E0130 13:28:20.301599 5039 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="266dbee0-3c74-4820-8165-1955c6ca832a" containerName="nova-scheduler-scheduler"
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.569827 5039 scope.go:117] "RemoveContainer" containerID="318ec0d48627de3296e163bd9e901ae032d9e692981c9e7373ce827d836b847f"
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.571269 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7467d89c49-kfwss"
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.586790 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7dc966f764-886wt"]
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.590749 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.613076 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7dc966f764-886wt"]
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.622529 5039 scope.go:117] "RemoveContainer" containerID="8ef3687b147f30c71389ac61b162a10e83fe0f87d670cd01053d0b6370d904ef"
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.643700 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"31674257-f143-40ab-97b9-dbf3153277c3\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") "
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.643749 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-public-tls-certs\") pod \"60ae3d16-d381-4891-901f-e2d07d3a7720\" (UID: \"60ae3d16-d381-4891-901f-e2d07d3a7720\") "
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.643781 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/31674257-f143-40ab-97b9-dbf3153277c3-rabbitmq-tls\") pod \"31674257-f143-40ab-97b9-dbf3153277c3\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") "
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.643841 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/31674257-f143-40ab-97b9-dbf3153277c3-config-data\") pod \"31674257-f143-40ab-97b9-dbf3153277c3\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") "
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.643864 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-credential-keys\") pod \"60ae3d16-d381-4891-901f-e2d07d3a7720\" (UID: \"60ae3d16-d381-4891-901f-e2d07d3a7720\") "
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.643905 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/31674257-f143-40ab-97b9-dbf3153277c3-rabbitmq-erlang-cookie\") pod \"31674257-f143-40ab-97b9-dbf3153277c3\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") "
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.643930 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-scripts\") pod \"60ae3d16-d381-4891-901f-e2d07d3a7720\" (UID: \"60ae3d16-d381-4891-901f-e2d07d3a7720\") "
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.643959 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/31674257-f143-40ab-97b9-dbf3153277c3-rabbitmq-plugins\") pod \"31674257-f143-40ab-97b9-dbf3153277c3\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") "
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.643976 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-combined-ca-bundle\") pod \"60ae3d16-d381-4891-901f-e2d07d3a7720\" (UID: \"60ae3d16-d381-4891-901f-e2d07d3a7720\") "
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.644001 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/31674257-f143-40ab-97b9-dbf3153277c3-pod-info\") pod \"31674257-f143-40ab-97b9-dbf3153277c3\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") "
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.644059 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-config-data\") pod \"60ae3d16-d381-4891-901f-e2d07d3a7720\" (UID: \"60ae3d16-d381-4891-901f-e2d07d3a7720\") "
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.644098 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-internal-tls-certs\") pod \"60ae3d16-d381-4891-901f-e2d07d3a7720\" (UID: \"60ae3d16-d381-4891-901f-e2d07d3a7720\") "
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.644126 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-fernet-keys\") pod \"60ae3d16-d381-4891-901f-e2d07d3a7720\" (UID: \"60ae3d16-d381-4891-901f-e2d07d3a7720\") "
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.644142 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/31674257-f143-40ab-97b9-dbf3153277c3-server-conf\") pod \"31674257-f143-40ab-97b9-dbf3153277c3\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") "
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.644162 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/31674257-f143-40ab-97b9-dbf3153277c3-plugins-conf\") pod \"31674257-f143-40ab-97b9-dbf3153277c3\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") "
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.644186 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/31674257-f143-40ab-97b9-dbf3153277c3-rabbitmq-confd\") pod \"31674257-f143-40ab-97b9-dbf3153277c3\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") "
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.644215 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pg6zc\" (UniqueName: \"kubernetes.io/projected/31674257-f143-40ab-97b9-dbf3153277c3-kube-api-access-pg6zc\") pod \"31674257-f143-40ab-97b9-dbf3153277c3\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") "
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.645273 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31674257-f143-40ab-97b9-dbf3153277c3-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "31674257-f143-40ab-97b9-dbf3153277c3" (UID: "31674257-f143-40ab-97b9-dbf3153277c3"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.648131 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31674257-f143-40ab-97b9-dbf3153277c3-kube-api-access-pg6zc" (OuterVolumeSpecName: "kube-api-access-pg6zc") pod "31674257-f143-40ab-97b9-dbf3153277c3" (UID: "31674257-f143-40ab-97b9-dbf3153277c3"). InnerVolumeSpecName "kube-api-access-pg6zc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.648450 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-scripts" (OuterVolumeSpecName: "scripts") pod "60ae3d16-d381-4891-901f-e2d07d3a7720" (UID: "60ae3d16-d381-4891-901f-e2d07d3a7720"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.648944 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31674257-f143-40ab-97b9-dbf3153277c3-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "31674257-f143-40ab-97b9-dbf3153277c3" (UID: "31674257-f143-40ab-97b9-dbf3153277c3"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.649079 5039 scope.go:117] "RemoveContainer" containerID="318ec0d48627de3296e163bd9e901ae032d9e692981c9e7373ce827d836b847f"
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.649133 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31674257-f143-40ab-97b9-dbf3153277c3-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "31674257-f143-40ab-97b9-dbf3153277c3" (UID: "31674257-f143-40ab-97b9-dbf3153277c3"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.649469 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31674257-f143-40ab-97b9-dbf3153277c3-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "31674257-f143-40ab-97b9-dbf3153277c3" (UID: "31674257-f143-40ab-97b9-dbf3153277c3"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 13:28:20 crc kubenswrapper[5039]: E0130 13:28:20.649538 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"318ec0d48627de3296e163bd9e901ae032d9e692981c9e7373ce827d836b847f\": container with ID starting with 318ec0d48627de3296e163bd9e901ae032d9e692981c9e7373ce827d836b847f not found: ID does not exist" containerID="318ec0d48627de3296e163bd9e901ae032d9e692981c9e7373ce827d836b847f"
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.649566 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"318ec0d48627de3296e163bd9e901ae032d9e692981c9e7373ce827d836b847f"} err="failed to get container status \"318ec0d48627de3296e163bd9e901ae032d9e692981c9e7373ce827d836b847f\": rpc error: code = NotFound desc = could not find container \"318ec0d48627de3296e163bd9e901ae032d9e692981c9e7373ce827d836b847f\": container with ID starting with 318ec0d48627de3296e163bd9e901ae032d9e692981c9e7373ce827d836b847f not found: ID does not exist"
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.649591 5039 scope.go:117] "RemoveContainer" containerID="8ef3687b147f30c71389ac61b162a10e83fe0f87d670cd01053d0b6370d904ef"
Jan 30 13:28:20 crc kubenswrapper[5039]: E0130 13:28:20.651135 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ef3687b147f30c71389ac61b162a10e83fe0f87d670cd01053d0b6370d904ef\": container with ID starting with 8ef3687b147f30c71389ac61b162a10e83fe0f87d670cd01053d0b6370d904ef not found: ID does not exist" containerID="8ef3687b147f30c71389ac61b162a10e83fe0f87d670cd01053d0b6370d904ef"
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.651168 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ef3687b147f30c71389ac61b162a10e83fe0f87d670cd01053d0b6370d904ef"} err="failed to get container status \"8ef3687b147f30c71389ac61b162a10e83fe0f87d670cd01053d0b6370d904ef\": rpc error: code = NotFound desc = could not find container \"8ef3687b147f30c71389ac61b162a10e83fe0f87d670cd01053d0b6370d904ef\": container with ID starting with 8ef3687b147f30c71389ac61b162a10e83fe0f87d670cd01053d0b6370d904ef not found: ID does not exist"
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.651185 5039 scope.go:117] "RemoveContainer" containerID="dc2720df3fa94f39b6208a510958d32a68d1fe1a2c7de705b28cce13bbfac66c"
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.651724 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/31674257-f143-40ab-97b9-dbf3153277c3-pod-info" (OuterVolumeSpecName: "pod-info") pod "31674257-f143-40ab-97b9-dbf3153277c3" (UID: "31674257-f143-40ab-97b9-dbf3153277c3"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.651898 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "persistence") pod "31674257-f143-40ab-97b9-dbf3153277c3" (UID: "31674257-f143-40ab-97b9-dbf3153277c3"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.657597 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "60ae3d16-d381-4891-901f-e2d07d3a7720" (UID: "60ae3d16-d381-4891-901f-e2d07d3a7720"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.663290 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "60ae3d16-d381-4891-901f-e2d07d3a7720" (UID: "60ae3d16-d381-4891-901f-e2d07d3a7720"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.674098 5039 scope.go:117] "RemoveContainer" containerID="12f42853e550e82839e38760bfb6ad35f880aa90125efe3fcabf6d6b83cdc399"
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.674812 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-config-data" (OuterVolumeSpecName: "config-data") pod "60ae3d16-d381-4891-901f-e2d07d3a7720" (UID: "60ae3d16-d381-4891-901f-e2d07d3a7720"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.675099 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31674257-f143-40ab-97b9-dbf3153277c3-config-data" (OuterVolumeSpecName: "config-data") pod "31674257-f143-40ab-97b9-dbf3153277c3" (UID: "31674257-f143-40ab-97b9-dbf3153277c3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.677207 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "60ae3d16-d381-4891-901f-e2d07d3a7720" (UID: "60ae3d16-d381-4891-901f-e2d07d3a7720"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.699186 5039 scope.go:117] "RemoveContainer" containerID="dc2720df3fa94f39b6208a510958d32a68d1fe1a2c7de705b28cce13bbfac66c"
Jan 30 13:28:20 crc kubenswrapper[5039]: E0130 13:28:20.699594 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc2720df3fa94f39b6208a510958d32a68d1fe1a2c7de705b28cce13bbfac66c\": container with ID starting with dc2720df3fa94f39b6208a510958d32a68d1fe1a2c7de705b28cce13bbfac66c not found: ID does not exist" containerID="dc2720df3fa94f39b6208a510958d32a68d1fe1a2c7de705b28cce13bbfac66c"
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.699622 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc2720df3fa94f39b6208a510958d32a68d1fe1a2c7de705b28cce13bbfac66c"} err="failed to get container status \"dc2720df3fa94f39b6208a510958d32a68d1fe1a2c7de705b28cce13bbfac66c\": rpc error: code = NotFound desc = could not find container \"dc2720df3fa94f39b6208a510958d32a68d1fe1a2c7de705b28cce13bbfac66c\": container with ID starting with dc2720df3fa94f39b6208a510958d32a68d1fe1a2c7de705b28cce13bbfac66c not found: ID does not exist"
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.699641 5039 scope.go:117] "RemoveContainer" containerID="12f42853e550e82839e38760bfb6ad35f880aa90125efe3fcabf6d6b83cdc399"
Jan 30 13:28:20 crc kubenswrapper[5039]: E0130 13:28:20.699946 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12f42853e550e82839e38760bfb6ad35f880aa90125efe3fcabf6d6b83cdc399\": container with ID starting with 12f42853e550e82839e38760bfb6ad35f880aa90125efe3fcabf6d6b83cdc399 not found: ID does not exist" containerID="12f42853e550e82839e38760bfb6ad35f880aa90125efe3fcabf6d6b83cdc399"
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.699980 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12f42853e550e82839e38760bfb6ad35f880aa90125efe3fcabf6d6b83cdc399"} err="failed to get container status \"12f42853e550e82839e38760bfb6ad35f880aa90125efe3fcabf6d6b83cdc399\": rpc error: code = NotFound desc = could not find container \"12f42853e550e82839e38760bfb6ad35f880aa90125efe3fcabf6d6b83cdc399\": container with ID starting with 12f42853e550e82839e38760bfb6ad35f880aa90125efe3fcabf6d6b83cdc399 not found: ID does not exist"
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.700004 5039 scope.go:117] "RemoveContainer" containerID="dc2720df3fa94f39b6208a510958d32a68d1fe1a2c7de705b28cce13bbfac66c"
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.700266 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc2720df3fa94f39b6208a510958d32a68d1fe1a2c7de705b28cce13bbfac66c"} err="failed to get container status \"dc2720df3fa94f39b6208a510958d32a68d1fe1a2c7de705b28cce13bbfac66c\": rpc error: code = NotFound desc = could not find container \"dc2720df3fa94f39b6208a510958d32a68d1fe1a2c7de705b28cce13bbfac66c\": container with ID starting with dc2720df3fa94f39b6208a510958d32a68d1fe1a2c7de705b28cce13bbfac66c not found: ID does not exist"
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.700280 5039 scope.go:117] "RemoveContainer" containerID="12f42853e550e82839e38760bfb6ad35f880aa90125efe3fcabf6d6b83cdc399"
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.700553 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12f42853e550e82839e38760bfb6ad35f880aa90125efe3fcabf6d6b83cdc399"} err="failed to get container status \"12f42853e550e82839e38760bfb6ad35f880aa90125efe3fcabf6d6b83cdc399\": rpc error: code = NotFound desc = could not find container \"12f42853e550e82839e38760bfb6ad35f880aa90125efe3fcabf6d6b83cdc399\": container with ID starting with 12f42853e550e82839e38760bfb6ad35f880aa90125efe3fcabf6d6b83cdc399 not found: ID does not exist"
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.705326 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.708915 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31674257-f143-40ab-97b9-dbf3153277c3-server-conf" (OuterVolumeSpecName: "server-conf") pod "31674257-f143-40ab-97b9-dbf3153277c3" (UID: "31674257-f143-40ab-97b9-dbf3153277c3"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.726328 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "60ae3d16-d381-4891-901f-e2d07d3a7720" (UID: "60ae3d16-d381-4891-901f-e2d07d3a7720"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.745197 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/106954f5-3ea7-4564-8479-407ef02320b7-erlang-cookie-secret\") pod \"106954f5-3ea7-4564-8479-407ef02320b7\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") "
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.745240 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/106954f5-3ea7-4564-8479-407ef02320b7-rabbitmq-confd\") pod \"106954f5-3ea7-4564-8479-407ef02320b7\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") "
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.745308 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/106954f5-3ea7-4564-8479-407ef02320b7-rabbitmq-erlang-cookie\") pod \"106954f5-3ea7-4564-8479-407ef02320b7\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") "
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.745356 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/106954f5-3ea7-4564-8479-407ef02320b7-pod-info\") pod \"106954f5-3ea7-4564-8479-407ef02320b7\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") "
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.745385 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/106954f5-3ea7-4564-8479-407ef02320b7-config-data\") pod \"106954f5-3ea7-4564-8479-407ef02320b7\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") "
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.745424 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/106954f5-3ea7-4564-8479-407ef02320b7-plugins-conf\") pod \"106954f5-3ea7-4564-8479-407ef02320b7\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") "
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.745421 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "60ae3d16-d381-4891-901f-e2d07d3a7720" (UID: "60ae3d16-d381-4891-901f-e2d07d3a7720"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.745446 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/106954f5-3ea7-4564-8479-407ef02320b7-server-conf\") pod \"106954f5-3ea7-4564-8479-407ef02320b7\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") "
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.745467 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"106954f5-3ea7-4564-8479-407ef02320b7\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") "
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.745514 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29m46\" (UniqueName: \"kubernetes.io/projected/106954f5-3ea7-4564-8479-407ef02320b7-kube-api-access-29m46\") pod \"106954f5-3ea7-4564-8479-407ef02320b7\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") "
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.745536 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/31674257-f143-40ab-97b9-dbf3153277c3-erlang-cookie-secret\") pod \"31674257-f143-40ab-97b9-dbf3153277c3\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") "
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.745560 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trv8j\" (UniqueName: \"kubernetes.io/projected/60ae3d16-d381-4891-901f-e2d07d3a7720-kube-api-access-trv8j\") pod \"60ae3d16-d381-4891-901f-e2d07d3a7720\" (UID: \"60ae3d16-d381-4891-901f-e2d07d3a7720\") "
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.745593 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/106954f5-3ea7-4564-8479-407ef02320b7-rabbitmq-tls\") pod \"106954f5-3ea7-4564-8479-407ef02320b7\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") "
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.745614 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-public-tls-certs\") pod \"60ae3d16-d381-4891-901f-e2d07d3a7720\" (UID: \"60ae3d16-d381-4891-901f-e2d07d3a7720\") "
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.745657 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/106954f5-3ea7-4564-8479-407ef02320b7-rabbitmq-plugins\") pod \"106954f5-3ea7-4564-8479-407ef02320b7\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") "
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.745765 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/106954f5-3ea7-4564-8479-407ef02320b7-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "106954f5-3ea7-4564-8479-407ef02320b7" (UID: "106954f5-3ea7-4564-8479-407ef02320b7"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.746063 5039 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/31674257-f143-40ab-97b9-dbf3153277c3-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.746103 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.746094 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/106954f5-3ea7-4564-8479-407ef02320b7-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "106954f5-3ea7-4564-8479-407ef02320b7" (UID: "106954f5-3ea7-4564-8479-407ef02320b7"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.746114 5039 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/31674257-f143-40ab-97b9-dbf3153277c3-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.746140 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/106954f5-3ea7-4564-8479-407ef02320b7-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "106954f5-3ea7-4564-8479-407ef02320b7" (UID: "106954f5-3ea7-4564-8479-407ef02320b7"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.746163 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.746176 5039 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/31674257-f143-40ab-97b9-dbf3153277c3-pod-info\") on node \"crc\" DevicePath \"\""
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.746187 5039 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/106954f5-3ea7-4564-8479-407ef02320b7-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.746199 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.746207 5039 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.746216 5039 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-fernet-keys\") on node \"crc\" DevicePath \"\""
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.746224 5039 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/31674257-f143-40ab-97b9-dbf3153277c3-server-conf\") on node \"crc\" DevicePath \"\""
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.746233 5039 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/31674257-f143-40ab-97b9-dbf3153277c3-plugins-conf\") on node \"crc\" DevicePath \"\""
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.746244 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pg6zc\" (UniqueName: \"kubernetes.io/projected/31674257-f143-40ab-97b9-dbf3153277c3-kube-api-access-pg6zc\") on node \"crc\" DevicePath \"\""
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.746271 5039 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" "
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.746284 5039 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/31674257-f143-40ab-97b9-dbf3153277c3-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.746293 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/31674257-f143-40ab-97b9-dbf3153277c3-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.746301 5039 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-credential-keys\") on node \"crc\" DevicePath \"\""
Jan 30 13:28:20 crc
kubenswrapper[5039]: I0130 13:28:20.748617 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/106954f5-3ea7-4564-8479-407ef02320b7-pod-info" (OuterVolumeSpecName: "pod-info") pod "106954f5-3ea7-4564-8479-407ef02320b7" (UID: "106954f5-3ea7-4564-8479-407ef02320b7"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: W0130 13:28:20.748703 5039 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/60ae3d16-d381-4891-901f-e2d07d3a7720/volumes/kubernetes.io~secret/public-tls-certs Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.748713 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "60ae3d16-d381-4891-901f-e2d07d3a7720" (UID: "60ae3d16-d381-4891-901f-e2d07d3a7720"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.751338 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31674257-f143-40ab-97b9-dbf3153277c3-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "31674257-f143-40ab-97b9-dbf3153277c3" (UID: "31674257-f143-40ab-97b9-dbf3153277c3"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.753138 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60ae3d16-d381-4891-901f-e2d07d3a7720-kube-api-access-trv8j" (OuterVolumeSpecName: "kube-api-access-trv8j") pod "60ae3d16-d381-4891-901f-e2d07d3a7720" (UID: "60ae3d16-d381-4891-901f-e2d07d3a7720"). InnerVolumeSpecName "kube-api-access-trv8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.753169 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/106954f5-3ea7-4564-8479-407ef02320b7-kube-api-access-29m46" (OuterVolumeSpecName: "kube-api-access-29m46") pod "106954f5-3ea7-4564-8479-407ef02320b7" (UID: "106954f5-3ea7-4564-8479-407ef02320b7"). InnerVolumeSpecName "kube-api-access-29m46". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.754502 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/106954f5-3ea7-4564-8479-407ef02320b7-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "106954f5-3ea7-4564-8479-407ef02320b7" (UID: "106954f5-3ea7-4564-8479-407ef02320b7"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.756745 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "persistence") pod "106954f5-3ea7-4564-8479-407ef02320b7" (UID: "106954f5-3ea7-4564-8479-407ef02320b7"). InnerVolumeSpecName "local-storage06-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.756785 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/106954f5-3ea7-4564-8479-407ef02320b7-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "106954f5-3ea7-4564-8479-407ef02320b7" (UID: "106954f5-3ea7-4564-8479-407ef02320b7"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.772403 5039 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.773645 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/106954f5-3ea7-4564-8479-407ef02320b7-config-data" (OuterVolumeSpecName: "config-data") pod "106954f5-3ea7-4564-8479-407ef02320b7" (UID: "106954f5-3ea7-4564-8479-407ef02320b7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.799692 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31674257-f143-40ab-97b9-dbf3153277c3-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "31674257-f143-40ab-97b9-dbf3153277c3" (UID: "31674257-f143-40ab-97b9-dbf3153277c3"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.817127 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/106954f5-3ea7-4564-8479-407ef02320b7-server-conf" (OuterVolumeSpecName: "server-conf") pod "106954f5-3ea7-4564-8479-407ef02320b7" (UID: "106954f5-3ea7-4564-8479-407ef02320b7"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.825056 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_1c7913a5-4818-4edd-a390-61d79c64a30b/ovn-northd/0.log" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.825121 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.847303 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c7913a5-4818-4edd-a390-61d79c64a30b-combined-ca-bundle\") pod \"1c7913a5-4818-4edd-a390-61d79c64a30b\" (UID: \"1c7913a5-4818-4edd-a390-61d79c64a30b\") " Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.847367 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzw7n\" (UniqueName: \"kubernetes.io/projected/1c7913a5-4818-4edd-a390-61d79c64a30b-kube-api-access-hzw7n\") pod \"1c7913a5-4818-4edd-a390-61d79c64a30b\" (UID: \"1c7913a5-4818-4edd-a390-61d79c64a30b\") " Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.847432 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1c7913a5-4818-4edd-a390-61d79c64a30b-ovn-rundir\") pod \"1c7913a5-4818-4edd-a390-61d79c64a30b\" (UID: \"1c7913a5-4818-4edd-a390-61d79c64a30b\") " Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.847462 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1c7913a5-4818-4edd-a390-61d79c64a30b-scripts\") pod \"1c7913a5-4818-4edd-a390-61d79c64a30b\" (UID: \"1c7913a5-4818-4edd-a390-61d79c64a30b\") " Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.847485 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c7913a5-4818-4edd-a390-61d79c64a30b-config\") pod \"1c7913a5-4818-4edd-a390-61d79c64a30b\" (UID: \"1c7913a5-4818-4edd-a390-61d79c64a30b\") " Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.847499 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c7913a5-4818-4edd-a390-61d79c64a30b-ovn-northd-tls-certs\") pod \"1c7913a5-4818-4edd-a390-61d79c64a30b\" (UID: \"1c7913a5-4818-4edd-a390-61d79c64a30b\") " Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.847544 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c7913a5-4818-4edd-a390-61d79c64a30b-metrics-certs-tls-certs\") pod \"1c7913a5-4818-4edd-a390-61d79c64a30b\" (UID: \"1c7913a5-4818-4edd-a390-61d79c64a30b\") " Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.847950 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c7913a5-4818-4edd-a390-61d79c64a30b-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "1c7913a5-4818-4edd-a390-61d79c64a30b" (UID: "1c7913a5-4818-4edd-a390-61d79c64a30b"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.848375 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c7913a5-4818-4edd-a390-61d79c64a30b-config" (OuterVolumeSpecName: "config") pod "1c7913a5-4818-4edd-a390-61d79c64a30b" (UID: "1c7913a5-4818-4edd-a390-61d79c64a30b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.848983 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c7913a5-4818-4edd-a390-61d79c64a30b-scripts" (OuterVolumeSpecName: "scripts") pod "1c7913a5-4818-4edd-a390-61d79c64a30b" (UID: "1c7913a5-4818-4edd-a390-61d79c64a30b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.850164 5039 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.850196 5039 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.850398 5039 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/106954f5-3ea7-4564-8479-407ef02320b7-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.850412 5039 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/106954f5-3ea7-4564-8479-407ef02320b7-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.850422 5039 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/106954f5-3ea7-4564-8479-407ef02320b7-pod-info\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.850432 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/106954f5-3ea7-4564-8479-407ef02320b7-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.850442 5039 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/106954f5-3ea7-4564-8479-407ef02320b7-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.850450 5039 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/106954f5-3ea7-4564-8479-407ef02320b7-server-conf\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.850458 5039 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1c7913a5-4818-4edd-a390-61d79c64a30b-ovn-rundir\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.850509 5039 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.850520 5039 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/31674257-f143-40ab-97b9-dbf3153277c3-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.850570 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29m46\" (UniqueName: 
\"kubernetes.io/projected/106954f5-3ea7-4564-8479-407ef02320b7-kube-api-access-29m46\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.850581 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1c7913a5-4818-4edd-a390-61d79c64a30b-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.850590 5039 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/31674257-f143-40ab-97b9-dbf3153277c3-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.850599 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c7913a5-4818-4edd-a390-61d79c64a30b-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.850659 5039 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/106954f5-3ea7-4564-8479-407ef02320b7-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.850670 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trv8j\" (UniqueName: \"kubernetes.io/projected/60ae3d16-d381-4891-901f-e2d07d3a7720-kube-api-access-trv8j\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.851735 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c7913a5-4818-4edd-a390-61d79c64a30b-kube-api-access-hzw7n" (OuterVolumeSpecName: "kube-api-access-hzw7n") pod "1c7913a5-4818-4edd-a390-61d79c64a30b" (UID: "1c7913a5-4818-4edd-a390-61d79c64a30b"). InnerVolumeSpecName "kube-api-access-hzw7n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.865380 5039 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.870327 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/106954f5-3ea7-4564-8479-407ef02320b7-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "106954f5-3ea7-4564-8479-407ef02320b7" (UID: "106954f5-3ea7-4564-8479-407ef02320b7"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.894170 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c7913a5-4818-4edd-a390-61d79c64a30b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1c7913a5-4818-4edd-a390-61d79c64a30b" (UID: "1c7913a5-4818-4edd-a390-61d79c64a30b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.926190 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c7913a5-4818-4edd-a390-61d79c64a30b-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "1c7913a5-4818-4edd-a390-61d79c64a30b" (UID: "1c7913a5-4818-4edd-a390-61d79c64a30b"). InnerVolumeSpecName "ovn-northd-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.946622 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c7913a5-4818-4edd-a390-61d79c64a30b-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "1c7913a5-4818-4edd-a390-61d79c64a30b" (UID: "1c7913a5-4818-4edd-a390-61d79c64a30b"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.951482 5039 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.951504 5039 reconciler_common.go:293] "Volume detached for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c7913a5-4818-4edd-a390-61d79c64a30b-ovn-northd-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.951515 5039 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c7913a5-4818-4edd-a390-61d79c64a30b-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.951525 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c7913a5-4818-4edd-a390-61d79c64a30b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.951533 5039 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/106954f5-3ea7-4564-8479-407ef02320b7-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.951541 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzw7n\" (UniqueName: \"kubernetes.io/projected/1c7913a5-4818-4edd-a390-61d79c64a30b-kube-api-access-hzw7n\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.059345 5039 generic.go:334] "Generic (PLEG): container finished" podID="31674257-f143-40ab-97b9-dbf3153277c3" containerID="7ba97c527dbddf7d5202ce4c016a3cf300e728cbada3ead1b220b90f12e25e20" exitCode=0 Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.059398 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"31674257-f143-40ab-97b9-dbf3153277c3","Type":"ContainerDied","Data":"7ba97c527dbddf7d5202ce4c016a3cf300e728cbada3ead1b220b90f12e25e20"} Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.059422 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"31674257-f143-40ab-97b9-dbf3153277c3","Type":"ContainerDied","Data":"0455cb70a68fa31fb520f1784b3fb65cb703702fa90929d1c8b1ccfdae2a0976"} Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.059436 5039 scope.go:117] "RemoveContainer" containerID="7ba97c527dbddf7d5202ce4c016a3cf300e728cbada3ead1b220b90f12e25e20" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.059555 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.067793 5039 generic.go:334] "Generic (PLEG): container finished" podID="106954f5-3ea7-4564-8479-407ef02320b7" containerID="3c664e34c87d051b563e4d60927ac501a68af1e68c68fe93a675ec95cbd4729a" exitCode=0 Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.067941 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.067958 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"106954f5-3ea7-4564-8479-407ef02320b7","Type":"ContainerDied","Data":"3c664e34c87d051b563e4d60927ac501a68af1e68c68fe93a675ec95cbd4729a"} Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.068373 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"106954f5-3ea7-4564-8479-407ef02320b7","Type":"ContainerDied","Data":"20e38f91b95ff4f185e07d12d627c36dd1c6ecc82a40927b2c84c3195312ed0d"} Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.070421 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7467d89c49-kfwss" event={"ID":"60ae3d16-d381-4891-901f-e2d07d3a7720","Type":"ContainerDied","Data":"fbb9b4d20d7fedd47219ba82f139766c4800073b7004f8e8dc84cc9fb539e651"} Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.070527 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7467d89c49-kfwss" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.088182 5039 scope.go:117] "RemoveContainer" containerID="06f152352a68b2f2dd66ebb738ddc6ff20d454b66024c4bcad8df7bb81ecc8e6" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.091054 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_1c7913a5-4818-4edd-a390-61d79c64a30b/ovn-northd/0.log" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.091243 5039 generic.go:334] "Generic (PLEG): container finished" podID="1c7913a5-4818-4edd-a390-61d79c64a30b" containerID="2c579add236caed3aa75293bd0e40f1d3f1911a4d976e4d9781070a770b956ca" exitCode=139 Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.091343 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1c7913a5-4818-4edd-a390-61d79c64a30b","Type":"ContainerDied","Data":"2c579add236caed3aa75293bd0e40f1d3f1911a4d976e4d9781070a770b956ca"} Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.091448 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1c7913a5-4818-4edd-a390-61d79c64a30b","Type":"ContainerDied","Data":"6eb99b8efc985784fe2897360ff7becef50a7e77036fc7511f352a6d9ddaf281"} Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.091578 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.107577 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-7467d89c49-kfwss"] Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.125197 5039 scope.go:117] "RemoveContainer" containerID="7ba97c527dbddf7d5202ce4c016a3cf300e728cbada3ead1b220b90f12e25e20" Jan 30 13:28:21 crc kubenswrapper[5039]: E0130 13:28:21.128314 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ba97c527dbddf7d5202ce4c016a3cf300e728cbada3ead1b220b90f12e25e20\": container with ID starting with 7ba97c527dbddf7d5202ce4c016a3cf300e728cbada3ead1b220b90f12e25e20 not found: ID does not exist" containerID="7ba97c527dbddf7d5202ce4c016a3cf300e728cbada3ead1b220b90f12e25e20" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.128384 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ba97c527dbddf7d5202ce4c016a3cf300e728cbada3ead1b220b90f12e25e20"} err="failed to get container status \"7ba97c527dbddf7d5202ce4c016a3cf300e728cbada3ead1b220b90f12e25e20\": rpc error: code = NotFound desc = could not find container \"7ba97c527dbddf7d5202ce4c016a3cf300e728cbada3ead1b220b90f12e25e20\": container with ID starting with 7ba97c527dbddf7d5202ce4c016a3cf300e728cbada3ead1b220b90f12e25e20 not found: ID does not exist" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.128416 5039 scope.go:117] "RemoveContainer" containerID="06f152352a68b2f2dd66ebb738ddc6ff20d454b66024c4bcad8df7bb81ecc8e6" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.128598 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-7467d89c49-kfwss"] Jan 30 13:28:21 crc kubenswrapper[5039]: E0130 13:28:21.129348 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06f152352a68b2f2dd66ebb738ddc6ff20d454b66024c4bcad8df7bb81ecc8e6\": container with ID starting with 06f152352a68b2f2dd66ebb738ddc6ff20d454b66024c4bcad8df7bb81ecc8e6 not found: ID does not exist" containerID="06f152352a68b2f2dd66ebb738ddc6ff20d454b66024c4bcad8df7bb81ecc8e6" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.129410 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06f152352a68b2f2dd66ebb738ddc6ff20d454b66024c4bcad8df7bb81ecc8e6"} err="failed to get container status \"06f152352a68b2f2dd66ebb738ddc6ff20d454b66024c4bcad8df7bb81ecc8e6\": rpc error: code = NotFound desc = could not find container \"06f152352a68b2f2dd66ebb738ddc6ff20d454b66024c4bcad8df7bb81ecc8e6\": container with ID starting with 06f152352a68b2f2dd66ebb738ddc6ff20d454b66024c4bcad8df7bb81ecc8e6 not found: ID does not exist" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.129430 5039 scope.go:117] "RemoveContainer" containerID="3c664e34c87d051b563e4d60927ac501a68af1e68c68fe93a675ec95cbd4729a" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.139114 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.150899 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.204681 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 
13:28:21.210099 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 13:28:21 crc kubenswrapper[5039]: E0130 13:28:21.214137 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="664d5ee50096a705bfe00ba284ecf23de58063a3e74a3c5f1b12d176c74177c9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 13:28:21 crc kubenswrapper[5039]: E0130 13:28:21.214153 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8 is running failed: container process not found" containerID="1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.224551 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Jan 30 13:28:21 crc kubenswrapper[5039]: E0130 13:28:21.225134 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8 is running failed: container process not found" containerID="1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.227084 5039 scope.go:117] "RemoveContainer" containerID="d30261a228b7365f47808b71367e6d8ea8e412a39a4b2b4142bda6fbef770058" Jan 30 13:28:21 crc kubenswrapper[5039]: E0130 13:28:21.227249 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8 is running failed: container process not found" containerID="1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 13:28:21 crc kubenswrapper[5039]: E0130 13:28:21.227290 5039 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-z6nkm" podUID="953eeac5-b943-4036-be33-58eb347c04ef" containerName="ovsdb-server" Jan 30 13:28:21 crc kubenswrapper[5039]: E0130 13:28:21.229178 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="664d5ee50096a705bfe00ba284ecf23de58063a3e74a3c5f1b12d176c74177c9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 13:28:21 crc kubenswrapper[5039]: E0130 13:28:21.231210 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="664d5ee50096a705bfe00ba284ecf23de58063a3e74a3c5f1b12d176c74177c9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 13:28:21 crc 
kubenswrapper[5039]: E0130 13:28:21.231261 5039 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-z6nkm" podUID="953eeac5-b943-4036-be33-58eb347c04ef" containerName="ovs-vswitchd" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.236083 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-northd-0"] Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.259867 5039 scope.go:117] "RemoveContainer" containerID="3c664e34c87d051b563e4d60927ac501a68af1e68c68fe93a675ec95cbd4729a" Jan 30 13:28:21 crc kubenswrapper[5039]: E0130 13:28:21.260686 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c664e34c87d051b563e4d60927ac501a68af1e68c68fe93a675ec95cbd4729a\": container with ID starting with 3c664e34c87d051b563e4d60927ac501a68af1e68c68fe93a675ec95cbd4729a not found: ID does not exist" containerID="3c664e34c87d051b563e4d60927ac501a68af1e68c68fe93a675ec95cbd4729a" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.260712 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c664e34c87d051b563e4d60927ac501a68af1e68c68fe93a675ec95cbd4729a"} err="failed to get container status \"3c664e34c87d051b563e4d60927ac501a68af1e68c68fe93a675ec95cbd4729a\": rpc error: code = NotFound desc = could not find container \"3c664e34c87d051b563e4d60927ac501a68af1e68c68fe93a675ec95cbd4729a\": container with ID starting with 3c664e34c87d051b563e4d60927ac501a68af1e68c68fe93a675ec95cbd4729a not found: ID does not exist" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.260733 5039 scope.go:117] "RemoveContainer" containerID="d30261a228b7365f47808b71367e6d8ea8e412a39a4b2b4142bda6fbef770058" Jan 30 13:28:21 crc kubenswrapper[5039]: E0130 13:28:21.262081 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d30261a228b7365f47808b71367e6d8ea8e412a39a4b2b4142bda6fbef770058\": container with ID starting with d30261a228b7365f47808b71367e6d8ea8e412a39a4b2b4142bda6fbef770058 not found: ID does not exist" containerID="d30261a228b7365f47808b71367e6d8ea8e412a39a4b2b4142bda6fbef770058" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.262115 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d30261a228b7365f47808b71367e6d8ea8e412a39a4b2b4142bda6fbef770058"} err="failed to get container status \"d30261a228b7365f47808b71367e6d8ea8e412a39a4b2b4142bda6fbef770058\": rpc error: code = NotFound desc = could not find container \"d30261a228b7365f47808b71367e6d8ea8e412a39a4b2b4142bda6fbef770058\": container with ID starting with d30261a228b7365f47808b71367e6d8ea8e412a39a4b2b4142bda6fbef770058 not found: ID does not exist" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.262129 5039 scope.go:117] "RemoveContainer" containerID="fee4947e039be1852ec1750b666abb15bd505a2ddedb60f212da5d331a111150" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.266262 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-sqvrc" podUID="d4aa0600-fb12-4641-96a3-26cb56853bd3" containerName="ovn-controller" probeResult="failure" output=< Jan 30 13:28:21 crc kubenswrapper[5039]: ERROR - Failed to get connection status from ovn-controller, ovn-appctl exit status: 0 Jan 
30 13:28:21 crc kubenswrapper[5039]: > Jan 30 13:28:21 crc kubenswrapper[5039]: E0130 13:28:21.273875 5039 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Jan 30 13:28:21 crc kubenswrapper[5039]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2026-01-30T13:28:14Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Jan 30 13:28:21 crc kubenswrapper[5039]: /etc/init.d/functions: line 589: 407 Alarm clock "$@" Jan 30 13:28:21 crc kubenswrapper[5039]: > execCommand=["/usr/share/ovn/scripts/ovn-ctl","stop_controller"] containerName="ovn-controller" pod="openstack/ovn-controller-sqvrc" message=< Jan 30 13:28:21 crc kubenswrapper[5039]: Exiting ovn-controller (1) [FAILED] Jan 30 13:28:21 crc kubenswrapper[5039]: Killing ovn-controller (1) [ OK ] Jan 30 13:28:21 crc kubenswrapper[5039]: Killing ovn-controller (1) with SIGKILL [ OK ] Jan 30 13:28:21 crc kubenswrapper[5039]: 2026-01-30T13:28:14Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Jan 30 13:28:21 crc kubenswrapper[5039]: /etc/init.d/functions: line 589: 407 Alarm clock "$@" Jan 30 13:28:21 crc kubenswrapper[5039]: > Jan 30 13:28:21 crc kubenswrapper[5039]: E0130 13:28:21.274178 5039 kuberuntime_container.go:691] "PreStop hook failed" err=< Jan 30 13:28:21 crc kubenswrapper[5039]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2026-01-30T13:28:14Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Jan 30 13:28:21 crc kubenswrapper[5039]: /etc/init.d/functions: line 589: 407 Alarm clock "$@" Jan 30 13:28:21 crc kubenswrapper[5039]: > pod="openstack/ovn-controller-sqvrc" podUID="d4aa0600-fb12-4641-96a3-26cb56853bd3" containerName="ovn-controller" containerID="cri-o://75b2b074c5e43fbf32830c5d4cc675c1c399f9e561bf52836c26d438f8856dc1" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.274328 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-sqvrc" podUID="d4aa0600-fb12-4641-96a3-26cb56853bd3" containerName="ovn-controller" containerID="cri-o://75b2b074c5e43fbf32830c5d4cc675c1c399f9e561bf52836c26d438f8856dc1" gracePeriod=22 Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.626298 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-7df987bf59-vgqrf" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.629022 5039 scope.go:117] "RemoveContainer" containerID="10852e51d9199bf290d28ef284e425f741ad8888a4c93170c5de8cb6b7587e31" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.669126 5039 scope.go:117] "RemoveContainer" containerID="2c579add236caed3aa75293bd0e40f1d3f1911a4d976e4d9781070a770b956ca" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.762883 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48be0b7f-4cb1-4c00-851a-7078ed9ccab0-config-data\") pod \"48be0b7f-4cb1-4c00-851a-7078ed9ccab0\" (UID: \"48be0b7f-4cb1-4c00-851a-7078ed9ccab0\") " Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.763418 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/48be0b7f-4cb1-4c00-851a-7078ed9ccab0-config-data-custom\") pod \"48be0b7f-4cb1-4c00-851a-7078ed9ccab0\" (UID: \"48be0b7f-4cb1-4c00-851a-7078ed9ccab0\") " Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.763479 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42b5x\" (UniqueName: \"kubernetes.io/projected/48be0b7f-4cb1-4c00-851a-7078ed9ccab0-kube-api-access-42b5x\") pod \"48be0b7f-4cb1-4c00-851a-7078ed9ccab0\" (UID: \"48be0b7f-4cb1-4c00-851a-7078ed9ccab0\") " Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.763514 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48be0b7f-4cb1-4c00-851a-7078ed9ccab0-combined-ca-bundle\") pod \"48be0b7f-4cb1-4c00-851a-7078ed9ccab0\" (UID: \"48be0b7f-4cb1-4c00-851a-7078ed9ccab0\") " Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.763538 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48be0b7f-4cb1-4c00-851a-7078ed9ccab0-logs\") pod \"48be0b7f-4cb1-4c00-851a-7078ed9ccab0\" (UID: \"48be0b7f-4cb1-4c00-851a-7078ed9ccab0\") " Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.767816 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48be0b7f-4cb1-4c00-851a-7078ed9ccab0-logs" (OuterVolumeSpecName: "logs") pod "48be0b7f-4cb1-4c00-851a-7078ed9ccab0" (UID: "48be0b7f-4cb1-4c00-851a-7078ed9ccab0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.769332 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48be0b7f-4cb1-4c00-851a-7078ed9ccab0-kube-api-access-42b5x" (OuterVolumeSpecName: "kube-api-access-42b5x") pod "48be0b7f-4cb1-4c00-851a-7078ed9ccab0" (UID: "48be0b7f-4cb1-4c00-851a-7078ed9ccab0"). InnerVolumeSpecName "kube-api-access-42b5x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.771189 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-sqvrc_d4aa0600-fb12-4641-96a3-26cb56853bd3/ovn-controller/0.log" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.771260 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-sqvrc" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.771375 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48be0b7f-4cb1-4c00-851a-7078ed9ccab0-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "48be0b7f-4cb1-4c00-851a-7078ed9ccab0" (UID: "48be0b7f-4cb1-4c00-851a-7078ed9ccab0"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.774742 5039 scope.go:117] "RemoveContainer" containerID="10852e51d9199bf290d28ef284e425f741ad8888a4c93170c5de8cb6b7587e31" Jan 30 13:28:21 crc kubenswrapper[5039]: E0130 13:28:21.778397 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10852e51d9199bf290d28ef284e425f741ad8888a4c93170c5de8cb6b7587e31\": container with ID starting with 10852e51d9199bf290d28ef284e425f741ad8888a4c93170c5de8cb6b7587e31 not found: ID does not exist" containerID="10852e51d9199bf290d28ef284e425f741ad8888a4c93170c5de8cb6b7587e31" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.778430 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10852e51d9199bf290d28ef284e425f741ad8888a4c93170c5de8cb6b7587e31"} err="failed to get container status \"10852e51d9199bf290d28ef284e425f741ad8888a4c93170c5de8cb6b7587e31\": rpc error: code = NotFound desc = could not find container \"10852e51d9199bf290d28ef284e425f741ad8888a4c93170c5de8cb6b7587e31\": container with ID starting with 10852e51d9199bf290d28ef284e425f741ad8888a4c93170c5de8cb6b7587e31 not found: ID does not exist" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.778451 5039 scope.go:117] "RemoveContainer" containerID="2c579add236caed3aa75293bd0e40f1d3f1911a4d976e4d9781070a770b956ca" Jan 30 13:28:21 crc kubenswrapper[5039]: E0130 13:28:21.780764 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c579add236caed3aa75293bd0e40f1d3f1911a4d976e4d9781070a770b956ca\": container with ID starting with 2c579add236caed3aa75293bd0e40f1d3f1911a4d976e4d9781070a770b956ca not found: ID does not exist" containerID="2c579add236caed3aa75293bd0e40f1d3f1911a4d976e4d9781070a770b956ca" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.780809 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c579add236caed3aa75293bd0e40f1d3f1911a4d976e4d9781070a770b956ca"} err="failed to get container status \"2c579add236caed3aa75293bd0e40f1d3f1911a4d976e4d9781070a770b956ca\": rpc error: code = NotFound desc = could not find container \"2c579add236caed3aa75293bd0e40f1d3f1911a4d976e4d9781070a770b956ca\": container with ID starting with 2c579add236caed3aa75293bd0e40f1d3f1911a4d976e4d9781070a770b956ca not found: ID does not exist" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.806221 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48be0b7f-4cb1-4c00-851a-7078ed9ccab0-config-data" (OuterVolumeSpecName: "config-data") pod "48be0b7f-4cb1-4c00-851a-7078ed9ccab0" (UID: "48be0b7f-4cb1-4c00-851a-7078ed9ccab0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.820102 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48be0b7f-4cb1-4c00-851a-7078ed9ccab0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "48be0b7f-4cb1-4c00-851a-7078ed9ccab0" (UID: "48be0b7f-4cb1-4c00-851a-7078ed9ccab0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.842679 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.869458 5039 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/48be0b7f-4cb1-4c00-851a-7078ed9ccab0-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.869499 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42b5x\" (UniqueName: \"kubernetes.io/projected/48be0b7f-4cb1-4c00-851a-7078ed9ccab0-kube-api-access-42b5x\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.869513 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48be0b7f-4cb1-4c00-851a-7078ed9ccab0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.869523 5039 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48be0b7f-4cb1-4c00-851a-7078ed9ccab0-logs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.869535 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48be0b7f-4cb1-4c00-851a-7078ed9ccab0-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.958153 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.970740 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d4aa0600-fb12-4641-96a3-26cb56853bd3-var-run\") pod \"d4aa0600-fb12-4641-96a3-26cb56853bd3\" (UID: \"d4aa0600-fb12-4641-96a3-26cb56853bd3\") " Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.970819 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d4aa0600-fb12-4641-96a3-26cb56853bd3-var-log-ovn\") pod \"d4aa0600-fb12-4641-96a3-26cb56853bd3\" (UID: \"d4aa0600-fb12-4641-96a3-26cb56853bd3\") " Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.970849 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4aa0600-fb12-4641-96a3-26cb56853bd3-combined-ca-bundle\") pod \"d4aa0600-fb12-4641-96a3-26cb56853bd3\" (UID: \"d4aa0600-fb12-4641-96a3-26cb56853bd3\") " Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.970871 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lngcm\" (UniqueName: \"kubernetes.io/projected/266dbee0-3c74-4820-8165-1955c6ca832a-kube-api-access-lngcm\") pod \"266dbee0-3c74-4820-8165-1955c6ca832a\" (UID: \"266dbee0-3c74-4820-8165-1955c6ca832a\") " Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.970836 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4aa0600-fb12-4641-96a3-26cb56853bd3-var-run" (OuterVolumeSpecName: "var-run") pod "d4aa0600-fb12-4641-96a3-26cb56853bd3" (UID: "d4aa0600-fb12-4641-96a3-26cb56853bd3"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.970868 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4aa0600-fb12-4641-96a3-26cb56853bd3-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "d4aa0600-fb12-4641-96a3-26cb56853bd3" (UID: "d4aa0600-fb12-4641-96a3-26cb56853bd3"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.970914 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/266dbee0-3c74-4820-8165-1955c6ca832a-config-data\") pod \"266dbee0-3c74-4820-8165-1955c6ca832a\" (UID: \"266dbee0-3c74-4820-8165-1955c6ca832a\") " Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.970975 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rv9n\" (UniqueName: \"kubernetes.io/projected/d4aa0600-fb12-4641-96a3-26cb56853bd3-kube-api-access-9rv9n\") pod \"d4aa0600-fb12-4641-96a3-26cb56853bd3\" (UID: \"d4aa0600-fb12-4641-96a3-26cb56853bd3\") " Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.971093 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d4aa0600-fb12-4641-96a3-26cb56853bd3-scripts\") pod \"d4aa0600-fb12-4641-96a3-26cb56853bd3\" (UID: \"d4aa0600-fb12-4641-96a3-26cb56853bd3\") " Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.971122 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d4aa0600-fb12-4641-96a3-26cb56853bd3-var-run-ovn\") pod \"d4aa0600-fb12-4641-96a3-26cb56853bd3\" (UID: \"d4aa0600-fb12-4641-96a3-26cb56853bd3\") " Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.971144 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/266dbee0-3c74-4820-8165-1955c6ca832a-combined-ca-bundle\") pod \"266dbee0-3c74-4820-8165-1955c6ca832a\" (UID: \"266dbee0-3c74-4820-8165-1955c6ca832a\") " Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.971181 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4aa0600-fb12-4641-96a3-26cb56853bd3-ovn-controller-tls-certs\") pod \"d4aa0600-fb12-4641-96a3-26cb56853bd3\" (UID: \"d4aa0600-fb12-4641-96a3-26cb56853bd3\") " Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.971367 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4aa0600-fb12-4641-96a3-26cb56853bd3-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "d4aa0600-fb12-4641-96a3-26cb56853bd3" (UID: "d4aa0600-fb12-4641-96a3-26cb56853bd3"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.973500 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4aa0600-fb12-4641-96a3-26cb56853bd3-scripts" (OuterVolumeSpecName: "scripts") pod "d4aa0600-fb12-4641-96a3-26cb56853bd3" (UID: "d4aa0600-fb12-4641-96a3-26cb56853bd3"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.973729 5039 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d4aa0600-fb12-4641-96a3-26cb56853bd3-var-run\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.973746 5039 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d4aa0600-fb12-4641-96a3-26cb56853bd3-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.973757 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d4aa0600-fb12-4641-96a3-26cb56853bd3-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.973766 5039 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d4aa0600-fb12-4641-96a3-26cb56853bd3-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.986514 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/266dbee0-3c74-4820-8165-1955c6ca832a-kube-api-access-lngcm" (OuterVolumeSpecName: "kube-api-access-lngcm") pod "266dbee0-3c74-4820-8165-1955c6ca832a" (UID: "266dbee0-3c74-4820-8165-1955c6ca832a"). InnerVolumeSpecName "kube-api-access-lngcm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.986555 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4aa0600-fb12-4641-96a3-26cb56853bd3-kube-api-access-9rv9n" (OuterVolumeSpecName: "kube-api-access-9rv9n") pod "d4aa0600-fb12-4641-96a3-26cb56853bd3" (UID: "d4aa0600-fb12-4641-96a3-26cb56853bd3"). InnerVolumeSpecName "kube-api-access-9rv9n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.993157 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/266dbee0-3c74-4820-8165-1955c6ca832a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "266dbee0-3c74-4820-8165-1955c6ca832a" (UID: "266dbee0-3c74-4820-8165-1955c6ca832a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.993734 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4aa0600-fb12-4641-96a3-26cb56853bd3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d4aa0600-fb12-4641-96a3-26cb56853bd3" (UID: "d4aa0600-fb12-4641-96a3-26cb56853bd3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.004225 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/266dbee0-3c74-4820-8165-1955c6ca832a-config-data" (OuterVolumeSpecName: "config-data") pod "266dbee0-3c74-4820-8165-1955c6ca832a" (UID: "266dbee0-3c74-4820-8165-1955c6ca832a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.028668 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4aa0600-fb12-4641-96a3-26cb56853bd3-ovn-controller-tls-certs" (OuterVolumeSpecName: "ovn-controller-tls-certs") pod "d4aa0600-fb12-4641-96a3-26cb56853bd3" (UID: "d4aa0600-fb12-4641-96a3-26cb56853bd3"). InnerVolumeSpecName "ovn-controller-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.076139 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56kwr\" (UniqueName: \"kubernetes.io/projected/798d080c-2565-4410-9cda-220d1154b8de-kube-api-access-56kwr\") pod \"798d080c-2565-4410-9cda-220d1154b8de\" (UID: \"798d080c-2565-4410-9cda-220d1154b8de\") " Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.076202 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/798d080c-2565-4410-9cda-220d1154b8de-config-data\") pod \"798d080c-2565-4410-9cda-220d1154b8de\" (UID: \"798d080c-2565-4410-9cda-220d1154b8de\") " Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.076302 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/798d080c-2565-4410-9cda-220d1154b8de-combined-ca-bundle\") pod \"798d080c-2565-4410-9cda-220d1154b8de\" (UID: \"798d080c-2565-4410-9cda-220d1154b8de\") " Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.076694 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/266dbee0-3c74-4820-8165-1955c6ca832a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.077458 5039 reconciler_common.go:293] "Volume detached for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4aa0600-fb12-4641-96a3-26cb56853bd3-ovn-controller-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.077503 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4aa0600-fb12-4641-96a3-26cb56853bd3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.077530 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lngcm\" (UniqueName: \"kubernetes.io/projected/266dbee0-3c74-4820-8165-1955c6ca832a-kube-api-access-lngcm\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.077556 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/266dbee0-3c74-4820-8165-1955c6ca832a-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.077580 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rv9n\" (UniqueName: \"kubernetes.io/projected/d4aa0600-fb12-4641-96a3-26cb56853bd3-kube-api-access-9rv9n\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.080586 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/798d080c-2565-4410-9cda-220d1154b8de-kube-api-access-56kwr" (OuterVolumeSpecName: "kube-api-access-56kwr") pod 
"798d080c-2565-4410-9cda-220d1154b8de" (UID: "798d080c-2565-4410-9cda-220d1154b8de"). InnerVolumeSpecName "kube-api-access-56kwr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.100317 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/798d080c-2565-4410-9cda-220d1154b8de-config-data" (OuterVolumeSpecName: "config-data") pod "798d080c-2565-4410-9cda-220d1154b8de" (UID: "798d080c-2565-4410-9cda-220d1154b8de"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.105362 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="106954f5-3ea7-4564-8479-407ef02320b7" path="/var/lib/kubelet/pods/106954f5-3ea7-4564-8479-407ef02320b7/volumes" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.106102 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c7913a5-4818-4edd-a390-61d79c64a30b" path="/var/lib/kubelet/pods/1c7913a5-4818-4edd-a390-61d79c64a30b/volumes" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.107270 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2125aae4-cb1a-4329-ba0a-68cc3661427b" path="/var/lib/kubelet/pods/2125aae4-cb1a-4329-ba0a-68cc3661427b/volumes" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.107341 5039 generic.go:334] "Generic (PLEG): container finished" podID="798d080c-2565-4410-9cda-220d1154b8de" containerID="c83d874abcdd3095947980187589ffbe8240a795dbfa1c7950d492e49c52b14e" exitCode=0 Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.107546 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.108073 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31674257-f143-40ab-97b9-dbf3153277c3" path="/var/lib/kubelet/pods/31674257-f143-40ab-97b9-dbf3153277c3/volumes" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.108544 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33b02367-9855-4316-a76b-613d3b6f4946" path="/var/lib/kubelet/pods/33b02367-9855-4316-a76b-613d3b6f4946/volumes" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.108961 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3db29a95-0ed6-4366-8036-388eea4d00b6" path="/var/lib/kubelet/pods/3db29a95-0ed6-4366-8036-388eea4d00b6/volumes" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.110265 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f7023ce-3b22-4301-8535-b51dae5ffc85" path="/var/lib/kubelet/pods/4f7023ce-3b22-4301-8535-b51dae5ffc85/volumes" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.111194 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60ae3d16-d381-4891-901f-e2d07d3a7720" path="/var/lib/kubelet/pods/60ae3d16-d381-4891-901f-e2d07d3a7720/volumes" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.112038 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6a7de18-5bf6-4275-b6db-f19701d07001" path="/var/lib/kubelet/pods/f6a7de18-5bf6-4275-b6db-f19701d07001/volumes" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.114079 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-sqvrc_d4aa0600-fb12-4641-96a3-26cb56853bd3/ovn-controller/0.log" Jan 30 13:28:22 crc 
kubenswrapper[5039]: I0130 13:28:22.114175 5039 generic.go:334] "Generic (PLEG): container finished" podID="d4aa0600-fb12-4641-96a3-26cb56853bd3" containerID="75b2b074c5e43fbf32830c5d4cc675c1c399f9e561bf52836c26d438f8856dc1" exitCode=137 Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.114294 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc88f91b-e82d-4937-ad42-d94c3d464b55" path="/var/lib/kubelet/pods/fc88f91b-e82d-4937-ad42-d94c3d464b55/volumes" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.114352 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sqvrc" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.118726 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/798d080c-2565-4410-9cda-220d1154b8de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "798d080c-2565-4410-9cda-220d1154b8de" (UID: "798d080c-2565-4410-9cda-220d1154b8de"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.119135 5039 generic.go:334] "Generic (PLEG): container finished" podID="48be0b7f-4cb1-4c00-851a-7078ed9ccab0" containerID="b64200237104355f7f5f1cc6656503847ea902d272ec63a86f5fcc0f5a9a8b06" exitCode=0 Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.119164 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"798d080c-2565-4410-9cda-220d1154b8de","Type":"ContainerDied","Data":"c83d874abcdd3095947980187589ffbe8240a795dbfa1c7950d492e49c52b14e"} Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.119203 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"798d080c-2565-4410-9cda-220d1154b8de","Type":"ContainerDied","Data":"ac9c3b6b37674fedf8c8b15295048d619c8397558ab99d295146f52f94e72e27"} Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.119224 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sqvrc" event={"ID":"d4aa0600-fb12-4641-96a3-26cb56853bd3","Type":"ContainerDied","Data":"75b2b074c5e43fbf32830c5d4cc675c1c399f9e561bf52836c26d438f8856dc1"} Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.119244 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sqvrc" event={"ID":"d4aa0600-fb12-4641-96a3-26cb56853bd3","Type":"ContainerDied","Data":"c5c76b6a49f6c1df9cb002ed1e8b5632bf219b55a02f8d8bad87e1f74f732d0b"} Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.119255 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7df987bf59-vgqrf" event={"ID":"48be0b7f-4cb1-4c00-851a-7078ed9ccab0","Type":"ContainerDied","Data":"b64200237104355f7f5f1cc6656503847ea902d272ec63a86f5fcc0f5a9a8b06"} Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.119269 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7df987bf59-vgqrf" event={"ID":"48be0b7f-4cb1-4c00-851a-7078ed9ccab0","Type":"ContainerDied","Data":"9ac08f4c6f7c3c5ee88f8d788b5d888e94f9e00b0aa4576cecd9745edd924e1b"} Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.119296 5039 scope.go:117] "RemoveContainer" containerID="c83d874abcdd3095947980187589ffbe8240a795dbfa1c7950d492e49c52b14e" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.119327 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-7df987bf59-vgqrf" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.122697 5039 generic.go:334] "Generic (PLEG): container finished" podID="266dbee0-3c74-4820-8165-1955c6ca832a" containerID="edeb03fc7b1f7c78ab64ce18b567934eb7d265834e26ab22d317bef24cbcb1e7" exitCode=0 Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.122803 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"266dbee0-3c74-4820-8165-1955c6ca832a","Type":"ContainerDied","Data":"edeb03fc7b1f7c78ab64ce18b567934eb7d265834e26ab22d317bef24cbcb1e7"} Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.122905 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"266dbee0-3c74-4820-8165-1955c6ca832a","Type":"ContainerDied","Data":"4e970b27c6b08be090482e99d6bc8dc4ccd342764fbb2d360d9d3b5148fed0b9"} Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.122999 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.156335 5039 scope.go:117] "RemoveContainer" containerID="c83d874abcdd3095947980187589ffbe8240a795dbfa1c7950d492e49c52b14e" Jan 30 13:28:22 crc kubenswrapper[5039]: E0130 13:28:22.157210 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c83d874abcdd3095947980187589ffbe8240a795dbfa1c7950d492e49c52b14e\": container with ID starting with c83d874abcdd3095947980187589ffbe8240a795dbfa1c7950d492e49c52b14e not found: ID does not exist" containerID="c83d874abcdd3095947980187589ffbe8240a795dbfa1c7950d492e49c52b14e" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.157244 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c83d874abcdd3095947980187589ffbe8240a795dbfa1c7950d492e49c52b14e"} err="failed to get container status \"c83d874abcdd3095947980187589ffbe8240a795dbfa1c7950d492e49c52b14e\": rpc error: code = NotFound desc = could not find container \"c83d874abcdd3095947980187589ffbe8240a795dbfa1c7950d492e49c52b14e\": container with ID starting with c83d874abcdd3095947980187589ffbe8240a795dbfa1c7950d492e49c52b14e not found: ID does not exist" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.157267 5039 scope.go:117] "RemoveContainer" containerID="75b2b074c5e43fbf32830c5d4cc675c1c399f9e561bf52836c26d438f8856dc1" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.177856 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.181000 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56kwr\" (UniqueName: \"kubernetes.io/projected/798d080c-2565-4410-9cda-220d1154b8de-kube-api-access-56kwr\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.182969 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/798d080c-2565-4410-9cda-220d1154b8de-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.182998 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/798d080c-2565-4410-9cda-220d1154b8de-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.188924 
5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.188808 5039 scope.go:117] "RemoveContainer" containerID="75b2b074c5e43fbf32830c5d4cc675c1c399f9e561bf52836c26d438f8856dc1" Jan 30 13:28:22 crc kubenswrapper[5039]: E0130 13:28:22.205231 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75b2b074c5e43fbf32830c5d4cc675c1c399f9e561bf52836c26d438f8856dc1\": container with ID starting with 75b2b074c5e43fbf32830c5d4cc675c1c399f9e561bf52836c26d438f8856dc1 not found: ID does not exist" containerID="75b2b074c5e43fbf32830c5d4cc675c1c399f9e561bf52836c26d438f8856dc1" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.205328 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75b2b074c5e43fbf32830c5d4cc675c1c399f9e561bf52836c26d438f8856dc1"} err="failed to get container status \"75b2b074c5e43fbf32830c5d4cc675c1c399f9e561bf52836c26d438f8856dc1\": rpc error: code = NotFound desc = could not find container \"75b2b074c5e43fbf32830c5d4cc675c1c399f9e561bf52836c26d438f8856dc1\": container with ID starting with 75b2b074c5e43fbf32830c5d4cc675c1c399f9e561bf52836c26d438f8856dc1 not found: ID does not exist" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.205383 5039 scope.go:117] "RemoveContainer" containerID="b64200237104355f7f5f1cc6656503847ea902d272ec63a86f5fcc0f5a9a8b06" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.236457 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-7df987bf59-vgqrf"] Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.249081 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-7df987bf59-vgqrf"] Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.257373 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-sqvrc"] Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.258321 5039 scope.go:117] "RemoveContainer" containerID="999630fe82687672ff916af3c657da39f3cbb4c167e3ae06b0d1c3d7c3e75615" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.262726 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-sqvrc"] Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.334996 5039 scope.go:117] "RemoveContainer" containerID="b64200237104355f7f5f1cc6656503847ea902d272ec63a86f5fcc0f5a9a8b06" Jan 30 13:28:22 crc kubenswrapper[5039]: E0130 13:28:22.336161 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b64200237104355f7f5f1cc6656503847ea902d272ec63a86f5fcc0f5a9a8b06\": container with ID starting with b64200237104355f7f5f1cc6656503847ea902d272ec63a86f5fcc0f5a9a8b06 not found: ID does not exist" containerID="b64200237104355f7f5f1cc6656503847ea902d272ec63a86f5fcc0f5a9a8b06" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.336220 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b64200237104355f7f5f1cc6656503847ea902d272ec63a86f5fcc0f5a9a8b06"} err="failed to get container status \"b64200237104355f7f5f1cc6656503847ea902d272ec63a86f5fcc0f5a9a8b06\": rpc error: code = NotFound desc = could not find container \"b64200237104355f7f5f1cc6656503847ea902d272ec63a86f5fcc0f5a9a8b06\": container with ID starting with b64200237104355f7f5f1cc6656503847ea902d272ec63a86f5fcc0f5a9a8b06 not found: ID 
does not exist" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.336256 5039 scope.go:117] "RemoveContainer" containerID="999630fe82687672ff916af3c657da39f3cbb4c167e3ae06b0d1c3d7c3e75615" Jan 30 13:28:22 crc kubenswrapper[5039]: E0130 13:28:22.336745 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"999630fe82687672ff916af3c657da39f3cbb4c167e3ae06b0d1c3d7c3e75615\": container with ID starting with 999630fe82687672ff916af3c657da39f3cbb4c167e3ae06b0d1c3d7c3e75615 not found: ID does not exist" containerID="999630fe82687672ff916af3c657da39f3cbb4c167e3ae06b0d1c3d7c3e75615" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.336772 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"999630fe82687672ff916af3c657da39f3cbb4c167e3ae06b0d1c3d7c3e75615"} err="failed to get container status \"999630fe82687672ff916af3c657da39f3cbb4c167e3ae06b0d1c3d7c3e75615\": rpc error: code = NotFound desc = could not find container \"999630fe82687672ff916af3c657da39f3cbb4c167e3ae06b0d1c3d7c3e75615\": container with ID starting with 999630fe82687672ff916af3c657da39f3cbb4c167e3ae06b0d1c3d7c3e75615 not found: ID does not exist" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.336790 5039 scope.go:117] "RemoveContainer" containerID="edeb03fc7b1f7c78ab64ce18b567934eb7d265834e26ab22d317bef24cbcb1e7" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.366888 5039 scope.go:117] "RemoveContainer" containerID="edeb03fc7b1f7c78ab64ce18b567934eb7d265834e26ab22d317bef24cbcb1e7" Jan 30 13:28:22 crc kubenswrapper[5039]: E0130 13:28:22.369757 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edeb03fc7b1f7c78ab64ce18b567934eb7d265834e26ab22d317bef24cbcb1e7\": container with ID starting with edeb03fc7b1f7c78ab64ce18b567934eb7d265834e26ab22d317bef24cbcb1e7 not found: ID does not exist" containerID="edeb03fc7b1f7c78ab64ce18b567934eb7d265834e26ab22d317bef24cbcb1e7" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.369811 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edeb03fc7b1f7c78ab64ce18b567934eb7d265834e26ab22d317bef24cbcb1e7"} err="failed to get container status \"edeb03fc7b1f7c78ab64ce18b567934eb7d265834e26ab22d317bef24cbcb1e7\": rpc error: code = NotFound desc = could not find container \"edeb03fc7b1f7c78ab64ce18b567934eb7d265834e26ab22d317bef24cbcb1e7\": container with ID starting with edeb03fc7b1f7c78ab64ce18b567934eb7d265834e26ab22d317bef24cbcb1e7 not found: ID does not exist" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.434399 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.446723 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 13:28:23 crc kubenswrapper[5039]: I0130 13:28:23.565123 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-d68bccdc4-krd48" podUID="2125aae4-cb1a-4329-ba0a-68cc3661427b" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.156:9311/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 13:28:23 crc kubenswrapper[5039]: I0130 13:28:23.565207 5039 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/barbican-api-d68bccdc4-krd48" podUID="2125aae4-cb1a-4329-ba0a-68cc3661427b" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.156:9311/healthcheck\": context deadline exceeded" Jan 30 13:28:23 crc kubenswrapper[5039]: I0130 13:28:23.700421 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/memcached-0" podUID="c304bfee-961f-403c-a998-de879eedf9c9" containerName="memcached" probeResult="failure" output="dial tcp 10.217.0.104:11211: i/o timeout" Jan 30 13:28:24 crc kubenswrapper[5039]: I0130 13:28:24.111255 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="266dbee0-3c74-4820-8165-1955c6ca832a" path="/var/lib/kubelet/pods/266dbee0-3c74-4820-8165-1955c6ca832a/volumes" Jan 30 13:28:24 crc kubenswrapper[5039]: I0130 13:28:24.112330 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48be0b7f-4cb1-4c00-851a-7078ed9ccab0" path="/var/lib/kubelet/pods/48be0b7f-4cb1-4c00-851a-7078ed9ccab0/volumes" Jan 30 13:28:24 crc kubenswrapper[5039]: I0130 13:28:24.113964 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="798d080c-2565-4410-9cda-220d1154b8de" path="/var/lib/kubelet/pods/798d080c-2565-4410-9cda-220d1154b8de/volumes" Jan 30 13:28:24 crc kubenswrapper[5039]: I0130 13:28:24.115753 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4aa0600-fb12-4641-96a3-26cb56853bd3" path="/var/lib/kubelet/pods/d4aa0600-fb12-4641-96a3-26cb56853bd3/volumes" Jan 30 13:28:26 crc kubenswrapper[5039]: E0130 13:28:26.204613 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8 is running failed: container process not found" containerID="1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 13:28:26 crc kubenswrapper[5039]: E0130 13:28:26.205372 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8 is running failed: container process not found" containerID="1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 13:28:26 crc kubenswrapper[5039]: E0130 13:28:26.205479 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="664d5ee50096a705bfe00ba284ecf23de58063a3e74a3c5f1b12d176c74177c9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 13:28:26 crc kubenswrapper[5039]: E0130 13:28:26.205952 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8 is running failed: container process not found" containerID="1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 13:28:26 crc kubenswrapper[5039]: E0130 13:28:26.206001 5039 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-z6nkm" podUID="953eeac5-b943-4036-be33-58eb347c04ef" containerName="ovsdb-server" Jan 30 13:28:26 crc kubenswrapper[5039]: E0130 13:28:26.207185 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="664d5ee50096a705bfe00ba284ecf23de58063a3e74a3c5f1b12d176c74177c9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 13:28:26 crc kubenswrapper[5039]: E0130 13:28:26.209309 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="664d5ee50096a705bfe00ba284ecf23de58063a3e74a3c5f1b12d176c74177c9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 13:28:26 crc kubenswrapper[5039]: E0130 13:28:26.209356 5039 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-z6nkm" podUID="953eeac5-b943-4036-be33-58eb347c04ef" containerName="ovs-vswitchd" Jan 30 13:28:28 crc kubenswrapper[5039]: I0130 13:28:28.245244 5039 generic.go:334] "Generic (PLEG): container finished" podID="bc1469b7-cba0-47a5-b2cb-02e374f749da" containerID="9d161df965ec21065eefbec6b812cfd89de26b4b92a91f220eaf50e509cc7674" exitCode=0 Jan 30 13:28:28 crc kubenswrapper[5039]: I0130 13:28:28.245324 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75df786d6f-7k65j" event={"ID":"bc1469b7-cba0-47a5-b2cb-02e374f749da","Type":"ContainerDied","Data":"9d161df965ec21065eefbec6b812cfd89de26b4b92a91f220eaf50e509cc7674"} Jan 30 13:28:28 crc kubenswrapper[5039]: I0130 13:28:28.601876 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-75df786d6f-7k65j" Jan 30 13:28:28 crc kubenswrapper[5039]: I0130 13:28:28.790089 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-httpd-config\") pod \"bc1469b7-cba0-47a5-b2cb-02e374f749da\" (UID: \"bc1469b7-cba0-47a5-b2cb-02e374f749da\") " Jan 30 13:28:28 crc kubenswrapper[5039]: I0130 13:28:28.790160 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-combined-ca-bundle\") pod \"bc1469b7-cba0-47a5-b2cb-02e374f749da\" (UID: \"bc1469b7-cba0-47a5-b2cb-02e374f749da\") " Jan 30 13:28:28 crc kubenswrapper[5039]: I0130 13:28:28.790193 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-public-tls-certs\") pod \"bc1469b7-cba0-47a5-b2cb-02e374f749da\" (UID: \"bc1469b7-cba0-47a5-b2cb-02e374f749da\") " Jan 30 13:28:28 crc kubenswrapper[5039]: I0130 13:28:28.790227 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-internal-tls-certs\") pod \"bc1469b7-cba0-47a5-b2cb-02e374f749da\" (UID: \"bc1469b7-cba0-47a5-b2cb-02e374f749da\") " Jan 30 13:28:28 crc kubenswrapper[5039]: I0130 13:28:28.790252 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-config\") pod \"bc1469b7-cba0-47a5-b2cb-02e374f749da\" (UID: \"bc1469b7-cba0-47a5-b2cb-02e374f749da\") " Jan 30 13:28:28 crc kubenswrapper[5039]: I0130 13:28:28.791124 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-ovndb-tls-certs\") pod \"bc1469b7-cba0-47a5-b2cb-02e374f749da\" (UID: \"bc1469b7-cba0-47a5-b2cb-02e374f749da\") " Jan 30 13:28:28 crc kubenswrapper[5039]: I0130 13:28:28.791228 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trxg4\" (UniqueName: \"kubernetes.io/projected/bc1469b7-cba0-47a5-b2cb-02e374f749da-kube-api-access-trxg4\") pod \"bc1469b7-cba0-47a5-b2cb-02e374f749da\" (UID: \"bc1469b7-cba0-47a5-b2cb-02e374f749da\") " Jan 30 13:28:28 crc kubenswrapper[5039]: I0130 13:28:28.797628 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "bc1469b7-cba0-47a5-b2cb-02e374f749da" (UID: "bc1469b7-cba0-47a5-b2cb-02e374f749da"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:28 crc kubenswrapper[5039]: I0130 13:28:28.798520 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc1469b7-cba0-47a5-b2cb-02e374f749da-kube-api-access-trxg4" (OuterVolumeSpecName: "kube-api-access-trxg4") pod "bc1469b7-cba0-47a5-b2cb-02e374f749da" (UID: "bc1469b7-cba0-47a5-b2cb-02e374f749da"). InnerVolumeSpecName "kube-api-access-trxg4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:28 crc kubenswrapper[5039]: I0130 13:28:28.866983 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "bc1469b7-cba0-47a5-b2cb-02e374f749da" (UID: "bc1469b7-cba0-47a5-b2cb-02e374f749da"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:28 crc kubenswrapper[5039]: I0130 13:28:28.878360 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bc1469b7-cba0-47a5-b2cb-02e374f749da" (UID: "bc1469b7-cba0-47a5-b2cb-02e374f749da"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:28 crc kubenswrapper[5039]: I0130 13:28:28.889087 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-config" (OuterVolumeSpecName: "config") pod "bc1469b7-cba0-47a5-b2cb-02e374f749da" (UID: "bc1469b7-cba0-47a5-b2cb-02e374f749da"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:28 crc kubenswrapper[5039]: I0130 13:28:28.892543 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "bc1469b7-cba0-47a5-b2cb-02e374f749da" (UID: "bc1469b7-cba0-47a5-b2cb-02e374f749da"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:28 crc kubenswrapper[5039]: I0130 13:28:28.893076 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-public-tls-certs\") pod \"bc1469b7-cba0-47a5-b2cb-02e374f749da\" (UID: \"bc1469b7-cba0-47a5-b2cb-02e374f749da\") " Jan 30 13:28:28 crc kubenswrapper[5039]: W0130 13:28:28.893234 5039 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/bc1469b7-cba0-47a5-b2cb-02e374f749da/volumes/kubernetes.io~secret/public-tls-certs Jan 30 13:28:28 crc kubenswrapper[5039]: I0130 13:28:28.893250 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "bc1469b7-cba0-47a5-b2cb-02e374f749da" (UID: "bc1469b7-cba0-47a5-b2cb-02e374f749da"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:28 crc kubenswrapper[5039]: I0130 13:28:28.893413 5039 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:28 crc kubenswrapper[5039]: I0130 13:28:28.893431 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:28 crc kubenswrapper[5039]: I0130 13:28:28.893446 5039 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:28 crc kubenswrapper[5039]: I0130 13:28:28.893457 5039 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:28 crc kubenswrapper[5039]: I0130 13:28:28.893482 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:28 crc kubenswrapper[5039]: I0130 13:28:28.893493 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trxg4\" (UniqueName: \"kubernetes.io/projected/bc1469b7-cba0-47a5-b2cb-02e374f749da-kube-api-access-trxg4\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:28 crc kubenswrapper[5039]: I0130 13:28:28.901317 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "bc1469b7-cba0-47a5-b2cb-02e374f749da" (UID: "bc1469b7-cba0-47a5-b2cb-02e374f749da"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:28 crc kubenswrapper[5039]: I0130 13:28:28.994288 5039 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:29 crc kubenswrapper[5039]: I0130 13:28:29.257832 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75df786d6f-7k65j" event={"ID":"bc1469b7-cba0-47a5-b2cb-02e374f749da","Type":"ContainerDied","Data":"68ca238552f48a2278287e46aa748e56a5416468365b8a491b7c39c3f968cdf3"} Jan 30 13:28:29 crc kubenswrapper[5039]: I0130 13:28:29.257927 5039 scope.go:117] "RemoveContainer" containerID="a89bb4f19be7f7518ba29b131abd27b114102b0ebb9ed30752ce73702acdfcf2" Jan 30 13:28:29 crc kubenswrapper[5039]: I0130 13:28:29.259212 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-75df786d6f-7k65j" Jan 30 13:28:29 crc kubenswrapper[5039]: I0130 13:28:29.287267 5039 scope.go:117] "RemoveContainer" containerID="9d161df965ec21065eefbec6b812cfd89de26b4b92a91f220eaf50e509cc7674" Jan 30 13:28:29 crc kubenswrapper[5039]: I0130 13:28:29.316143 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-75df786d6f-7k65j"] Jan 30 13:28:29 crc kubenswrapper[5039]: I0130 13:28:29.321308 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-75df786d6f-7k65j"] Jan 30 13:28:30 crc kubenswrapper[5039]: I0130 13:28:30.110071 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc1469b7-cba0-47a5-b2cb-02e374f749da" path="/var/lib/kubelet/pods/bc1469b7-cba0-47a5-b2cb-02e374f749da/volumes" Jan 30 13:28:31 crc kubenswrapper[5039]: E0130 13:28:31.204696 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8 is running failed: container process not found" containerID="1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 13:28:31 crc kubenswrapper[5039]: E0130 13:28:31.205364 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8 is running failed: container process not found" containerID="1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 13:28:31 crc kubenswrapper[5039]: E0130 13:28:31.205987 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8 is running failed: container process not found" containerID="1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 13:28:31 crc kubenswrapper[5039]: E0130 13:28:31.206066 5039 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-z6nkm" podUID="953eeac5-b943-4036-be33-58eb347c04ef" containerName="ovsdb-server" Jan 30 13:28:31 crc kubenswrapper[5039]: E0130 13:28:31.206563 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="664d5ee50096a705bfe00ba284ecf23de58063a3e74a3c5f1b12d176c74177c9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 13:28:31 crc kubenswrapper[5039]: E0130 13:28:31.214485 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="664d5ee50096a705bfe00ba284ecf23de58063a3e74a3c5f1b12d176c74177c9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 13:28:31 crc kubenswrapper[5039]: E0130 
13:28:31.216793 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="664d5ee50096a705bfe00ba284ecf23de58063a3e74a3c5f1b12d176c74177c9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 13:28:31 crc kubenswrapper[5039]: E0130 13:28:31.216857 5039 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-z6nkm" podUID="953eeac5-b943-4036-be33-58eb347c04ef" containerName="ovs-vswitchd" Jan 30 13:28:36 crc kubenswrapper[5039]: E0130 13:28:36.203916 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8 is running failed: container process not found" containerID="1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 13:28:36 crc kubenswrapper[5039]: E0130 13:28:36.205308 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8 is running failed: container process not found" containerID="1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 13:28:36 crc kubenswrapper[5039]: E0130 13:28:36.205784 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8 is running failed: container process not found" containerID="1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 13:28:36 crc kubenswrapper[5039]: E0130 13:28:36.205891 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="664d5ee50096a705bfe00ba284ecf23de58063a3e74a3c5f1b12d176c74177c9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 13:28:36 crc kubenswrapper[5039]: E0130 13:28:36.205889 5039 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-z6nkm" podUID="953eeac5-b943-4036-be33-58eb347c04ef" containerName="ovsdb-server" Jan 30 13:28:36 crc kubenswrapper[5039]: E0130 13:28:36.208219 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="664d5ee50096a705bfe00ba284ecf23de58063a3e74a3c5f1b12d176c74177c9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 13:28:36 crc kubenswrapper[5039]: E0130 13:28:36.210975 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code 
= Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="664d5ee50096a705bfe00ba284ecf23de58063a3e74a3c5f1b12d176c74177c9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 13:28:36 crc kubenswrapper[5039]: E0130 13:28:36.211229 5039 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-z6nkm" podUID="953eeac5-b943-4036-be33-58eb347c04ef" containerName="ovs-vswitchd" Jan 30 13:28:37 crc kubenswrapper[5039]: I0130 13:28:37.742976 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:28:37 crc kubenswrapper[5039]: I0130 13:28:37.743118 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:28:37 crc kubenswrapper[5039]: I0130 13:28:37.743196 5039 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" Jan 30 13:28:37 crc kubenswrapper[5039]: I0130 13:28:37.744173 5039 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"794f242d7a377f48231607395088aab9150aeb8ff8f26262235590d766c6a0f4"} pod="openshift-machine-config-operator/machine-config-daemon-t2btn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 13:28:37 crc kubenswrapper[5039]: I0130 13:28:37.744511 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" containerID="cri-o://794f242d7a377f48231607395088aab9150aeb8ff8f26262235590d766c6a0f4" gracePeriod=600 Jan 30 13:28:38 crc kubenswrapper[5039]: I0130 13:28:38.419698 5039 generic.go:334] "Generic (PLEG): container finished" podID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerID="794f242d7a377f48231607395088aab9150aeb8ff8f26262235590d766c6a0f4" exitCode=0 Jan 30 13:28:38 crc kubenswrapper[5039]: I0130 13:28:38.419928 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerDied","Data":"794f242d7a377f48231607395088aab9150aeb8ff8f26262235590d766c6a0f4"} Jan 30 13:28:38 crc kubenswrapper[5039]: I0130 13:28:38.419956 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerStarted","Data":"61f8452da6d760b5eb776cbdf6b440cda0e73329e9fe07bebb5180efabf43169"} Jan 30 13:28:38 crc kubenswrapper[5039]: I0130 13:28:38.419973 5039 scope.go:117] "RemoveContainer" containerID="119b1bd0e0bf998c735e7f9b382fd07971ec4cf601e1a066f9ce6f8c22b79521" Jan 30 13:28:41 crc 
kubenswrapper[5039]: E0130 13:28:41.204089 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8 is running failed: container process not found" containerID="1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 13:28:41 crc kubenswrapper[5039]: E0130 13:28:41.206960 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8 is running failed: container process not found" containerID="1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 13:28:41 crc kubenswrapper[5039]: E0130 13:28:41.207056 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="664d5ee50096a705bfe00ba284ecf23de58063a3e74a3c5f1b12d176c74177c9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 13:28:41 crc kubenswrapper[5039]: E0130 13:28:41.208304 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8 is running failed: container process not found" containerID="1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 13:28:41 crc kubenswrapper[5039]: E0130 13:28:41.208393 5039 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-z6nkm" podUID="953eeac5-b943-4036-be33-58eb347c04ef" containerName="ovsdb-server" Jan 30 13:28:41 crc kubenswrapper[5039]: E0130 13:28:41.210121 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="664d5ee50096a705bfe00ba284ecf23de58063a3e74a3c5f1b12d176c74177c9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 13:28:41 crc kubenswrapper[5039]: E0130 13:28:41.212262 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="664d5ee50096a705bfe00ba284ecf23de58063a3e74a3c5f1b12d176c74177c9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 13:28:41 crc kubenswrapper[5039]: E0130 13:28:41.212333 5039 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-z6nkm" podUID="953eeac5-b943-4036-be33-58eb347c04ef" containerName="ovs-vswitchd" Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.496850 5039 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-ovs-z6nkm_953eeac5-b943-4036-be33-58eb347c04ef/ovs-vswitchd/0.log" Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.498940 5039 generic.go:334] "Generic (PLEG): container finished" podID="953eeac5-b943-4036-be33-58eb347c04ef" containerID="664d5ee50096a705bfe00ba284ecf23de58063a3e74a3c5f1b12d176c74177c9" exitCode=137 Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.499075 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-z6nkm" event={"ID":"953eeac5-b943-4036-be33-58eb347c04ef","Type":"ContainerDied","Data":"664d5ee50096a705bfe00ba284ecf23de58063a3e74a3c5f1b12d176c74177c9"} Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.499161 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-z6nkm" event={"ID":"953eeac5-b943-4036-be33-58eb347c04ef","Type":"ContainerDied","Data":"ed046467dbbc31222f552da2ca60c59d229048d7b72c5559ee956b018c375fa0"} Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.499185 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed046467dbbc31222f552da2ca60c59d229048d7b72c5559ee956b018c375fa0" Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.520462 5039 generic.go:334] "Generic (PLEG): container finished" podID="8ada089a-5096-4658-829e-46ed96867c7e" containerID="b33766b9c3d3b33509c3333c9cea033b788bc6b8942e381a00e38516d0deaeb1" exitCode=137 Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.520505 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ada089a-5096-4658-829e-46ed96867c7e","Type":"ContainerDied","Data":"b33766b9c3d3b33509c3333c9cea033b788bc6b8942e381a00e38516d0deaeb1"} Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.526788 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-z6nkm_953eeac5-b943-4036-be33-58eb347c04ef/ovs-vswitchd/0.log" Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.528621 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-z6nkm" Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.643402 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/953eeac5-b943-4036-be33-58eb347c04ef-var-lib\") pod \"953eeac5-b943-4036-be33-58eb347c04ef\" (UID: \"953eeac5-b943-4036-be33-58eb347c04ef\") " Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.643488 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/953eeac5-b943-4036-be33-58eb347c04ef-etc-ovs\") pod \"953eeac5-b943-4036-be33-58eb347c04ef\" (UID: \"953eeac5-b943-4036-be33-58eb347c04ef\") " Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.643504 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/953eeac5-b943-4036-be33-58eb347c04ef-var-lib" (OuterVolumeSpecName: "var-lib") pod "953eeac5-b943-4036-be33-58eb347c04ef" (UID: "953eeac5-b943-4036-be33-58eb347c04ef"). InnerVolumeSpecName "var-lib". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.643522 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/953eeac5-b943-4036-be33-58eb347c04ef-var-run\") pod \"953eeac5-b943-4036-be33-58eb347c04ef\" (UID: \"953eeac5-b943-4036-be33-58eb347c04ef\") " Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.643597 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/953eeac5-b943-4036-be33-58eb347c04ef-scripts\") pod \"953eeac5-b943-4036-be33-58eb347c04ef\" (UID: \"953eeac5-b943-4036-be33-58eb347c04ef\") " Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.643584 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/953eeac5-b943-4036-be33-58eb347c04ef-var-run" (OuterVolumeSpecName: "var-run") pod "953eeac5-b943-4036-be33-58eb347c04ef" (UID: "953eeac5-b943-4036-be33-58eb347c04ef"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.643616 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/953eeac5-b943-4036-be33-58eb347c04ef-etc-ovs" (OuterVolumeSpecName: "etc-ovs") pod "953eeac5-b943-4036-be33-58eb347c04ef" (UID: "953eeac5-b943-4036-be33-58eb347c04ef"). InnerVolumeSpecName "etc-ovs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.643704 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/953eeac5-b943-4036-be33-58eb347c04ef-var-log\") pod \"953eeac5-b943-4036-be33-58eb347c04ef\" (UID: \"953eeac5-b943-4036-be33-58eb347c04ef\") " Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.643793 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mv74\" (UniqueName: \"kubernetes.io/projected/953eeac5-b943-4036-be33-58eb347c04ef-kube-api-access-7mv74\") pod \"953eeac5-b943-4036-be33-58eb347c04ef\" (UID: \"953eeac5-b943-4036-be33-58eb347c04ef\") " Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.643817 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/953eeac5-b943-4036-be33-58eb347c04ef-var-log" (OuterVolumeSpecName: "var-log") pod "953eeac5-b943-4036-be33-58eb347c04ef" (UID: "953eeac5-b943-4036-be33-58eb347c04ef"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.644444 5039 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/953eeac5-b943-4036-be33-58eb347c04ef-var-log\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.644464 5039 reconciler_common.go:293] "Volume detached for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/953eeac5-b943-4036-be33-58eb347c04ef-var-lib\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.644474 5039 reconciler_common.go:293] "Volume detached for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/953eeac5-b943-4036-be33-58eb347c04ef-etc-ovs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.644481 5039 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/953eeac5-b943-4036-be33-58eb347c04ef-var-run\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.645472 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/953eeac5-b943-4036-be33-58eb347c04ef-scripts" (OuterVolumeSpecName: "scripts") pod "953eeac5-b943-4036-be33-58eb347c04ef" (UID: "953eeac5-b943-4036-be33-58eb347c04ef"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.654354 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/953eeac5-b943-4036-be33-58eb347c04ef-kube-api-access-7mv74" (OuterVolumeSpecName: "kube-api-access-7mv74") pod "953eeac5-b943-4036-be33-58eb347c04ef" (UID: "953eeac5-b943-4036-be33-58eb347c04ef"). InnerVolumeSpecName "kube-api-access-7mv74". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.745526 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mv74\" (UniqueName: \"kubernetes.io/projected/953eeac5-b943-4036-be33-58eb347c04ef-kube-api-access-7mv74\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.745564 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/953eeac5-b943-4036-be33-58eb347c04ef-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.810714 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.846776 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tm5h\" (UniqueName: \"kubernetes.io/projected/8ada089a-5096-4658-829e-46ed96867c7e-kube-api-access-9tm5h\") pod \"8ada089a-5096-4658-829e-46ed96867c7e\" (UID: \"8ada089a-5096-4658-829e-46ed96867c7e\") " Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.846932 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8ada089a-5096-4658-829e-46ed96867c7e-etc-swift\") pod \"8ada089a-5096-4658-829e-46ed96867c7e\" (UID: \"8ada089a-5096-4658-829e-46ed96867c7e\") " Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.846964 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/8ada089a-5096-4658-829e-46ed96867c7e-lock\") pod \"8ada089a-5096-4658-829e-46ed96867c7e\" (UID: \"8ada089a-5096-4658-829e-46ed96867c7e\") " Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.847077 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ada089a-5096-4658-829e-46ed96867c7e-combined-ca-bundle\") pod \"8ada089a-5096-4658-829e-46ed96867c7e\" (UID: \"8ada089a-5096-4658-829e-46ed96867c7e\") " Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.847107 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"8ada089a-5096-4658-829e-46ed96867c7e\" (UID: \"8ada089a-5096-4658-829e-46ed96867c7e\") " Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.847177 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8ada089a-5096-4658-829e-46ed96867c7e-cache\") pod \"8ada089a-5096-4658-829e-46ed96867c7e\" (UID: \"8ada089a-5096-4658-829e-46ed96867c7e\") " Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.848105 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ada089a-5096-4658-829e-46ed96867c7e-cache" (OuterVolumeSpecName: "cache") pod "8ada089a-5096-4658-829e-46ed96867c7e" (UID: "8ada089a-5096-4658-829e-46ed96867c7e"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.852957 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ada089a-5096-4658-829e-46ed96867c7e-lock" (OuterVolumeSpecName: "lock") pod "8ada089a-5096-4658-829e-46ed96867c7e" (UID: "8ada089a-5096-4658-829e-46ed96867c7e"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.852926 5039 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8ada089a-5096-4658-829e-46ed96867c7e-cache\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.853132 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ada089a-5096-4658-829e-46ed96867c7e-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "8ada089a-5096-4658-829e-46ed96867c7e" (UID: "8ada089a-5096-4658-829e-46ed96867c7e"). 
InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.857936 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ada089a-5096-4658-829e-46ed96867c7e-kube-api-access-9tm5h" (OuterVolumeSpecName: "kube-api-access-9tm5h") pod "8ada089a-5096-4658-829e-46ed96867c7e" (UID: "8ada089a-5096-4658-829e-46ed96867c7e"). InnerVolumeSpecName "kube-api-access-9tm5h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.859203 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "swift") pod "8ada089a-5096-4658-829e-46ed96867c7e" (UID: "8ada089a-5096-4658-829e-46ed96867c7e"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.954254 5039 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8ada089a-5096-4658-829e-46ed96867c7e-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.954322 5039 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/8ada089a-5096-4658-829e-46ed96867c7e-lock\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.954381 5039 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.954399 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9tm5h\" (UniqueName: \"kubernetes.io/projected/8ada089a-5096-4658-829e-46ed96867c7e-kube-api-access-9tm5h\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.976566 5039 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 30 13:28:44 crc kubenswrapper[5039]: I0130 13:28:44.056522 5039 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:44 crc kubenswrapper[5039]: I0130 13:28:44.184208 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ada089a-5096-4658-829e-46ed96867c7e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8ada089a-5096-4658-829e-46ed96867c7e" (UID: "8ada089a-5096-4658-829e-46ed96867c7e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:44 crc kubenswrapper[5039]: I0130 13:28:44.259174 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ada089a-5096-4658-829e-46ed96867c7e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:44 crc kubenswrapper[5039]: I0130 13:28:44.542235 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-z6nkm" Jan 30 13:28:44 crc kubenswrapper[5039]: I0130 13:28:44.542246 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ada089a-5096-4658-829e-46ed96867c7e","Type":"ContainerDied","Data":"fb2dfe486000dec252178b29e94c43034fa100a8afb97586f748ed238b540b1e"} Jan 30 13:28:44 crc kubenswrapper[5039]: I0130 13:28:44.542344 5039 scope.go:117] "RemoveContainer" containerID="b33766b9c3d3b33509c3333c9cea033b788bc6b8942e381a00e38516d0deaeb1" Jan 30 13:28:44 crc kubenswrapper[5039]: I0130 13:28:44.542403 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 30 13:28:44 crc kubenswrapper[5039]: I0130 13:28:44.585557 5039 scope.go:117] "RemoveContainer" containerID="f2d984c92bde9d5613eeb38621a8af92136193a55538f05717915d1bde3264df" Jan 30 13:28:44 crc kubenswrapper[5039]: I0130 13:28:44.589767 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-z6nkm"] Jan 30 13:28:44 crc kubenswrapper[5039]: I0130 13:28:44.614339 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ovs-z6nkm"] Jan 30 13:28:44 crc kubenswrapper[5039]: I0130 13:28:44.621719 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Jan 30 13:28:44 crc kubenswrapper[5039]: I0130 13:28:44.627793 5039 scope.go:117] "RemoveContainer" containerID="15cad4c835a7ea15a16cc7a14b50750d2833b7e260d8bb3166f6679d6cd024bc" Jan 30 13:28:44 crc kubenswrapper[5039]: I0130 13:28:44.628482 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-storage-0"] Jan 30 13:28:44 crc kubenswrapper[5039]: I0130 13:28:44.651302 5039 scope.go:117] "RemoveContainer" containerID="5ba1fa28c490036b77df42fd557a82a136b5d4470aacbcf035106a2aa9a5c19c" Jan 30 13:28:44 crc kubenswrapper[5039]: I0130 13:28:44.670568 5039 scope.go:117] "RemoveContainer" containerID="ddfd428ecd993351c674d784439b36da1f4749c251689b43fddc8f90227f4508" Jan 30 13:28:44 crc kubenswrapper[5039]: I0130 13:28:44.690272 5039 scope.go:117] "RemoveContainer" containerID="5205854bc586c085d9a8181d38c8a593892643b626180d99562c81611b88b68b" Jan 30 13:28:44 crc kubenswrapper[5039]: I0130 13:28:44.705569 5039 scope.go:117] "RemoveContainer" containerID="154eaf7906ffca8c1b0afe8de8ea1d908782a67ddbbd3939ea4855866e582d9e" Jan 30 13:28:44 crc kubenswrapper[5039]: I0130 13:28:44.729811 5039 scope.go:117] "RemoveContainer" containerID="eb5df1653f803341d6a4973ea612f45188b265af8c41b3c90d6691d5c611b9c2" Jan 30 13:28:44 crc kubenswrapper[5039]: I0130 13:28:44.758800 5039 scope.go:117] "RemoveContainer" containerID="a752a70bb4f53e459731183ec59874ee325b0e767cc385834cb7df89532a1aec" Jan 30 13:28:44 crc kubenswrapper[5039]: I0130 13:28:44.802509 5039 scope.go:117] "RemoveContainer" containerID="b0ee602fd935197661ffbde70a60dd36d9924c2f4817add1f894ac9adac66322" Jan 30 13:28:44 crc kubenswrapper[5039]: I0130 13:28:44.830354 5039 scope.go:117] "RemoveContainer" containerID="29f3a517359c4166dbc7caad96c4a4e2cb91f850e2c881a59372b19e9eedcf08" Jan 30 13:28:44 crc kubenswrapper[5039]: I0130 13:28:44.853175 5039 scope.go:117] "RemoveContainer" containerID="4bf0094e462d7cc7679bbfe7a7bc2c0d4592c1307b816d192d6fc42e092c3617" Jan 30 13:28:44 crc kubenswrapper[5039]: I0130 13:28:44.871887 5039 scope.go:117] "RemoveContainer" containerID="fd878f745d4316bd7f334db23529af3d98a35240ec3295969bd07b87d5376409" Jan 30 13:28:44 crc kubenswrapper[5039]: I0130 13:28:44.896122 
5039 scope.go:117] "RemoveContainer" containerID="488e3367a6a8f8bce689530e4343a6e494edfb4a9ae6c3c4d1a46d9f1bf6df2d" Jan 30 13:28:44 crc kubenswrapper[5039]: I0130 13:28:44.922896 5039 scope.go:117] "RemoveContainer" containerID="ba202a942609a01368fff886e42c540f33bb7959b6b854acea880eea7d0585f3" Jan 30 13:28:46 crc kubenswrapper[5039]: I0130 13:28:46.110534 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ada089a-5096-4658-829e-46ed96867c7e" path="/var/lib/kubelet/pods/8ada089a-5096-4658-829e-46ed96867c7e/volumes" Jan 30 13:28:46 crc kubenswrapper[5039]: I0130 13:28:46.114822 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="953eeac5-b943-4036-be33-58eb347c04ef" path="/var/lib/kubelet/pods/953eeac5-b943-4036-be33-58eb347c04ef/volumes" Jan 30 13:28:47 crc kubenswrapper[5039]: I0130 13:28:47.934218 5039 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod9c8f6794-a2c1-4d54-a048-71db0a14213e"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod9c8f6794-a2c1-4d54-a048-71db0a14213e] : Timed out while waiting for systemd to remove kubepods-besteffort-pod9c8f6794_a2c1_4d54_a048_71db0a14213e.slice" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.198679 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-84b866898f-5xs7l" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.256672 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fcd8c24d-b3db-41a0-ac70-d13cd3f2d663-config-data-custom\") pod \"fcd8c24d-b3db-41a0-ac70-d13cd3f2d663\" (UID: \"fcd8c24d-b3db-41a0-ac70-d13cd3f2d663\") " Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.256725 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fcd8c24d-b3db-41a0-ac70-d13cd3f2d663-logs\") pod \"fcd8c24d-b3db-41a0-ac70-d13cd3f2d663\" (UID: \"fcd8c24d-b3db-41a0-ac70-d13cd3f2d663\") " Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.256788 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcd8c24d-b3db-41a0-ac70-d13cd3f2d663-combined-ca-bundle\") pod \"fcd8c24d-b3db-41a0-ac70-d13cd3f2d663\" (UID: \"fcd8c24d-b3db-41a0-ac70-d13cd3f2d663\") " Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.256814 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2dx2\" (UniqueName: \"kubernetes.io/projected/fcd8c24d-b3db-41a0-ac70-d13cd3f2d663-kube-api-access-d2dx2\") pod \"fcd8c24d-b3db-41a0-ac70-d13cd3f2d663\" (UID: \"fcd8c24d-b3db-41a0-ac70-d13cd3f2d663\") " Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.256851 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcd8c24d-b3db-41a0-ac70-d13cd3f2d663-config-data\") pod \"fcd8c24d-b3db-41a0-ac70-d13cd3f2d663\" (UID: \"fcd8c24d-b3db-41a0-ac70-d13cd3f2d663\") " Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.257512 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcd8c24d-b3db-41a0-ac70-d13cd3f2d663-logs" (OuterVolumeSpecName: "logs") pod "fcd8c24d-b3db-41a0-ac70-d13cd3f2d663" (UID: "fcd8c24d-b3db-41a0-ac70-d13cd3f2d663"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.262421 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcd8c24d-b3db-41a0-ac70-d13cd3f2d663-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "fcd8c24d-b3db-41a0-ac70-d13cd3f2d663" (UID: "fcd8c24d-b3db-41a0-ac70-d13cd3f2d663"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.271840 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcd8c24d-b3db-41a0-ac70-d13cd3f2d663-kube-api-access-d2dx2" (OuterVolumeSpecName: "kube-api-access-d2dx2") pod "fcd8c24d-b3db-41a0-ac70-d13cd3f2d663" (UID: "fcd8c24d-b3db-41a0-ac70-d13cd3f2d663"). InnerVolumeSpecName "kube-api-access-d2dx2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.283046 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcd8c24d-b3db-41a0-ac70-d13cd3f2d663-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fcd8c24d-b3db-41a0-ac70-d13cd3f2d663" (UID: "fcd8c24d-b3db-41a0-ac70-d13cd3f2d663"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.300956 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-b755c4586-qglmf" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.306241 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcd8c24d-b3db-41a0-ac70-d13cd3f2d663-config-data" (OuterVolumeSpecName: "config-data") pod "fcd8c24d-b3db-41a0-ac70-d13cd3f2d663" (UID: "fcd8c24d-b3db-41a0-ac70-d13cd3f2d663"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.358243 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/749976f6-833a-4563-992a-f639cb1552e0-combined-ca-bundle\") pod \"749976f6-833a-4563-992a-f639cb1552e0\" (UID: \"749976f6-833a-4563-992a-f639cb1552e0\") " Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.358294 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/749976f6-833a-4563-992a-f639cb1552e0-config-data-custom\") pod \"749976f6-833a-4563-992a-f639cb1552e0\" (UID: \"749976f6-833a-4563-992a-f639cb1552e0\") " Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.358355 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/749976f6-833a-4563-992a-f639cb1552e0-logs\") pod \"749976f6-833a-4563-992a-f639cb1552e0\" (UID: \"749976f6-833a-4563-992a-f639cb1552e0\") " Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.358374 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/749976f6-833a-4563-992a-f639cb1552e0-config-data\") pod \"749976f6-833a-4563-992a-f639cb1552e0\" (UID: \"749976f6-833a-4563-992a-f639cb1552e0\") " Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.358404 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7tkw\" (UniqueName: \"kubernetes.io/projected/749976f6-833a-4563-992a-f639cb1552e0-kube-api-access-j7tkw\") pod \"749976f6-833a-4563-992a-f639cb1552e0\" (UID: \"749976f6-833a-4563-992a-f639cb1552e0\") " Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.358571 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcd8c24d-b3db-41a0-ac70-d13cd3f2d663-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.358583 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2dx2\" (UniqueName: \"kubernetes.io/projected/fcd8c24d-b3db-41a0-ac70-d13cd3f2d663-kube-api-access-d2dx2\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.358593 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcd8c24d-b3db-41a0-ac70-d13cd3f2d663-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.358601 5039 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fcd8c24d-b3db-41a0-ac70-d13cd3f2d663-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.358609 5039 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fcd8c24d-b3db-41a0-ac70-d13cd3f2d663-logs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.359579 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/749976f6-833a-4563-992a-f639cb1552e0-logs" (OuterVolumeSpecName: "logs") pod "749976f6-833a-4563-992a-f639cb1552e0" (UID: "749976f6-833a-4563-992a-f639cb1552e0"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.361550 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/749976f6-833a-4563-992a-f639cb1552e0-kube-api-access-j7tkw" (OuterVolumeSpecName: "kube-api-access-j7tkw") pod "749976f6-833a-4563-992a-f639cb1552e0" (UID: "749976f6-833a-4563-992a-f639cb1552e0"). InnerVolumeSpecName "kube-api-access-j7tkw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.362147 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/749976f6-833a-4563-992a-f639cb1552e0-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "749976f6-833a-4563-992a-f639cb1552e0" (UID: "749976f6-833a-4563-992a-f639cb1552e0"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.380641 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/749976f6-833a-4563-992a-f639cb1552e0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "749976f6-833a-4563-992a-f639cb1552e0" (UID: "749976f6-833a-4563-992a-f639cb1552e0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.409239 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/749976f6-833a-4563-992a-f639cb1552e0-config-data" (OuterVolumeSpecName: "config-data") pod "749976f6-833a-4563-992a-f639cb1552e0" (UID: "749976f6-833a-4563-992a-f639cb1552e0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.460566 5039 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/749976f6-833a-4563-992a-f639cb1552e0-logs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.460620 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/749976f6-833a-4563-992a-f639cb1552e0-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.460639 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7tkw\" (UniqueName: \"kubernetes.io/projected/749976f6-833a-4563-992a-f639cb1552e0-kube-api-access-j7tkw\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.460662 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/749976f6-833a-4563-992a-f639cb1552e0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.460681 5039 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/749976f6-833a-4563-992a-f639cb1552e0-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.606291 5039 generic.go:334] "Generic (PLEG): container finished" podID="749976f6-833a-4563-992a-f639cb1552e0" containerID="9e9b7dc4c4eeb7c79acaa82914f2e667402c8191ab36c2ac35a7df3a32d5939f" exitCode=137 Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.606437 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-b755c4586-qglmf" event={"ID":"749976f6-833a-4563-992a-f639cb1552e0","Type":"ContainerDied","Data":"9e9b7dc4c4eeb7c79acaa82914f2e667402c8191ab36c2ac35a7df3a32d5939f"} Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.606477 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-b755c4586-qglmf" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.606561 5039 scope.go:117] "RemoveContainer" containerID="9e9b7dc4c4eeb7c79acaa82914f2e667402c8191ab36c2ac35a7df3a32d5939f" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.606484 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-b755c4586-qglmf" event={"ID":"749976f6-833a-4563-992a-f639cb1552e0","Type":"ContainerDied","Data":"ff576c7005d28c132146f8d7622e9c25699568a19d4a068a4347fcd5993b44d5"} Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.609969 5039 generic.go:334] "Generic (PLEG): container finished" podID="fcd8c24d-b3db-41a0-ac70-d13cd3f2d663" containerID="efdca119d3c9dd7c2f3bbd147286c35f1dbba09a77a04383a7563932b124c58d" exitCode=137 Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.610005 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-84b866898f-5xs7l" event={"ID":"fcd8c24d-b3db-41a0-ac70-d13cd3f2d663","Type":"ContainerDied","Data":"efdca119d3c9dd7c2f3bbd147286c35f1dbba09a77a04383a7563932b124c58d"} Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.610039 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-84b866898f-5xs7l" event={"ID":"fcd8c24d-b3db-41a0-ac70-d13cd3f2d663","Type":"ContainerDied","Data":"3f4d71f301631a43e021da03302a7c0831792fa18e92bc206ad16b4f64e076bf"} Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.610800 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-84b866898f-5xs7l" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.633773 5039 scope.go:117] "RemoveContainer" containerID="3020cc9e4acad53ed9c6f1145cd86d42ffb6ee4fe0b6bc05ad658ca921124eb4" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.657551 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-b755c4586-qglmf"] Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.664918 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-b755c4586-qglmf"] Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.711197 5039 scope.go:117] "RemoveContainer" containerID="9e9b7dc4c4eeb7c79acaa82914f2e667402c8191ab36c2ac35a7df3a32d5939f" Jan 30 13:28:49 crc kubenswrapper[5039]: E0130 13:28:49.728219 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e9b7dc4c4eeb7c79acaa82914f2e667402c8191ab36c2ac35a7df3a32d5939f\": container with ID starting with 9e9b7dc4c4eeb7c79acaa82914f2e667402c8191ab36c2ac35a7df3a32d5939f not found: ID does not exist" containerID="9e9b7dc4c4eeb7c79acaa82914f2e667402c8191ab36c2ac35a7df3a32d5939f" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.728281 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e9b7dc4c4eeb7c79acaa82914f2e667402c8191ab36c2ac35a7df3a32d5939f"} err="failed to get container status \"9e9b7dc4c4eeb7c79acaa82914f2e667402c8191ab36c2ac35a7df3a32d5939f\": rpc error: code = NotFound desc = could not find container \"9e9b7dc4c4eeb7c79acaa82914f2e667402c8191ab36c2ac35a7df3a32d5939f\": container with ID starting with 9e9b7dc4c4eeb7c79acaa82914f2e667402c8191ab36c2ac35a7df3a32d5939f not found: ID does not exist" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.728308 5039 scope.go:117] "RemoveContainer" 
containerID="3020cc9e4acad53ed9c6f1145cd86d42ffb6ee4fe0b6bc05ad658ca921124eb4" Jan 30 13:28:49 crc kubenswrapper[5039]: E0130 13:28:49.729460 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3020cc9e4acad53ed9c6f1145cd86d42ffb6ee4fe0b6bc05ad658ca921124eb4\": container with ID starting with 3020cc9e4acad53ed9c6f1145cd86d42ffb6ee4fe0b6bc05ad658ca921124eb4 not found: ID does not exist" containerID="3020cc9e4acad53ed9c6f1145cd86d42ffb6ee4fe0b6bc05ad658ca921124eb4" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.729478 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3020cc9e4acad53ed9c6f1145cd86d42ffb6ee4fe0b6bc05ad658ca921124eb4"} err="failed to get container status \"3020cc9e4acad53ed9c6f1145cd86d42ffb6ee4fe0b6bc05ad658ca921124eb4\": rpc error: code = NotFound desc = could not find container \"3020cc9e4acad53ed9c6f1145cd86d42ffb6ee4fe0b6bc05ad658ca921124eb4\": container with ID starting with 3020cc9e4acad53ed9c6f1145cd86d42ffb6ee4fe0b6bc05ad658ca921124eb4 not found: ID does not exist" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.729490 5039 scope.go:117] "RemoveContainer" containerID="efdca119d3c9dd7c2f3bbd147286c35f1dbba09a77a04383a7563932b124c58d" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.752229 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-84b866898f-5xs7l"] Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.778129 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-84b866898f-5xs7l"] Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.811493 5039 scope.go:117] "RemoveContainer" containerID="1d442f2088c550f47ce279b79f9eda2a191a7cfb5fd4e8fd913099eb4e065b03" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.836141 5039 scope.go:117] "RemoveContainer" containerID="efdca119d3c9dd7c2f3bbd147286c35f1dbba09a77a04383a7563932b124c58d" Jan 30 13:28:49 crc kubenswrapper[5039]: E0130 13:28:49.836562 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efdca119d3c9dd7c2f3bbd147286c35f1dbba09a77a04383a7563932b124c58d\": container with ID starting with efdca119d3c9dd7c2f3bbd147286c35f1dbba09a77a04383a7563932b124c58d not found: ID does not exist" containerID="efdca119d3c9dd7c2f3bbd147286c35f1dbba09a77a04383a7563932b124c58d" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.836591 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efdca119d3c9dd7c2f3bbd147286c35f1dbba09a77a04383a7563932b124c58d"} err="failed to get container status \"efdca119d3c9dd7c2f3bbd147286c35f1dbba09a77a04383a7563932b124c58d\": rpc error: code = NotFound desc = could not find container \"efdca119d3c9dd7c2f3bbd147286c35f1dbba09a77a04383a7563932b124c58d\": container with ID starting with efdca119d3c9dd7c2f3bbd147286c35f1dbba09a77a04383a7563932b124c58d not found: ID does not exist" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.836612 5039 scope.go:117] "RemoveContainer" containerID="1d442f2088c550f47ce279b79f9eda2a191a7cfb5fd4e8fd913099eb4e065b03" Jan 30 13:28:49 crc kubenswrapper[5039]: E0130 13:28:49.836861 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d442f2088c550f47ce279b79f9eda2a191a7cfb5fd4e8fd913099eb4e065b03\": container with ID starting with 
1d442f2088c550f47ce279b79f9eda2a191a7cfb5fd4e8fd913099eb4e065b03 not found: ID does not exist" containerID="1d442f2088c550f47ce279b79f9eda2a191a7cfb5fd4e8fd913099eb4e065b03" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.836884 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d442f2088c550f47ce279b79f9eda2a191a7cfb5fd4e8fd913099eb4e065b03"} err="failed to get container status \"1d442f2088c550f47ce279b79f9eda2a191a7cfb5fd4e8fd913099eb4e065b03\": rpc error: code = NotFound desc = could not find container \"1d442f2088c550f47ce279b79f9eda2a191a7cfb5fd4e8fd913099eb4e065b03\": container with ID starting with 1d442f2088c550f47ce279b79f9eda2a191a7cfb5fd4e8fd913099eb4e065b03 not found: ID does not exist" Jan 30 13:28:50 crc kubenswrapper[5039]: I0130 13:28:50.111171 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="749976f6-833a-4563-992a-f639cb1552e0" path="/var/lib/kubelet/pods/749976f6-833a-4563-992a-f639cb1552e0/volumes" Jan 30 13:28:50 crc kubenswrapper[5039]: I0130 13:28:50.112557 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcd8c24d-b3db-41a0-ac70-d13cd3f2d663" path="/var/lib/kubelet/pods/fcd8c24d-b3db-41a0-ac70-d13cd3f2d663/volumes" Jan 30 13:28:50 crc kubenswrapper[5039]: I0130 13:28:50.256270 5039 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod3db29a95-0ed6-4366-8036-388eea4d00b6"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod3db29a95-0ed6-4366-8036-388eea4d00b6] : Timed out while waiting for systemd to remove kubepods-besteffort-pod3db29a95_0ed6_4366_8036_388eea4d00b6.slice" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.167889 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496330-vqfqj"] Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.169000 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4aa0600-fb12-4641-96a3-26cb56853bd3" containerName="ovn-controller" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.169045 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4aa0600-fb12-4641-96a3-26cb56853bd3" containerName="ovn-controller" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.169073 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3db29a95-0ed6-4366-8036-388eea4d00b6" containerName="barbican-api" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.169085 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="3db29a95-0ed6-4366-8036-388eea4d00b6" containerName="barbican-api" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.169101 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31674257-f143-40ab-97b9-dbf3153277c3" containerName="setup-container" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.169113 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="31674257-f143-40ab-97b9-dbf3153277c3" containerName="setup-container" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.169131 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="container-updater" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.169146 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="container-updater" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.169174 5039 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc88f91b-e82d-4937-ad42-d94c3d464b55" containerName="mariadb-account-create-update" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.169189 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc88f91b-e82d-4937-ad42-d94c3d464b55" containerName="mariadb-account-create-update" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.169205 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f6644cf-01f6-44cf-95d6-3626f4fa57da" containerName="proxy-httpd" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.169219 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f6644cf-01f6-44cf-95d6-3626f4fa57da" containerName="proxy-httpd" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.169250 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="container-auditor" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.169263 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="container-auditor" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.169287 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c304bfee-961f-403c-a998-de879eedf9c9" containerName="memcached" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.169302 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="c304bfee-961f-403c-a998-de879eedf9c9" containerName="memcached" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.169323 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6a7de18-5bf6-4275-b6db-f19701d07001" containerName="probe" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.169335 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6a7de18-5bf6-4275-b6db-f19701d07001" containerName="probe" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.169356 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2090e8f7-2d03-4d3e-914b-6672655d35be" containerName="nova-api-api" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.169368 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="2090e8f7-2d03-4d3e-914b-6672655d35be" containerName="nova-api-api" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.169386 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89cd9fbd-ac74-45c9-bdd8-fe3268a9147e" containerName="glance-log" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.169398 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="89cd9fbd-ac74-45c9-bdd8-fe3268a9147e" containerName="glance-log" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.169421 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="object-expirer" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.169438 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="object-expirer" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.169460 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f6644cf-01f6-44cf-95d6-3626f4fa57da" containerName="ceilometer-central-agent" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.169476 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f6644cf-01f6-44cf-95d6-3626f4fa57da" containerName="ceilometer-central-agent" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 
13:30:00.169496 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2125aae4-cb1a-4329-ba0a-68cc3661427b" containerName="barbican-api" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.169512 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="2125aae4-cb1a-4329-ba0a-68cc3661427b" containerName="barbican-api" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.169533 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2090e8f7-2d03-4d3e-914b-6672655d35be" containerName="nova-api-log" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.169549 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="2090e8f7-2d03-4d3e-914b-6672655d35be" containerName="nova-api-log" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.169567 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f7023ce-3b22-4301-8535-b51dae5ffc85" containerName="nova-cell0-conductor-conductor" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.169583 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f7023ce-3b22-4301-8535-b51dae5ffc85" containerName="nova-cell0-conductor-conductor" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.169603 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="498ddd50-96b8-491c-92e9-8c98bc7fa123" containerName="placement-api" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.169618 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="498ddd50-96b8-491c-92e9-8c98bc7fa123" containerName="placement-api" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.169634 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f6644cf-01f6-44cf-95d6-3626f4fa57da" containerName="ceilometer-notification-agent" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.169649 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f6644cf-01f6-44cf-95d6-3626f4fa57da" containerName="ceilometer-notification-agent" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.169680 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48be0b7f-4cb1-4c00-851a-7078ed9ccab0" containerName="barbican-worker" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.169695 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="48be0b7f-4cb1-4c00-851a-7078ed9ccab0" containerName="barbican-worker" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.169719 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcd8c24d-b3db-41a0-ac70-d13cd3f2d663" containerName="barbican-worker-log" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.169734 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcd8c24d-b3db-41a0-ac70-d13cd3f2d663" containerName="barbican-worker-log" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.169762 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48be0b7f-4cb1-4c00-851a-7078ed9ccab0" containerName="barbican-worker-log" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.169777 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="48be0b7f-4cb1-4c00-851a-7078ed9ccab0" containerName="barbican-worker-log" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.169802 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31674257-f143-40ab-97b9-dbf3153277c3" containerName="rabbitmq" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.169817 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="31674257-f143-40ab-97b9-dbf3153277c3" 
containerName="rabbitmq" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.169845 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3db29a95-0ed6-4366-8036-388eea4d00b6" containerName="barbican-api-log" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.169858 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="3db29a95-0ed6-4366-8036-388eea4d00b6" containerName="barbican-api-log" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.169875 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4f0006e-6034-4c12-a12e-f2d7767a77cb" containerName="kube-state-metrics" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.169889 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4f0006e-6034-4c12-a12e-f2d7767a77cb" containerName="kube-state-metrics" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.169911 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="953eeac5-b943-4036-be33-58eb347c04ef" containerName="ovsdb-server" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.169927 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="953eeac5-b943-4036-be33-58eb347c04ef" containerName="ovsdb-server" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.169946 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f6644cf-01f6-44cf-95d6-3626f4fa57da" containerName="sg-core" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.169961 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f6644cf-01f6-44cf-95d6-3626f4fa57da" containerName="sg-core" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.169985 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="157fc077-2a87-4a57-b9a1-728b9acba2a1" containerName="proxy-httpd" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.170004 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="157fc077-2a87-4a57-b9a1-728b9acba2a1" containerName="proxy-httpd" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.171859 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="157fc077-2a87-4a57-b9a1-728b9acba2a1" containerName="proxy-server" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.171885 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="157fc077-2a87-4a57-b9a1-728b9acba2a1" containerName="proxy-server" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.171910 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcd8c24d-b3db-41a0-ac70-d13cd3f2d663" containerName="barbican-worker" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.171923 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcd8c24d-b3db-41a0-ac70-d13cd3f2d663" containerName="barbican-worker" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.171947 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="798d080c-2565-4410-9cda-220d1154b8de" containerName="nova-cell1-conductor-conductor" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.171959 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="798d080c-2565-4410-9cda-220d1154b8de" containerName="nova-cell1-conductor-conductor" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.171985 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="106954f5-3ea7-4564-8479-407ef02320b7" containerName="rabbitmq" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.171997 5039 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="106954f5-3ea7-4564-8479-407ef02320b7" containerName="rabbitmq" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.172038 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="account-reaper" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.172050 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="account-reaper" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.172062 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="container-replicator" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.172073 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="container-replicator" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.172096 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="953eeac5-b943-4036-be33-58eb347c04ef" containerName="ovsdb-server-init" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.172140 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="953eeac5-b943-4036-be33-58eb347c04ef" containerName="ovsdb-server-init" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.172163 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc1469b7-cba0-47a5-b2cb-02e374f749da" containerName="neutron-httpd" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.172179 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc1469b7-cba0-47a5-b2cb-02e374f749da" containerName="neutron-httpd" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.172196 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="object-replicator" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.172209 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="object-replicator" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.172228 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="object-updater" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.172239 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="object-updater" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.172260 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="object-server" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.172273 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="object-server" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.172295 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c7913a5-4818-4edd-a390-61d79c64a30b" containerName="openstack-network-exporter" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.172307 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c7913a5-4818-4edd-a390-61d79c64a30b" containerName="openstack-network-exporter" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.172326 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="account-server" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.172337 5039 
state_mem.go:107] "Deleted CPUSet assignment" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="account-server" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.172358 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffe59186-82c9-4825-98af-a345318afc40" containerName="mysql-bootstrap" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.172369 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffe59186-82c9-4825-98af-a345318afc40" containerName="mysql-bootstrap" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.172388 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2125aae4-cb1a-4329-ba0a-68cc3661427b" containerName="barbican-api-log" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.172401 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="2125aae4-cb1a-4329-ba0a-68cc3661427b" containerName="barbican-api-log" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.172418 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="container-server" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.172430 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="container-server" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.172450 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="498ddd50-96b8-491c-92e9-8c98bc7fa123" containerName="placement-log" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.172464 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="498ddd50-96b8-491c-92e9-8c98bc7fa123" containerName="placement-log" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.172487 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75292c04-e484-4def-a16f-2d703409e49e" containerName="glance-httpd" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.172502 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="75292c04-e484-4def-a16f-2d703409e49e" containerName="glance-httpd" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.172522 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89cd9fbd-ac74-45c9-bdd8-fe3268a9147e" containerName="glance-httpd" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.172538 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="89cd9fbd-ac74-45c9-bdd8-fe3268a9147e" containerName="glance-httpd" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.172557 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="account-replicator" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.172570 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="account-replicator" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.172592 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="106954f5-3ea7-4564-8479-407ef02320b7" containerName="setup-container" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.172606 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="106954f5-3ea7-4564-8479-407ef02320b7" containerName="setup-container" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.172622 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="account-auditor" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 
13:30:00.172634 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="account-auditor" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.172650 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="rsync" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.172662 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="rsync" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.172684 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffe59186-82c9-4825-98af-a345318afc40" containerName="galera" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.172695 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffe59186-82c9-4825-98af-a345318afc40" containerName="galera" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.172713 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="object-auditor" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.172724 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="object-auditor" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.172746 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="swift-recon-cron" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.172759 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="swift-recon-cron" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.172784 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75292c04-e484-4def-a16f-2d703409e49e" containerName="glance-log" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.172797 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="75292c04-e484-4def-a16f-2d703409e49e" containerName="glance-log" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.172817 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="266dbee0-3c74-4820-8165-1955c6ca832a" containerName="nova-scheduler-scheduler" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.172829 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="266dbee0-3c74-4820-8165-1955c6ca832a" containerName="nova-scheduler-scheduler" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.172850 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6a7de18-5bf6-4275-b6db-f19701d07001" containerName="cinder-scheduler" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.172861 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6a7de18-5bf6-4275-b6db-f19701d07001" containerName="cinder-scheduler" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.172882 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c7913a5-4818-4edd-a390-61d79c64a30b" containerName="ovn-northd" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.172894 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c7913a5-4818-4edd-a390-61d79c64a30b" containerName="ovn-northd" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.172915 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60ae3d16-d381-4891-901f-e2d07d3a7720" containerName="keystone-api" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.172931 5039 
state_mem.go:107] "Deleted CPUSet assignment" podUID="60ae3d16-d381-4891-901f-e2d07d3a7720" containerName="keystone-api" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.172949 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03ea6fff-3bc2-4830-b1f5-53d20cd2a801" containerName="nova-metadata-metadata" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.172964 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="03ea6fff-3bc2-4830-b1f5-53d20cd2a801" containerName="nova-metadata-metadata" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.172987 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="953eeac5-b943-4036-be33-58eb347c04ef" containerName="ovs-vswitchd" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.173005 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="953eeac5-b943-4036-be33-58eb347c04ef" containerName="ovs-vswitchd" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.173058 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc1469b7-cba0-47a5-b2cb-02e374f749da" containerName="neutron-api" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.173071 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc1469b7-cba0-47a5-b2cb-02e374f749da" containerName="neutron-api" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.173088 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03ea6fff-3bc2-4830-b1f5-53d20cd2a801" containerName="nova-metadata-log" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.173100 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="03ea6fff-3bc2-4830-b1f5-53d20cd2a801" containerName="nova-metadata-log" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.173116 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="749976f6-833a-4563-992a-f639cb1552e0" containerName="barbican-keystone-listener" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.173129 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="749976f6-833a-4563-992a-f639cb1552e0" containerName="barbican-keystone-listener" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.173146 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="749976f6-833a-4563-992a-f639cb1552e0" containerName="barbican-keystone-listener-log" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.173159 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="749976f6-833a-4563-992a-f639cb1552e0" containerName="barbican-keystone-listener-log" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.173466 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="749976f6-833a-4563-992a-f639cb1552e0" containerName="barbican-keystone-listener-log" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.173487 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="swift-recon-cron" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.173509 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="89cd9fbd-ac74-45c9-bdd8-fe3268a9147e" containerName="glance-log" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.173529 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="object-updater" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.173552 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f6644cf-01f6-44cf-95d6-3626f4fa57da" 
containerName="ceilometer-central-agent" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.173575 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="48be0b7f-4cb1-4c00-851a-7078ed9ccab0" containerName="barbican-worker" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.173595 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="03ea6fff-3bc2-4830-b1f5-53d20cd2a801" containerName="nova-metadata-log" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.173609 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="749976f6-833a-4563-992a-f639cb1552e0" containerName="barbican-keystone-listener" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.173631 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="2090e8f7-2d03-4d3e-914b-6672655d35be" containerName="nova-api-log" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.173650 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f6644cf-01f6-44cf-95d6-3626f4fa57da" containerName="sg-core" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.173668 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc88f91b-e82d-4937-ad42-d94c3d464b55" containerName="mariadb-account-create-update" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.173685 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="object-auditor" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.173706 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="48be0b7f-4cb1-4c00-851a-7078ed9ccab0" containerName="barbican-worker-log" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.173728 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcd8c24d-b3db-41a0-ac70-d13cd3f2d663" containerName="barbican-worker-log" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.173742 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f6644cf-01f6-44cf-95d6-3626f4fa57da" containerName="ceilometer-notification-agent" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.173757 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="container-server" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.173769 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="60ae3d16-d381-4891-901f-e2d07d3a7720" containerName="keystone-api" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.173787 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="498ddd50-96b8-491c-92e9-8c98bc7fa123" containerName="placement-api" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.173801 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="object-replicator" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.173812 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="rsync" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.173834 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="2125aae4-cb1a-4329-ba0a-68cc3661427b" containerName="barbican-api-log" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.173851 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="account-auditor" Jan 30 13:30:00 crc 
kubenswrapper[5039]: I0130 13:30:00.173867 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="498ddd50-96b8-491c-92e9-8c98bc7fa123" containerName="placement-log" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.173887 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffe59186-82c9-4825-98af-a345318afc40" containerName="galera" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.173901 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcd8c24d-b3db-41a0-ac70-d13cd3f2d663" containerName="barbican-worker" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.173921 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="object-expirer" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.173939 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6a7de18-5bf6-4275-b6db-f19701d07001" containerName="cinder-scheduler" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.173957 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="157fc077-2a87-4a57-b9a1-728b9acba2a1" containerName="proxy-httpd" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.173982 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="3db29a95-0ed6-4366-8036-388eea4d00b6" containerName="barbican-api-log" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.174001 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="266dbee0-3c74-4820-8165-1955c6ca832a" containerName="nova-scheduler-scheduler" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.174044 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4f0006e-6034-4c12-a12e-f2d7767a77cb" containerName="kube-state-metrics" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.174060 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c7913a5-4818-4edd-a390-61d79c64a30b" containerName="ovn-northd" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.174073 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="container-auditor" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.174090 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="89cd9fbd-ac74-45c9-bdd8-fe3268a9147e" containerName="glance-httpd" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.174110 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="75292c04-e484-4def-a16f-2d703409e49e" containerName="glance-httpd" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.174131 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="c304bfee-961f-403c-a998-de879eedf9c9" containerName="memcached" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.174151 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="157fc077-2a87-4a57-b9a1-728b9acba2a1" containerName="proxy-server" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.174167 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="2125aae4-cb1a-4329-ba0a-68cc3661427b" containerName="barbican-api" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.174184 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="953eeac5-b943-4036-be33-58eb347c04ef" containerName="ovs-vswitchd" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.174200 5039 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="75292c04-e484-4def-a16f-2d703409e49e" containerName="glance-log" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.174213 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="3db29a95-0ed6-4366-8036-388eea4d00b6" containerName="barbican-api" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.174226 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c7913a5-4818-4edd-a390-61d79c64a30b" containerName="openstack-network-exporter" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.174245 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="container-updater" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.174260 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="798d080c-2565-4410-9cda-220d1154b8de" containerName="nova-cell1-conductor-conductor" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.174275 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="account-replicator" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.174291 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4aa0600-fb12-4641-96a3-26cb56853bd3" containerName="ovn-controller" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.174305 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc1469b7-cba0-47a5-b2cb-02e374f749da" containerName="neutron-httpd" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.174325 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="31674257-f143-40ab-97b9-dbf3153277c3" containerName="rabbitmq" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.174340 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="container-replicator" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.174355 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="account-reaper" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.174367 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="account-server" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.174383 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc1469b7-cba0-47a5-b2cb-02e374f749da" containerName="neutron-api" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.174399 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="106954f5-3ea7-4564-8479-407ef02320b7" containerName="rabbitmq" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.174414 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6a7de18-5bf6-4275-b6db-f19701d07001" containerName="probe" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.174433 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f6644cf-01f6-44cf-95d6-3626f4fa57da" containerName="proxy-httpd" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.174446 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="03ea6fff-3bc2-4830-b1f5-53d20cd2a801" containerName="nova-metadata-metadata" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.174464 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="953eeac5-b943-4036-be33-58eb347c04ef" containerName="ovsdb-server" Jan 30 13:30:00 crc 
kubenswrapper[5039]: I0130 13:30:00.174478 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="object-server" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.174494 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f7023ce-3b22-4301-8535-b51dae5ffc85" containerName="nova-cell0-conductor-conductor" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.174513 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="2090e8f7-2d03-4d3e-914b-6672655d35be" containerName="nova-api-api" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.175278 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496330-vqfqj" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.178618 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.178616 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.193747 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496330-vqfqj"] Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.342146 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c73af4d7-581b-4f6b-890c-74d614dc93fb-secret-volume\") pod \"collect-profiles-29496330-vqfqj\" (UID: \"c73af4d7-581b-4f6b-890c-74d614dc93fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496330-vqfqj" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.342278 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c73af4d7-581b-4f6b-890c-74d614dc93fb-config-volume\") pod \"collect-profiles-29496330-vqfqj\" (UID: \"c73af4d7-581b-4f6b-890c-74d614dc93fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496330-vqfqj" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.342349 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvnbl\" (UniqueName: \"kubernetes.io/projected/c73af4d7-581b-4f6b-890c-74d614dc93fb-kube-api-access-cvnbl\") pod \"collect-profiles-29496330-vqfqj\" (UID: \"c73af4d7-581b-4f6b-890c-74d614dc93fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496330-vqfqj" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.444115 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvnbl\" (UniqueName: \"kubernetes.io/projected/c73af4d7-581b-4f6b-890c-74d614dc93fb-kube-api-access-cvnbl\") pod \"collect-profiles-29496330-vqfqj\" (UID: \"c73af4d7-581b-4f6b-890c-74d614dc93fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496330-vqfqj" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.444177 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c73af4d7-581b-4f6b-890c-74d614dc93fb-secret-volume\") pod \"collect-profiles-29496330-vqfqj\" (UID: \"c73af4d7-581b-4f6b-890c-74d614dc93fb\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29496330-vqfqj" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.444241 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c73af4d7-581b-4f6b-890c-74d614dc93fb-config-volume\") pod \"collect-profiles-29496330-vqfqj\" (UID: \"c73af4d7-581b-4f6b-890c-74d614dc93fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496330-vqfqj" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.445412 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c73af4d7-581b-4f6b-890c-74d614dc93fb-config-volume\") pod \"collect-profiles-29496330-vqfqj\" (UID: \"c73af4d7-581b-4f6b-890c-74d614dc93fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496330-vqfqj" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.462686 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c73af4d7-581b-4f6b-890c-74d614dc93fb-secret-volume\") pod \"collect-profiles-29496330-vqfqj\" (UID: \"c73af4d7-581b-4f6b-890c-74d614dc93fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496330-vqfqj" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.475873 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvnbl\" (UniqueName: \"kubernetes.io/projected/c73af4d7-581b-4f6b-890c-74d614dc93fb-kube-api-access-cvnbl\") pod \"collect-profiles-29496330-vqfqj\" (UID: \"c73af4d7-581b-4f6b-890c-74d614dc93fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496330-vqfqj" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.502055 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496330-vqfqj" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.963498 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496330-vqfqj"] Jan 30 13:30:01 crc kubenswrapper[5039]: I0130 13:30:01.426315 5039 generic.go:334] "Generic (PLEG): container finished" podID="c73af4d7-581b-4f6b-890c-74d614dc93fb" containerID="f241cb8d1dd996c9e57bccdcdce89c87ca1996b8b47563e8da1c4d69e452b466" exitCode=0 Jan 30 13:30:01 crc kubenswrapper[5039]: I0130 13:30:01.426449 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496330-vqfqj" event={"ID":"c73af4d7-581b-4f6b-890c-74d614dc93fb","Type":"ContainerDied","Data":"f241cb8d1dd996c9e57bccdcdce89c87ca1996b8b47563e8da1c4d69e452b466"} Jan 30 13:30:01 crc kubenswrapper[5039]: I0130 13:30:01.426669 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496330-vqfqj" event={"ID":"c73af4d7-581b-4f6b-890c-74d614dc93fb","Type":"ContainerStarted","Data":"f42d7a7533d0f6b3ecd35802d641c8aed95cab65fca7dea368e7e0e86f762f6c"} Jan 30 13:30:02 crc kubenswrapper[5039]: I0130 13:30:02.828562 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496330-vqfqj" Jan 30 13:30:02 crc kubenswrapper[5039]: I0130 13:30:02.980348 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c73af4d7-581b-4f6b-890c-74d614dc93fb-config-volume\") pod \"c73af4d7-581b-4f6b-890c-74d614dc93fb\" (UID: \"c73af4d7-581b-4f6b-890c-74d614dc93fb\") " Jan 30 13:30:02 crc kubenswrapper[5039]: I0130 13:30:02.980516 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c73af4d7-581b-4f6b-890c-74d614dc93fb-secret-volume\") pod \"c73af4d7-581b-4f6b-890c-74d614dc93fb\" (UID: \"c73af4d7-581b-4f6b-890c-74d614dc93fb\") " Jan 30 13:30:02 crc kubenswrapper[5039]: I0130 13:30:02.980571 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvnbl\" (UniqueName: \"kubernetes.io/projected/c73af4d7-581b-4f6b-890c-74d614dc93fb-kube-api-access-cvnbl\") pod \"c73af4d7-581b-4f6b-890c-74d614dc93fb\" (UID: \"c73af4d7-581b-4f6b-890c-74d614dc93fb\") " Jan 30 13:30:02 crc kubenswrapper[5039]: I0130 13:30:02.980995 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c73af4d7-581b-4f6b-890c-74d614dc93fb-config-volume" (OuterVolumeSpecName: "config-volume") pod "c73af4d7-581b-4f6b-890c-74d614dc93fb" (UID: "c73af4d7-581b-4f6b-890c-74d614dc93fb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:30:02 crc kubenswrapper[5039]: I0130 13:30:02.988138 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c73af4d7-581b-4f6b-890c-74d614dc93fb-kube-api-access-cvnbl" (OuterVolumeSpecName: "kube-api-access-cvnbl") pod "c73af4d7-581b-4f6b-890c-74d614dc93fb" (UID: "c73af4d7-581b-4f6b-890c-74d614dc93fb"). InnerVolumeSpecName "kube-api-access-cvnbl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:30:02 crc kubenswrapper[5039]: I0130 13:30:02.989000 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c73af4d7-581b-4f6b-890c-74d614dc93fb-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c73af4d7-581b-4f6b-890c-74d614dc93fb" (UID: "c73af4d7-581b-4f6b-890c-74d614dc93fb"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:30:03 crc kubenswrapper[5039]: I0130 13:30:03.082502 5039 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c73af4d7-581b-4f6b-890c-74d614dc93fb-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 13:30:03 crc kubenswrapper[5039]: I0130 13:30:03.082873 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvnbl\" (UniqueName: \"kubernetes.io/projected/c73af4d7-581b-4f6b-890c-74d614dc93fb-kube-api-access-cvnbl\") on node \"crc\" DevicePath \"\"" Jan 30 13:30:03 crc kubenswrapper[5039]: I0130 13:30:03.082882 5039 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c73af4d7-581b-4f6b-890c-74d614dc93fb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 13:30:03 crc kubenswrapper[5039]: I0130 13:30:03.449647 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496330-vqfqj" event={"ID":"c73af4d7-581b-4f6b-890c-74d614dc93fb","Type":"ContainerDied","Data":"f42d7a7533d0f6b3ecd35802d641c8aed95cab65fca7dea368e7e0e86f762f6c"} Jan 30 13:30:03 crc kubenswrapper[5039]: I0130 13:30:03.449697 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f42d7a7533d0f6b3ecd35802d641c8aed95cab65fca7dea368e7e0e86f762f6c" Jan 30 13:30:03 crc kubenswrapper[5039]: I0130 13:30:03.449699 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496330-vqfqj" Jan 30 13:30:05 crc kubenswrapper[5039]: I0130 13:30:05.122058 5039 scope.go:117] "RemoveContainer" containerID="25cf01cdb2c071d0d2cb426f4f190b615179a1fcebb54e3aa81c3d4ab00fee22" Jan 30 13:30:05 crc kubenswrapper[5039]: I0130 13:30:05.170337 5039 scope.go:117] "RemoveContainer" containerID="16cee89dddde0e71b7455bb7ed94c9ec4e8236e06a37beadcd22b762c6335620" Jan 30 13:30:05 crc kubenswrapper[5039]: I0130 13:30:05.203822 5039 scope.go:117] "RemoveContainer" containerID="efda310ff742ee8493a8e0fc6890efda0722835d6cda9241536cfc113fb172f2" Jan 30 13:30:05 crc kubenswrapper[5039]: I0130 13:30:05.240229 5039 scope.go:117] "RemoveContainer" containerID="b600e0da8d676d463d065f84303ea3bc4057b43b28be76c6486575ff96cd840f" Jan 30 13:30:05 crc kubenswrapper[5039]: I0130 13:30:05.273461 5039 scope.go:117] "RemoveContainer" containerID="8b24568865345df3d71a7cdc726bd48448cee7108f22d23c7546645039b79148" Jan 30 13:30:05 crc kubenswrapper[5039]: I0130 13:30:05.314568 5039 scope.go:117] "RemoveContainer" containerID="bbdaeb50bee12a55e0d3d2183b29f6b8fcef441a7bb1acf8b322cc542a66d9bd" Jan 30 13:30:05 crc kubenswrapper[5039]: I0130 13:30:05.363764 5039 scope.go:117] "RemoveContainer" containerID="9dcd161304273d4dfafad84256c67d3029ecf6ea591168694333ca66e9319134" Jan 30 13:30:05 crc kubenswrapper[5039]: I0130 13:30:05.387993 5039 scope.go:117] "RemoveContainer" containerID="05cb537b8de9e9b4ce1d650f75dc2488156515798186af357cf0a32b2ad2804b" Jan 30 13:30:05 crc kubenswrapper[5039]: I0130 13:30:05.423146 5039 scope.go:117] "RemoveContainer" containerID="ec45b6e686c146265751fccdb2533ac5f9c69323d9a6d0f952916ad979f954d1" Jan 30 13:30:05 crc kubenswrapper[5039]: I0130 13:30:05.445128 5039 scope.go:117] "RemoveContainer" containerID="8d8841bce6ab8389a2fa557ef707e36bc0e71aa78544b18b6eafa65da2e4bd05" Jan 30 13:30:05 crc kubenswrapper[5039]: I0130 13:30:05.466576 5039 scope.go:117] "RemoveContainer" 
containerID="760372fb0dd776c0b970e49721341a32c520b7964e97722a99089b6180a26b61" Jan 30 13:30:05 crc kubenswrapper[5039]: I0130 13:30:05.490984 5039 scope.go:117] "RemoveContainer" containerID="4505d15d0f86e8e3a87500b8d5e16fa57aa802f4b277b7d3c25eee7a932f424e" Jan 30 13:30:05 crc kubenswrapper[5039]: I0130 13:30:05.528410 5039 scope.go:117] "RemoveContainer" containerID="975b00208863806579383cea7c3b8b8b32cc66e70f92441ebcf6512425326f4e" Jan 30 13:30:05 crc kubenswrapper[5039]: I0130 13:30:05.559447 5039 scope.go:117] "RemoveContainer" containerID="2d5e0686752eac791353110faabefee2e759420442637220f24a302704e06298" Jan 30 13:30:05 crc kubenswrapper[5039]: I0130 13:30:05.591308 5039 scope.go:117] "RemoveContainer" containerID="eec6e364645d2009b2be114e5e6bd46239ea6c0c9d3d3bfbaeba8ccb6b98b5f1" Jan 30 13:30:05 crc kubenswrapper[5039]: I0130 13:30:05.623052 5039 scope.go:117] "RemoveContainer" containerID="9656d71f48c907e42feabe49a92c24d49fde0d6527b5430d5b0b4e36054d1357" Jan 30 13:30:05 crc kubenswrapper[5039]: I0130 13:30:05.644554 5039 scope.go:117] "RemoveContainer" containerID="f00f04e0e2345ca5cf5de4d1e45c1d68d94f6d4efa0c8d8c72c35940af974bd8" Jan 30 13:30:05 crc kubenswrapper[5039]: I0130 13:30:05.662329 5039 scope.go:117] "RemoveContainer" containerID="e33d1f253aff15ba7372a8ad24babee9213ffb4a9177bfdc4de2deffc66c7b93" Jan 30 13:30:05 crc kubenswrapper[5039]: I0130 13:30:05.695886 5039 scope.go:117] "RemoveContainer" containerID="4549098efcbcf7f3af0666631bb63d306fe12f91f33f6fbc0f2a3afe7da8326b" Jan 30 13:30:05 crc kubenswrapper[5039]: I0130 13:30:05.717225 5039 scope.go:117] "RemoveContainer" containerID="a6bc26827e64ec19585fa637a58eb72ec4ed3e9a6ef4255f135e6416c5ba0c3b" Jan 30 13:30:05 crc kubenswrapper[5039]: I0130 13:30:05.740435 5039 scope.go:117] "RemoveContainer" containerID="771350ed2b93233e58a57b899ffff051dff84408406a23a7a766011a406b0955" Jan 30 13:30:05 crc kubenswrapper[5039]: I0130 13:30:05.774582 5039 scope.go:117] "RemoveContainer" containerID="bf1f328944ff86461f76ebef421202ae6a67438091fba41b262aba037fe0b12d" Jan 30 13:30:05 crc kubenswrapper[5039]: I0130 13:30:05.799401 5039 scope.go:117] "RemoveContainer" containerID="664d5ee50096a705bfe00ba284ecf23de58063a3e74a3c5f1b12d176c74177c9" Jan 30 13:30:05 crc kubenswrapper[5039]: I0130 13:30:05.823340 5039 scope.go:117] "RemoveContainer" containerID="1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8" Jan 30 13:30:38 crc kubenswrapper[5039]: I0130 13:30:38.289113 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:30:38 crc kubenswrapper[5039]: I0130 13:30:38.289646 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:30:41 crc kubenswrapper[5039]: I0130 13:30:41.349985 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5p2fm"] Jan 30 13:30:41 crc kubenswrapper[5039]: E0130 13:30:41.350669 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c73af4d7-581b-4f6b-890c-74d614dc93fb" containerName="collect-profiles" Jan 30 13:30:41 crc 
kubenswrapper[5039]: I0130 13:30:41.350683 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="c73af4d7-581b-4f6b-890c-74d614dc93fb" containerName="collect-profiles" Jan 30 13:30:41 crc kubenswrapper[5039]: I0130 13:30:41.350862 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="c73af4d7-581b-4f6b-890c-74d614dc93fb" containerName="collect-profiles" Jan 30 13:30:41 crc kubenswrapper[5039]: I0130 13:30:41.356366 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5p2fm" Jan 30 13:30:41 crc kubenswrapper[5039]: I0130 13:30:41.374003 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5p2fm"] Jan 30 13:30:41 crc kubenswrapper[5039]: I0130 13:30:41.502721 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa9c5565-131d-4dcf-8011-27ddb4a75042-catalog-content\") pod \"certified-operators-5p2fm\" (UID: \"aa9c5565-131d-4dcf-8011-27ddb4a75042\") " pod="openshift-marketplace/certified-operators-5p2fm" Jan 30 13:30:41 crc kubenswrapper[5039]: I0130 13:30:41.502784 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zztc9\" (UniqueName: \"kubernetes.io/projected/aa9c5565-131d-4dcf-8011-27ddb4a75042-kube-api-access-zztc9\") pod \"certified-operators-5p2fm\" (UID: \"aa9c5565-131d-4dcf-8011-27ddb4a75042\") " pod="openshift-marketplace/certified-operators-5p2fm" Jan 30 13:30:41 crc kubenswrapper[5039]: I0130 13:30:41.502966 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa9c5565-131d-4dcf-8011-27ddb4a75042-utilities\") pod \"certified-operators-5p2fm\" (UID: \"aa9c5565-131d-4dcf-8011-27ddb4a75042\") " pod="openshift-marketplace/certified-operators-5p2fm" Jan 30 13:30:41 crc kubenswrapper[5039]: I0130 13:30:41.604608 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa9c5565-131d-4dcf-8011-27ddb4a75042-catalog-content\") pod \"certified-operators-5p2fm\" (UID: \"aa9c5565-131d-4dcf-8011-27ddb4a75042\") " pod="openshift-marketplace/certified-operators-5p2fm" Jan 30 13:30:41 crc kubenswrapper[5039]: I0130 13:30:41.605140 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zztc9\" (UniqueName: \"kubernetes.io/projected/aa9c5565-131d-4dcf-8011-27ddb4a75042-kube-api-access-zztc9\") pod \"certified-operators-5p2fm\" (UID: \"aa9c5565-131d-4dcf-8011-27ddb4a75042\") " pod="openshift-marketplace/certified-operators-5p2fm" Jan 30 13:30:41 crc kubenswrapper[5039]: I0130 13:30:41.605194 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa9c5565-131d-4dcf-8011-27ddb4a75042-utilities\") pod \"certified-operators-5p2fm\" (UID: \"aa9c5565-131d-4dcf-8011-27ddb4a75042\") " pod="openshift-marketplace/certified-operators-5p2fm" Jan 30 13:30:41 crc kubenswrapper[5039]: I0130 13:30:41.605429 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa9c5565-131d-4dcf-8011-27ddb4a75042-utilities\") pod \"certified-operators-5p2fm\" (UID: \"aa9c5565-131d-4dcf-8011-27ddb4a75042\") " 
pod="openshift-marketplace/certified-operators-5p2fm" Jan 30 13:30:41 crc kubenswrapper[5039]: I0130 13:30:41.605099 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa9c5565-131d-4dcf-8011-27ddb4a75042-catalog-content\") pod \"certified-operators-5p2fm\" (UID: \"aa9c5565-131d-4dcf-8011-27ddb4a75042\") " pod="openshift-marketplace/certified-operators-5p2fm" Jan 30 13:30:41 crc kubenswrapper[5039]: I0130 13:30:41.628243 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zztc9\" (UniqueName: \"kubernetes.io/projected/aa9c5565-131d-4dcf-8011-27ddb4a75042-kube-api-access-zztc9\") pod \"certified-operators-5p2fm\" (UID: \"aa9c5565-131d-4dcf-8011-27ddb4a75042\") " pod="openshift-marketplace/certified-operators-5p2fm" Jan 30 13:30:41 crc kubenswrapper[5039]: I0130 13:30:41.680769 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5p2fm" Jan 30 13:30:42 crc kubenswrapper[5039]: I0130 13:30:42.152489 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5p2fm"] Jan 30 13:30:42 crc kubenswrapper[5039]: W0130 13:30:42.167760 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa9c5565_131d_4dcf_8011_27ddb4a75042.slice/crio-130fa5b59ecec1a8ce8aae8e4d1dbc60b6121d8c971d8efe9d5d11f2b4a1270b WatchSource:0}: Error finding container 130fa5b59ecec1a8ce8aae8e4d1dbc60b6121d8c971d8efe9d5d11f2b4a1270b: Status 404 returned error can't find the container with id 130fa5b59ecec1a8ce8aae8e4d1dbc60b6121d8c971d8efe9d5d11f2b4a1270b Jan 30 13:30:42 crc kubenswrapper[5039]: I0130 13:30:42.324485 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5p2fm" event={"ID":"aa9c5565-131d-4dcf-8011-27ddb4a75042","Type":"ContainerStarted","Data":"130fa5b59ecec1a8ce8aae8e4d1dbc60b6121d8c971d8efe9d5d11f2b4a1270b"} Jan 30 13:30:43 crc kubenswrapper[5039]: I0130 13:30:43.337721 5039 generic.go:334] "Generic (PLEG): container finished" podID="aa9c5565-131d-4dcf-8011-27ddb4a75042" containerID="c030f1d9322f864f05c48a2750d82acac40eaa3601bf698315b575c6cf541162" exitCode=0 Jan 30 13:30:43 crc kubenswrapper[5039]: I0130 13:30:43.337779 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5p2fm" event={"ID":"aa9c5565-131d-4dcf-8011-27ddb4a75042","Type":"ContainerDied","Data":"c030f1d9322f864f05c48a2750d82acac40eaa3601bf698315b575c6cf541162"} Jan 30 13:30:45 crc kubenswrapper[5039]: I0130 13:30:45.358212 5039 generic.go:334] "Generic (PLEG): container finished" podID="aa9c5565-131d-4dcf-8011-27ddb4a75042" containerID="46cb105935083d8c17d6984a0ef4f2eaf1cb004f62be73f89433543017de14bf" exitCode=0 Jan 30 13:30:45 crc kubenswrapper[5039]: I0130 13:30:45.358277 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5p2fm" event={"ID":"aa9c5565-131d-4dcf-8011-27ddb4a75042","Type":"ContainerDied","Data":"46cb105935083d8c17d6984a0ef4f2eaf1cb004f62be73f89433543017de14bf"} Jan 30 13:30:46 crc kubenswrapper[5039]: I0130 13:30:46.369960 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5p2fm" event={"ID":"aa9c5565-131d-4dcf-8011-27ddb4a75042","Type":"ContainerStarted","Data":"971a0079615470a01b2606810c4a201044af5568c3a63d7e1cde62cce9841cad"} Jan 
30 13:30:46 crc kubenswrapper[5039]: I0130 13:30:46.402252 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5p2fm" podStartSLOduration=2.735697936 podStartE2EDuration="5.402233317s" podCreationTimestamp="2026-01-30 13:30:41 +0000 UTC" firstStartedPulling="2026-01-30 13:30:43.339234835 +0000 UTC m=+1607.999916082" lastFinishedPulling="2026-01-30 13:30:46.005770196 +0000 UTC m=+1610.666451463" observedRunningTime="2026-01-30 13:30:46.397532633 +0000 UTC m=+1611.058213910" watchObservedRunningTime="2026-01-30 13:30:46.402233317 +0000 UTC m=+1611.062914544" Jan 30 13:30:51 crc kubenswrapper[5039]: I0130 13:30:51.681699 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5p2fm" Jan 30 13:30:51 crc kubenswrapper[5039]: I0130 13:30:51.682065 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5p2fm" Jan 30 13:30:51 crc kubenswrapper[5039]: I0130 13:30:51.722179 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5p2fm" Jan 30 13:30:52 crc kubenswrapper[5039]: I0130 13:30:52.491274 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5p2fm" Jan 30 13:30:52 crc kubenswrapper[5039]: I0130 13:30:52.543094 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5p2fm"] Jan 30 13:30:54 crc kubenswrapper[5039]: I0130 13:30:54.438409 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5p2fm" podUID="aa9c5565-131d-4dcf-8011-27ddb4a75042" containerName="registry-server" containerID="cri-o://971a0079615470a01b2606810c4a201044af5568c3a63d7e1cde62cce9841cad" gracePeriod=2 Jan 30 13:30:55 crc kubenswrapper[5039]: I0130 13:30:55.453975 5039 generic.go:334] "Generic (PLEG): container finished" podID="aa9c5565-131d-4dcf-8011-27ddb4a75042" containerID="971a0079615470a01b2606810c4a201044af5568c3a63d7e1cde62cce9841cad" exitCode=0 Jan 30 13:30:55 crc kubenswrapper[5039]: I0130 13:30:55.454133 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5p2fm" event={"ID":"aa9c5565-131d-4dcf-8011-27ddb4a75042","Type":"ContainerDied","Data":"971a0079615470a01b2606810c4a201044af5568c3a63d7e1cde62cce9841cad"} Jan 30 13:30:56 crc kubenswrapper[5039]: I0130 13:30:56.402876 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5p2fm" Jan 30 13:30:56 crc kubenswrapper[5039]: I0130 13:30:56.483678 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5p2fm" event={"ID":"aa9c5565-131d-4dcf-8011-27ddb4a75042","Type":"ContainerDied","Data":"130fa5b59ecec1a8ce8aae8e4d1dbc60b6121d8c971d8efe9d5d11f2b4a1270b"} Jan 30 13:30:56 crc kubenswrapper[5039]: I0130 13:30:56.483749 5039 scope.go:117] "RemoveContainer" containerID="971a0079615470a01b2606810c4a201044af5568c3a63d7e1cde62cce9841cad" Jan 30 13:30:56 crc kubenswrapper[5039]: I0130 13:30:56.483916 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5p2fm" Jan 30 13:30:56 crc kubenswrapper[5039]: I0130 13:30:56.503383 5039 scope.go:117] "RemoveContainer" containerID="46cb105935083d8c17d6984a0ef4f2eaf1cb004f62be73f89433543017de14bf" Jan 30 13:30:56 crc kubenswrapper[5039]: I0130 13:30:56.520210 5039 scope.go:117] "RemoveContainer" containerID="c030f1d9322f864f05c48a2750d82acac40eaa3601bf698315b575c6cf541162" Jan 30 13:30:56 crc kubenswrapper[5039]: I0130 13:30:56.561463 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zztc9\" (UniqueName: \"kubernetes.io/projected/aa9c5565-131d-4dcf-8011-27ddb4a75042-kube-api-access-zztc9\") pod \"aa9c5565-131d-4dcf-8011-27ddb4a75042\" (UID: \"aa9c5565-131d-4dcf-8011-27ddb4a75042\") " Jan 30 13:30:56 crc kubenswrapper[5039]: I0130 13:30:56.561525 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa9c5565-131d-4dcf-8011-27ddb4a75042-catalog-content\") pod \"aa9c5565-131d-4dcf-8011-27ddb4a75042\" (UID: \"aa9c5565-131d-4dcf-8011-27ddb4a75042\") " Jan 30 13:30:56 crc kubenswrapper[5039]: I0130 13:30:56.561620 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa9c5565-131d-4dcf-8011-27ddb4a75042-utilities\") pod \"aa9c5565-131d-4dcf-8011-27ddb4a75042\" (UID: \"aa9c5565-131d-4dcf-8011-27ddb4a75042\") " Jan 30 13:30:56 crc kubenswrapper[5039]: I0130 13:30:56.562513 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa9c5565-131d-4dcf-8011-27ddb4a75042-utilities" (OuterVolumeSpecName: "utilities") pod "aa9c5565-131d-4dcf-8011-27ddb4a75042" (UID: "aa9c5565-131d-4dcf-8011-27ddb4a75042"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:30:56 crc kubenswrapper[5039]: I0130 13:30:56.568186 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa9c5565-131d-4dcf-8011-27ddb4a75042-kube-api-access-zztc9" (OuterVolumeSpecName: "kube-api-access-zztc9") pod "aa9c5565-131d-4dcf-8011-27ddb4a75042" (UID: "aa9c5565-131d-4dcf-8011-27ddb4a75042"). InnerVolumeSpecName "kube-api-access-zztc9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:30:56 crc kubenswrapper[5039]: I0130 13:30:56.663505 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa9c5565-131d-4dcf-8011-27ddb4a75042-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 13:30:56 crc kubenswrapper[5039]: I0130 13:30:56.663541 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zztc9\" (UniqueName: \"kubernetes.io/projected/aa9c5565-131d-4dcf-8011-27ddb4a75042-kube-api-access-zztc9\") on node \"crc\" DevicePath \"\"" Jan 30 13:30:57 crc kubenswrapper[5039]: I0130 13:30:57.447529 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa9c5565-131d-4dcf-8011-27ddb4a75042-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aa9c5565-131d-4dcf-8011-27ddb4a75042" (UID: "aa9c5565-131d-4dcf-8011-27ddb4a75042"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:30:57 crc kubenswrapper[5039]: I0130 13:30:57.476594 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa9c5565-131d-4dcf-8011-27ddb4a75042-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:30:57 crc kubenswrapper[5039]: I0130 13:30:57.730397 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5p2fm"] Jan 30 13:30:57 crc kubenswrapper[5039]: I0130 13:30:57.736790 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5p2fm"] Jan 30 13:30:58 crc kubenswrapper[5039]: I0130 13:30:58.102897 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa9c5565-131d-4dcf-8011-27ddb4a75042" path="/var/lib/kubelet/pods/aa9c5565-131d-4dcf-8011-27ddb4a75042/volumes" Jan 30 13:31:06 crc kubenswrapper[5039]: I0130 13:31:06.227400 5039 scope.go:117] "RemoveContainer" containerID="533fafe6060d09ba006c9182d3c9f5153a3c906bca0a32f7b82bb784658a9255" Jan 30 13:31:06 crc kubenswrapper[5039]: I0130 13:31:06.267917 5039 scope.go:117] "RemoveContainer" containerID="20774dc7b8e4c0dc174586131c171b6d7af1959fda8becdffd9b6c9f4c9f2acb" Jan 30 13:31:06 crc kubenswrapper[5039]: I0130 13:31:06.296367 5039 scope.go:117] "RemoveContainer" containerID="2c0c2c9d314f9104b3729e9a4030c23a380582df4ca44aabf55bf70d7cba6fb2" Jan 30 13:31:06 crc kubenswrapper[5039]: I0130 13:31:06.318927 5039 scope.go:117] "RemoveContainer" containerID="bed25391781705ccade32eda966d6187570341d1379ade310903553ea440defb" Jan 30 13:31:06 crc kubenswrapper[5039]: I0130 13:31:06.375949 5039 scope.go:117] "RemoveContainer" containerID="e15c323864de83a51ac376f7f5979fb834dbfcc5fa3c9479affae05a54142583" Jan 30 13:31:06 crc kubenswrapper[5039]: I0130 13:31:06.409917 5039 scope.go:117] "RemoveContainer" containerID="704e147f78336eb631ac3800ed217ffcbe20db123d823ef0e1719ac12126d745" Jan 30 13:31:06 crc kubenswrapper[5039]: I0130 13:31:06.443675 5039 scope.go:117] "RemoveContainer" containerID="f4c003e8a7f5ebfabd605d99731134e83d8fca36d572bc03c9d6fbb34aae99e7" Jan 30 13:31:06 crc kubenswrapper[5039]: I0130 13:31:06.496847 5039 scope.go:117] "RemoveContainer" containerID="1da688d2a2bc28ab6de19b1657530aefb8ba12959416725f5817672407aec6f7" Jan 30 13:31:06 crc kubenswrapper[5039]: I0130 13:31:06.521997 5039 scope.go:117] "RemoveContainer" containerID="50c2ec4e9a81ee2cd56dca014a68592f8d98266039e5400268b512200046f9a3" Jan 30 13:31:06 crc kubenswrapper[5039]: I0130 13:31:06.551516 5039 scope.go:117] "RemoveContainer" containerID="199c8cec8c222bfcceace6b75632fb6697662b7f6c6301058c03c2e78d81eeb4" Jan 30 13:31:06 crc kubenswrapper[5039]: I0130 13:31:06.604824 5039 scope.go:117] "RemoveContainer" containerID="8b126852d3edec7ef0aa53bbaf5f2c922087fa65ad549081b70e0b7b305feab3" Jan 30 13:31:06 crc kubenswrapper[5039]: I0130 13:31:06.638469 5039 scope.go:117] "RemoveContainer" containerID="e53bb2617673a6a127068d954f3431e0eac803d59302afc36e75b077f55f4629" Jan 30 13:31:07 crc kubenswrapper[5039]: I0130 13:31:07.743060 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:31:07 crc kubenswrapper[5039]: I0130 13:31:07.743585 5039 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:31:10 crc kubenswrapper[5039]: I0130 13:31:10.849536 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sl92t"] Jan 30 13:31:10 crc kubenswrapper[5039]: E0130 13:31:10.850625 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa9c5565-131d-4dcf-8011-27ddb4a75042" containerName="extract-utilities" Jan 30 13:31:10 crc kubenswrapper[5039]: I0130 13:31:10.850656 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa9c5565-131d-4dcf-8011-27ddb4a75042" containerName="extract-utilities" Jan 30 13:31:10 crc kubenswrapper[5039]: E0130 13:31:10.850686 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa9c5565-131d-4dcf-8011-27ddb4a75042" containerName="extract-content" Jan 30 13:31:10 crc kubenswrapper[5039]: I0130 13:31:10.850699 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa9c5565-131d-4dcf-8011-27ddb4a75042" containerName="extract-content" Jan 30 13:31:10 crc kubenswrapper[5039]: E0130 13:31:10.850720 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa9c5565-131d-4dcf-8011-27ddb4a75042" containerName="registry-server" Jan 30 13:31:10 crc kubenswrapper[5039]: I0130 13:31:10.850732 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa9c5565-131d-4dcf-8011-27ddb4a75042" containerName="registry-server" Jan 30 13:31:10 crc kubenswrapper[5039]: I0130 13:31:10.850978 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa9c5565-131d-4dcf-8011-27ddb4a75042" containerName="registry-server" Jan 30 13:31:10 crc kubenswrapper[5039]: I0130 13:31:10.853131 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sl92t" Jan 30 13:31:10 crc kubenswrapper[5039]: I0130 13:31:10.885157 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sl92t"] Jan 30 13:31:10 crc kubenswrapper[5039]: I0130 13:31:10.985990 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eedd8159-2729-4f5c-bbbc-1a08154af011-utilities\") pod \"redhat-marketplace-sl92t\" (UID: \"eedd8159-2729-4f5c-bbbc-1a08154af011\") " pod="openshift-marketplace/redhat-marketplace-sl92t" Jan 30 13:31:10 crc kubenswrapper[5039]: I0130 13:31:10.986117 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eedd8159-2729-4f5c-bbbc-1a08154af011-catalog-content\") pod \"redhat-marketplace-sl92t\" (UID: \"eedd8159-2729-4f5c-bbbc-1a08154af011\") " pod="openshift-marketplace/redhat-marketplace-sl92t" Jan 30 13:31:10 crc kubenswrapper[5039]: I0130 13:31:10.986156 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rq6b\" (UniqueName: \"kubernetes.io/projected/eedd8159-2729-4f5c-bbbc-1a08154af011-kube-api-access-8rq6b\") pod \"redhat-marketplace-sl92t\" (UID: \"eedd8159-2729-4f5c-bbbc-1a08154af011\") " pod="openshift-marketplace/redhat-marketplace-sl92t" Jan 30 13:31:11 crc kubenswrapper[5039]: I0130 13:31:11.088711 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eedd8159-2729-4f5c-bbbc-1a08154af011-utilities\") pod \"redhat-marketplace-sl92t\" (UID: \"eedd8159-2729-4f5c-bbbc-1a08154af011\") " pod="openshift-marketplace/redhat-marketplace-sl92t" Jan 30 13:31:11 crc kubenswrapper[5039]: I0130 13:31:11.088792 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eedd8159-2729-4f5c-bbbc-1a08154af011-utilities\") pod \"redhat-marketplace-sl92t\" (UID: \"eedd8159-2729-4f5c-bbbc-1a08154af011\") " pod="openshift-marketplace/redhat-marketplace-sl92t" Jan 30 13:31:11 crc kubenswrapper[5039]: I0130 13:31:11.088883 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eedd8159-2729-4f5c-bbbc-1a08154af011-catalog-content\") pod \"redhat-marketplace-sl92t\" (UID: \"eedd8159-2729-4f5c-bbbc-1a08154af011\") " pod="openshift-marketplace/redhat-marketplace-sl92t" Jan 30 13:31:11 crc kubenswrapper[5039]: I0130 13:31:11.089302 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eedd8159-2729-4f5c-bbbc-1a08154af011-catalog-content\") pod \"redhat-marketplace-sl92t\" (UID: \"eedd8159-2729-4f5c-bbbc-1a08154af011\") " pod="openshift-marketplace/redhat-marketplace-sl92t" Jan 30 13:31:11 crc kubenswrapper[5039]: I0130 13:31:11.089383 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rq6b\" (UniqueName: \"kubernetes.io/projected/eedd8159-2729-4f5c-bbbc-1a08154af011-kube-api-access-8rq6b\") pod \"redhat-marketplace-sl92t\" (UID: \"eedd8159-2729-4f5c-bbbc-1a08154af011\") " pod="openshift-marketplace/redhat-marketplace-sl92t" Jan 30 13:31:11 crc kubenswrapper[5039]: I0130 13:31:11.110216 5039 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-8rq6b\" (UniqueName: \"kubernetes.io/projected/eedd8159-2729-4f5c-bbbc-1a08154af011-kube-api-access-8rq6b\") pod \"redhat-marketplace-sl92t\" (UID: \"eedd8159-2729-4f5c-bbbc-1a08154af011\") " pod="openshift-marketplace/redhat-marketplace-sl92t" Jan 30 13:31:11 crc kubenswrapper[5039]: I0130 13:31:11.184819 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sl92t" Jan 30 13:31:11 crc kubenswrapper[5039]: I0130 13:31:11.689527 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sl92t"] Jan 30 13:31:12 crc kubenswrapper[5039]: I0130 13:31:12.648232 5039 generic.go:334] "Generic (PLEG): container finished" podID="eedd8159-2729-4f5c-bbbc-1a08154af011" containerID="98aca91f37b2039bd6221b26fa4c3e9263eb80cbae213bff262e6e058821b499" exitCode=0 Jan 30 13:31:12 crc kubenswrapper[5039]: I0130 13:31:12.648317 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sl92t" event={"ID":"eedd8159-2729-4f5c-bbbc-1a08154af011","Type":"ContainerDied","Data":"98aca91f37b2039bd6221b26fa4c3e9263eb80cbae213bff262e6e058821b499"} Jan 30 13:31:12 crc kubenswrapper[5039]: I0130 13:31:12.648786 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sl92t" event={"ID":"eedd8159-2729-4f5c-bbbc-1a08154af011","Type":"ContainerStarted","Data":"39f3c559ee69246f0a9de59eb3f9745d01a17d01812b862ae00de906a715adb2"} Jan 30 13:31:14 crc kubenswrapper[5039]: I0130 13:31:14.670211 5039 generic.go:334] "Generic (PLEG): container finished" podID="eedd8159-2729-4f5c-bbbc-1a08154af011" containerID="9c6f543e98543a1f0a3c7adc9a3a373a42b281ebabebba7857458f0f522e6d14" exitCode=0 Jan 30 13:31:14 crc kubenswrapper[5039]: I0130 13:31:14.670298 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sl92t" event={"ID":"eedd8159-2729-4f5c-bbbc-1a08154af011","Type":"ContainerDied","Data":"9c6f543e98543a1f0a3c7adc9a3a373a42b281ebabebba7857458f0f522e6d14"} Jan 30 13:31:16 crc kubenswrapper[5039]: I0130 13:31:16.691227 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sl92t" event={"ID":"eedd8159-2729-4f5c-bbbc-1a08154af011","Type":"ContainerStarted","Data":"ab28a4d2724b5a53605eb1e0ab03a903dd0ed17a3365a32b574c013750d6a5d4"} Jan 30 13:31:16 crc kubenswrapper[5039]: I0130 13:31:16.721179 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sl92t" podStartSLOduration=3.84784773 podStartE2EDuration="6.721156551s" podCreationTimestamp="2026-01-30 13:31:10 +0000 UTC" firstStartedPulling="2026-01-30 13:31:12.650985022 +0000 UTC m=+1637.311666279" lastFinishedPulling="2026-01-30 13:31:15.524293843 +0000 UTC m=+1640.184975100" observedRunningTime="2026-01-30 13:31:16.71998367 +0000 UTC m=+1641.380664937" watchObservedRunningTime="2026-01-30 13:31:16.721156551 +0000 UTC m=+1641.381837798" Jan 30 13:31:21 crc kubenswrapper[5039]: I0130 13:31:21.185545 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sl92t" Jan 30 13:31:21 crc kubenswrapper[5039]: I0130 13:31:21.186170 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sl92t" Jan 30 13:31:21 crc kubenswrapper[5039]: I0130 13:31:21.261389 5039 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sl92t" Jan 30 13:31:21 crc kubenswrapper[5039]: I0130 13:31:21.791856 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sl92t" Jan 30 13:31:21 crc kubenswrapper[5039]: I0130 13:31:21.848827 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sl92t"] Jan 30 13:31:23 crc kubenswrapper[5039]: I0130 13:31:23.762748 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sl92t" podUID="eedd8159-2729-4f5c-bbbc-1a08154af011" containerName="registry-server" containerID="cri-o://ab28a4d2724b5a53605eb1e0ab03a903dd0ed17a3365a32b574c013750d6a5d4" gracePeriod=2 Jan 30 13:31:24 crc kubenswrapper[5039]: I0130 13:31:24.309958 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sl92t" Jan 30 13:31:24 crc kubenswrapper[5039]: I0130 13:31:24.427516 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eedd8159-2729-4f5c-bbbc-1a08154af011-catalog-content\") pod \"eedd8159-2729-4f5c-bbbc-1a08154af011\" (UID: \"eedd8159-2729-4f5c-bbbc-1a08154af011\") " Jan 30 13:31:24 crc kubenswrapper[5039]: I0130 13:31:24.427628 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eedd8159-2729-4f5c-bbbc-1a08154af011-utilities\") pod \"eedd8159-2729-4f5c-bbbc-1a08154af011\" (UID: \"eedd8159-2729-4f5c-bbbc-1a08154af011\") " Jan 30 13:31:24 crc kubenswrapper[5039]: I0130 13:31:24.427742 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rq6b\" (UniqueName: \"kubernetes.io/projected/eedd8159-2729-4f5c-bbbc-1a08154af011-kube-api-access-8rq6b\") pod \"eedd8159-2729-4f5c-bbbc-1a08154af011\" (UID: \"eedd8159-2729-4f5c-bbbc-1a08154af011\") " Jan 30 13:31:24 crc kubenswrapper[5039]: I0130 13:31:24.428976 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eedd8159-2729-4f5c-bbbc-1a08154af011-utilities" (OuterVolumeSpecName: "utilities") pod "eedd8159-2729-4f5c-bbbc-1a08154af011" (UID: "eedd8159-2729-4f5c-bbbc-1a08154af011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:31:24 crc kubenswrapper[5039]: I0130 13:31:24.434275 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eedd8159-2729-4f5c-bbbc-1a08154af011-kube-api-access-8rq6b" (OuterVolumeSpecName: "kube-api-access-8rq6b") pod "eedd8159-2729-4f5c-bbbc-1a08154af011" (UID: "eedd8159-2729-4f5c-bbbc-1a08154af011"). InnerVolumeSpecName "kube-api-access-8rq6b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:31:24 crc kubenswrapper[5039]: I0130 13:31:24.455991 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eedd8159-2729-4f5c-bbbc-1a08154af011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eedd8159-2729-4f5c-bbbc-1a08154af011" (UID: "eedd8159-2729-4f5c-bbbc-1a08154af011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:31:24 crc kubenswrapper[5039]: I0130 13:31:24.529277 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rq6b\" (UniqueName: \"kubernetes.io/projected/eedd8159-2729-4f5c-bbbc-1a08154af011-kube-api-access-8rq6b\") on node \"crc\" DevicePath \"\"" Jan 30 13:31:24 crc kubenswrapper[5039]: I0130 13:31:24.529315 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eedd8159-2729-4f5c-bbbc-1a08154af011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:31:24 crc kubenswrapper[5039]: I0130 13:31:24.529324 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eedd8159-2729-4f5c-bbbc-1a08154af011-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 13:31:24 crc kubenswrapper[5039]: I0130 13:31:24.778331 5039 generic.go:334] "Generic (PLEG): container finished" podID="eedd8159-2729-4f5c-bbbc-1a08154af011" containerID="ab28a4d2724b5a53605eb1e0ab03a903dd0ed17a3365a32b574c013750d6a5d4" exitCode=0 Jan 30 13:31:24 crc kubenswrapper[5039]: I0130 13:31:24.778396 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sl92t" event={"ID":"eedd8159-2729-4f5c-bbbc-1a08154af011","Type":"ContainerDied","Data":"ab28a4d2724b5a53605eb1e0ab03a903dd0ed17a3365a32b574c013750d6a5d4"} Jan 30 13:31:24 crc kubenswrapper[5039]: I0130 13:31:24.778436 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sl92t" event={"ID":"eedd8159-2729-4f5c-bbbc-1a08154af011","Type":"ContainerDied","Data":"39f3c559ee69246f0a9de59eb3f9745d01a17d01812b862ae00de906a715adb2"} Jan 30 13:31:24 crc kubenswrapper[5039]: I0130 13:31:24.778465 5039 scope.go:117] "RemoveContainer" containerID="ab28a4d2724b5a53605eb1e0ab03a903dd0ed17a3365a32b574c013750d6a5d4" Jan 30 13:31:24 crc kubenswrapper[5039]: I0130 13:31:24.778633 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sl92t" Jan 30 13:31:24 crc kubenswrapper[5039]: I0130 13:31:24.834430 5039 scope.go:117] "RemoveContainer" containerID="9c6f543e98543a1f0a3c7adc9a3a373a42b281ebabebba7857458f0f522e6d14" Jan 30 13:31:24 crc kubenswrapper[5039]: I0130 13:31:24.836071 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sl92t"] Jan 30 13:31:24 crc kubenswrapper[5039]: I0130 13:31:24.843853 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sl92t"] Jan 30 13:31:24 crc kubenswrapper[5039]: I0130 13:31:24.883741 5039 scope.go:117] "RemoveContainer" containerID="98aca91f37b2039bd6221b26fa4c3e9263eb80cbae213bff262e6e058821b499" Jan 30 13:31:24 crc kubenswrapper[5039]: I0130 13:31:24.913058 5039 scope.go:117] "RemoveContainer" containerID="ab28a4d2724b5a53605eb1e0ab03a903dd0ed17a3365a32b574c013750d6a5d4" Jan 30 13:31:24 crc kubenswrapper[5039]: E0130 13:31:24.913722 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab28a4d2724b5a53605eb1e0ab03a903dd0ed17a3365a32b574c013750d6a5d4\": container with ID starting with ab28a4d2724b5a53605eb1e0ab03a903dd0ed17a3365a32b574c013750d6a5d4 not found: ID does not exist" containerID="ab28a4d2724b5a53605eb1e0ab03a903dd0ed17a3365a32b574c013750d6a5d4" Jan 30 13:31:24 crc kubenswrapper[5039]: I0130 13:31:24.913760 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab28a4d2724b5a53605eb1e0ab03a903dd0ed17a3365a32b574c013750d6a5d4"} err="failed to get container status \"ab28a4d2724b5a53605eb1e0ab03a903dd0ed17a3365a32b574c013750d6a5d4\": rpc error: code = NotFound desc = could not find container \"ab28a4d2724b5a53605eb1e0ab03a903dd0ed17a3365a32b574c013750d6a5d4\": container with ID starting with ab28a4d2724b5a53605eb1e0ab03a903dd0ed17a3365a32b574c013750d6a5d4 not found: ID does not exist" Jan 30 13:31:24 crc kubenswrapper[5039]: I0130 13:31:24.913780 5039 scope.go:117] "RemoveContainer" containerID="9c6f543e98543a1f0a3c7adc9a3a373a42b281ebabebba7857458f0f522e6d14" Jan 30 13:31:24 crc kubenswrapper[5039]: E0130 13:31:24.914239 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c6f543e98543a1f0a3c7adc9a3a373a42b281ebabebba7857458f0f522e6d14\": container with ID starting with 9c6f543e98543a1f0a3c7adc9a3a373a42b281ebabebba7857458f0f522e6d14 not found: ID does not exist" containerID="9c6f543e98543a1f0a3c7adc9a3a373a42b281ebabebba7857458f0f522e6d14" Jan 30 13:31:24 crc kubenswrapper[5039]: I0130 13:31:24.914264 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c6f543e98543a1f0a3c7adc9a3a373a42b281ebabebba7857458f0f522e6d14"} err="failed to get container status \"9c6f543e98543a1f0a3c7adc9a3a373a42b281ebabebba7857458f0f522e6d14\": rpc error: code = NotFound desc = could not find container \"9c6f543e98543a1f0a3c7adc9a3a373a42b281ebabebba7857458f0f522e6d14\": container with ID starting with 9c6f543e98543a1f0a3c7adc9a3a373a42b281ebabebba7857458f0f522e6d14 not found: ID does not exist" Jan 30 13:31:24 crc kubenswrapper[5039]: I0130 13:31:24.914279 5039 scope.go:117] "RemoveContainer" containerID="98aca91f37b2039bd6221b26fa4c3e9263eb80cbae213bff262e6e058821b499" Jan 30 13:31:24 crc kubenswrapper[5039]: E0130 13:31:24.914685 5039 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"98aca91f37b2039bd6221b26fa4c3e9263eb80cbae213bff262e6e058821b499\": container with ID starting with 98aca91f37b2039bd6221b26fa4c3e9263eb80cbae213bff262e6e058821b499 not found: ID does not exist" containerID="98aca91f37b2039bd6221b26fa4c3e9263eb80cbae213bff262e6e058821b499" Jan 30 13:31:24 crc kubenswrapper[5039]: I0130 13:31:24.914707 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98aca91f37b2039bd6221b26fa4c3e9263eb80cbae213bff262e6e058821b499"} err="failed to get container status \"98aca91f37b2039bd6221b26fa4c3e9263eb80cbae213bff262e6e058821b499\": rpc error: code = NotFound desc = could not find container \"98aca91f37b2039bd6221b26fa4c3e9263eb80cbae213bff262e6e058821b499\": container with ID starting with 98aca91f37b2039bd6221b26fa4c3e9263eb80cbae213bff262e6e058821b499 not found: ID does not exist" Jan 30 13:31:26 crc kubenswrapper[5039]: I0130 13:31:26.117430 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eedd8159-2729-4f5c-bbbc-1a08154af011" path="/var/lib/kubelet/pods/eedd8159-2729-4f5c-bbbc-1a08154af011/volumes" Jan 30 13:31:37 crc kubenswrapper[5039]: I0130 13:31:37.742066 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:31:37 crc kubenswrapper[5039]: I0130 13:31:37.742642 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:31:37 crc kubenswrapper[5039]: I0130 13:31:37.742698 5039 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" Jan 30 13:31:37 crc kubenswrapper[5039]: I0130 13:31:37.743419 5039 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"61f8452da6d760b5eb776cbdf6b440cda0e73329e9fe07bebb5180efabf43169"} pod="openshift-machine-config-operator/machine-config-daemon-t2btn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 13:31:37 crc kubenswrapper[5039]: I0130 13:31:37.743621 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" containerID="cri-o://61f8452da6d760b5eb776cbdf6b440cda0e73329e9fe07bebb5180efabf43169" gracePeriod=600 Jan 30 13:31:37 crc kubenswrapper[5039]: E0130 13:31:37.874625 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:31:37 crc kubenswrapper[5039]: I0130 13:31:37.910046 5039 generic.go:334] 
"Generic (PLEG): container finished" podID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerID="61f8452da6d760b5eb776cbdf6b440cda0e73329e9fe07bebb5180efabf43169" exitCode=0 Jan 30 13:31:37 crc kubenswrapper[5039]: I0130 13:31:37.910119 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerDied","Data":"61f8452da6d760b5eb776cbdf6b440cda0e73329e9fe07bebb5180efabf43169"} Jan 30 13:31:37 crc kubenswrapper[5039]: I0130 13:31:37.910203 5039 scope.go:117] "RemoveContainer" containerID="794f242d7a377f48231607395088aab9150aeb8ff8f26262235590d766c6a0f4" Jan 30 13:31:37 crc kubenswrapper[5039]: I0130 13:31:37.911316 5039 scope.go:117] "RemoveContainer" containerID="61f8452da6d760b5eb776cbdf6b440cda0e73329e9fe07bebb5180efabf43169" Jan 30 13:31:37 crc kubenswrapper[5039]: E0130 13:31:37.911909 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:31:49 crc kubenswrapper[5039]: I0130 13:31:49.094158 5039 scope.go:117] "RemoveContainer" containerID="61f8452da6d760b5eb776cbdf6b440cda0e73329e9fe07bebb5180efabf43169" Jan 30 13:31:49 crc kubenswrapper[5039]: E0130 13:31:49.095446 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:32:00 crc kubenswrapper[5039]: I0130 13:32:00.093383 5039 scope.go:117] "RemoveContainer" containerID="61f8452da6d760b5eb776cbdf6b440cda0e73329e9fe07bebb5180efabf43169" Jan 30 13:32:00 crc kubenswrapper[5039]: E0130 13:32:00.094148 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:32:06 crc kubenswrapper[5039]: I0130 13:32:06.861345 5039 scope.go:117] "RemoveContainer" containerID="4ced8998271ec1e934a1c34f39c4cc277022e88ff34907d478325bce8a489b7b" Jan 30 13:32:06 crc kubenswrapper[5039]: I0130 13:32:06.900070 5039 scope.go:117] "RemoveContainer" containerID="1b6488372caf64fb3cbd62fe2872b61c9347cacf44d29cdb62f10547cf05cc31" Jan 30 13:32:06 crc kubenswrapper[5039]: I0130 13:32:06.939586 5039 scope.go:117] "RemoveContainer" containerID="257994bea3aa4d461d8ec0930db0b9b8b1ca22fbebd2eeed081b5830cad35d88" Jan 30 13:32:06 crc kubenswrapper[5039]: I0130 13:32:06.975764 5039 scope.go:117] "RemoveContainer" containerID="b2de02261b9760fafbf28f5fc930ed3c20c0f9f5978244c71f745be070b3d4ce" Jan 30 13:32:07 crc kubenswrapper[5039]: I0130 13:32:07.000856 5039 scope.go:117] "RemoveContainer" 
containerID="373eb290a2e94fa950875c1350fb614111156e816473414a72b8b40e8f7da301" Jan 30 13:32:07 crc kubenswrapper[5039]: I0130 13:32:07.044342 5039 scope.go:117] "RemoveContainer" containerID="84d19c63702524f48c72032f314689ed3ffad0e9b5241a6bf0ee9148cae27b33" Jan 30 13:32:07 crc kubenswrapper[5039]: I0130 13:32:07.063179 5039 scope.go:117] "RemoveContainer" containerID="223b1e50e479e1ac1907955b9346a267ba8e49d4233e2cf11b1a062f17079dea" Jan 30 13:32:07 crc kubenswrapper[5039]: I0130 13:32:07.080230 5039 scope.go:117] "RemoveContainer" containerID="c88f2949fe87df8d9d04ad62f6e10def4968f2f2133ac38e643c563ccc3ea2f4" Jan 30 13:32:07 crc kubenswrapper[5039]: I0130 13:32:07.102501 5039 scope.go:117] "RemoveContainer" containerID="81a652ec53b79a2c56c44355eda3b1bce0483980f495d6decb7cbe79041a5c74" Jan 30 13:32:07 crc kubenswrapper[5039]: I0130 13:32:07.129140 5039 scope.go:117] "RemoveContainer" containerID="cc28b607e5fd23093e36b0664931b7eaf58f14e1df901b6c0316507773caa300" Jan 30 13:32:07 crc kubenswrapper[5039]: I0130 13:32:07.142659 5039 scope.go:117] "RemoveContainer" containerID="92aaf4f93277b2da42563ef5dfc916d9ba5a86b464b3211c107c90d6d1033735" Jan 30 13:32:07 crc kubenswrapper[5039]: I0130 13:32:07.169153 5039 scope.go:117] "RemoveContainer" containerID="094a807571387ff4805693309488834e6f3f5cad2c362f2ee53edc66d902cec6" Jan 30 13:32:07 crc kubenswrapper[5039]: I0130 13:32:07.187381 5039 scope.go:117] "RemoveContainer" containerID="a21a34b25da48e58cbf267f6a56faea32936fec24341c8fc65c0c8fff27a3bda" Jan 30 13:32:07 crc kubenswrapper[5039]: I0130 13:32:07.205364 5039 scope.go:117] "RemoveContainer" containerID="bfcc2262b565fdeef1781961e54944ecdc7a599a03321990d920439a88eeee7a" Jan 30 13:32:07 crc kubenswrapper[5039]: I0130 13:32:07.228499 5039 scope.go:117] "RemoveContainer" containerID="a4189b197cff1acafa5cc8287fb52076780f0f19778e82f8a020ff4743e7023b" Jan 30 13:32:12 crc kubenswrapper[5039]: I0130 13:32:12.093947 5039 scope.go:117] "RemoveContainer" containerID="61f8452da6d760b5eb776cbdf6b440cda0e73329e9fe07bebb5180efabf43169" Jan 30 13:32:12 crc kubenswrapper[5039]: E0130 13:32:12.094864 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:32:23 crc kubenswrapper[5039]: I0130 13:32:23.094116 5039 scope.go:117] "RemoveContainer" containerID="61f8452da6d760b5eb776cbdf6b440cda0e73329e9fe07bebb5180efabf43169" Jan 30 13:32:23 crc kubenswrapper[5039]: E0130 13:32:23.095032 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:32:36 crc kubenswrapper[5039]: I0130 13:32:36.097914 5039 scope.go:117] "RemoveContainer" containerID="61f8452da6d760b5eb776cbdf6b440cda0e73329e9fe07bebb5180efabf43169" Jan 30 13:32:36 crc kubenswrapper[5039]: E0130 13:32:36.098820 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:32:50 crc kubenswrapper[5039]: I0130 13:32:50.093961 5039 scope.go:117] "RemoveContainer" containerID="61f8452da6d760b5eb776cbdf6b440cda0e73329e9fe07bebb5180efabf43169" Jan 30 13:32:50 crc kubenswrapper[5039]: E0130 13:32:50.095042 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:33:04 crc kubenswrapper[5039]: I0130 13:33:04.094660 5039 scope.go:117] "RemoveContainer" containerID="61f8452da6d760b5eb776cbdf6b440cda0e73329e9fe07bebb5180efabf43169" Jan 30 13:33:04 crc kubenswrapper[5039]: E0130 13:33:04.095717 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:33:07 crc kubenswrapper[5039]: I0130 13:33:07.421593 5039 scope.go:117] "RemoveContainer" containerID="77b11831c8de94ea4f94e9a391a2324170cf612334c1b369e7d207f0b0088e11" Jan 30 13:33:07 crc kubenswrapper[5039]: I0130 13:33:07.446895 5039 scope.go:117] "RemoveContainer" containerID="94a155d981c1474d4a0a50be2ec35401038cfd5f89687c48f78fc343aff89762" Jan 30 13:33:07 crc kubenswrapper[5039]: I0130 13:33:07.490032 5039 scope.go:117] "RemoveContainer" containerID="cb976258e7161169831d5d8b357475bdf359afceac9694de1a48d3c8091e19de" Jan 30 13:33:07 crc kubenswrapper[5039]: I0130 13:33:07.507680 5039 scope.go:117] "RemoveContainer" containerID="f66f7f5299440f08b3d668413b72729d868b25170fd7cb89241fcca36903b724" Jan 30 13:33:07 crc kubenswrapper[5039]: I0130 13:33:07.538861 5039 scope.go:117] "RemoveContainer" containerID="15bfff3ce4374ea438fd8412513de2bef71681376d184c1777dc610cbcab758f" Jan 30 13:33:18 crc kubenswrapper[5039]: I0130 13:33:18.093676 5039 scope.go:117] "RemoveContainer" containerID="61f8452da6d760b5eb776cbdf6b440cda0e73329e9fe07bebb5180efabf43169" Jan 30 13:33:18 crc kubenswrapper[5039]: E0130 13:33:18.094813 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:33:32 crc kubenswrapper[5039]: I0130 13:33:32.096458 5039 scope.go:117] "RemoveContainer" containerID="61f8452da6d760b5eb776cbdf6b440cda0e73329e9fe07bebb5180efabf43169" Jan 30 13:33:32 crc kubenswrapper[5039]: E0130 13:33:32.097666 5039 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:33:44 crc kubenswrapper[5039]: I0130 13:33:44.095782 5039 scope.go:117] "RemoveContainer" containerID="61f8452da6d760b5eb776cbdf6b440cda0e73329e9fe07bebb5180efabf43169" Jan 30 13:33:44 crc kubenswrapper[5039]: E0130 13:33:44.096681 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:33:55 crc kubenswrapper[5039]: I0130 13:33:55.094936 5039 scope.go:117] "RemoveContainer" containerID="61f8452da6d760b5eb776cbdf6b440cda0e73329e9fe07bebb5180efabf43169" Jan 30 13:33:55 crc kubenswrapper[5039]: E0130 13:33:55.095974 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:34:07 crc kubenswrapper[5039]: I0130 13:34:07.627868 5039 scope.go:117] "RemoveContainer" containerID="2d664eb9c38a9c24e2e03307a0cc9c31dc011fb018e0cf4e87e1bb1a5cc4feea" Jan 30 13:34:07 crc kubenswrapper[5039]: I0130 13:34:07.686578 5039 scope.go:117] "RemoveContainer" containerID="890e98b0679d42d7b2144c30beebab163c61e512b0e040cdea01024c73e229a8" Jan 30 13:34:07 crc kubenswrapper[5039]: I0130 13:34:07.706378 5039 scope.go:117] "RemoveContainer" containerID="46cdd6374825345d3e1406a5a1876895000d528adec77a9193e1137b7dc2eb04" Jan 30 13:34:09 crc kubenswrapper[5039]: I0130 13:34:09.094057 5039 scope.go:117] "RemoveContainer" containerID="61f8452da6d760b5eb776cbdf6b440cda0e73329e9fe07bebb5180efabf43169" Jan 30 13:34:09 crc kubenswrapper[5039]: E0130 13:34:09.094474 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:34:20 crc kubenswrapper[5039]: I0130 13:34:20.095294 5039 scope.go:117] "RemoveContainer" containerID="61f8452da6d760b5eb776cbdf6b440cda0e73329e9fe07bebb5180efabf43169" Jan 30 13:34:20 crc kubenswrapper[5039]: E0130 13:34:20.096340 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" 
podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:34:31 crc kubenswrapper[5039]: I0130 13:34:31.093221 5039 scope.go:117] "RemoveContainer" containerID="61f8452da6d760b5eb776cbdf6b440cda0e73329e9fe07bebb5180efabf43169" Jan 30 13:34:31 crc kubenswrapper[5039]: E0130 13:34:31.094357 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:34:45 crc kubenswrapper[5039]: I0130 13:34:45.094553 5039 scope.go:117] "RemoveContainer" containerID="61f8452da6d760b5eb776cbdf6b440cda0e73329e9fe07bebb5180efabf43169" Jan 30 13:34:45 crc kubenswrapper[5039]: E0130 13:34:45.095728 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:35:00 crc kubenswrapper[5039]: I0130 13:35:00.093384 5039 scope.go:117] "RemoveContainer" containerID="61f8452da6d760b5eb776cbdf6b440cda0e73329e9fe07bebb5180efabf43169" Jan 30 13:35:00 crc kubenswrapper[5039]: E0130 13:35:00.094097 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:35:07 crc kubenswrapper[5039]: I0130 13:35:07.791989 5039 scope.go:117] "RemoveContainer" containerID="b3d4dfe245ae57f1d9f0d67891d6512f23e27517be9a359a96e86d4a328d5ace" Jan 30 13:35:12 crc kubenswrapper[5039]: I0130 13:35:12.093571 5039 scope.go:117] "RemoveContainer" containerID="61f8452da6d760b5eb776cbdf6b440cda0e73329e9fe07bebb5180efabf43169" Jan 30 13:35:12 crc kubenswrapper[5039]: E0130 13:35:12.094885 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:35:25 crc kubenswrapper[5039]: I0130 13:35:25.093367 5039 scope.go:117] "RemoveContainer" containerID="61f8452da6d760b5eb776cbdf6b440cda0e73329e9fe07bebb5180efabf43169" Jan 30 13:35:25 crc kubenswrapper[5039]: E0130 13:35:25.094918 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" 
podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:35:36 crc kubenswrapper[5039]: I0130 13:35:36.110931 5039 scope.go:117] "RemoveContainer" containerID="61f8452da6d760b5eb776cbdf6b440cda0e73329e9fe07bebb5180efabf43169" Jan 30 13:35:36 crc kubenswrapper[5039]: E0130 13:35:36.114452 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:35:48 crc kubenswrapper[5039]: I0130 13:35:48.094515 5039 scope.go:117] "RemoveContainer" containerID="61f8452da6d760b5eb776cbdf6b440cda0e73329e9fe07bebb5180efabf43169" Jan 30 13:35:48 crc kubenswrapper[5039]: E0130 13:35:48.095480 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:35:59 crc kubenswrapper[5039]: I0130 13:35:59.093680 5039 scope.go:117] "RemoveContainer" containerID="61f8452da6d760b5eb776cbdf6b440cda0e73329e9fe07bebb5180efabf43169" Jan 30 13:35:59 crc kubenswrapper[5039]: E0130 13:35:59.095895 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:36:10 crc kubenswrapper[5039]: I0130 13:36:10.094268 5039 scope.go:117] "RemoveContainer" containerID="61f8452da6d760b5eb776cbdf6b440cda0e73329e9fe07bebb5180efabf43169" Jan 30 13:36:10 crc kubenswrapper[5039]: E0130 13:36:10.095339 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:36:23 crc kubenswrapper[5039]: I0130 13:36:23.093069 5039 scope.go:117] "RemoveContainer" containerID="61f8452da6d760b5eb776cbdf6b440cda0e73329e9fe07bebb5180efabf43169" Jan 30 13:36:23 crc kubenswrapper[5039]: E0130 13:36:23.093606 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:36:35 crc kubenswrapper[5039]: I0130 13:36:35.094485 5039 scope.go:117] "RemoveContainer" 
containerID="61f8452da6d760b5eb776cbdf6b440cda0e73329e9fe07bebb5180efabf43169" Jan 30 13:36:35 crc kubenswrapper[5039]: E0130 13:36:35.095846 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:36:48 crc kubenswrapper[5039]: I0130 13:36:48.094306 5039 scope.go:117] "RemoveContainer" containerID="61f8452da6d760b5eb776cbdf6b440cda0e73329e9fe07bebb5180efabf43169" Jan 30 13:36:48 crc kubenswrapper[5039]: I0130 13:36:48.829252 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerStarted","Data":"ae82dce9e68c61376f31f8ad5b2f08d422ddec78cfc4d4a0e9204123fee05617"} Jan 30 13:36:52 crc kubenswrapper[5039]: I0130 13:36:52.512741 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-znzps"] Jan 30 13:36:52 crc kubenswrapper[5039]: E0130 13:36:52.513720 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eedd8159-2729-4f5c-bbbc-1a08154af011" containerName="extract-utilities" Jan 30 13:36:52 crc kubenswrapper[5039]: I0130 13:36:52.513738 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="eedd8159-2729-4f5c-bbbc-1a08154af011" containerName="extract-utilities" Jan 30 13:36:52 crc kubenswrapper[5039]: E0130 13:36:52.513758 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eedd8159-2729-4f5c-bbbc-1a08154af011" containerName="extract-content" Jan 30 13:36:52 crc kubenswrapper[5039]: I0130 13:36:52.513767 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="eedd8159-2729-4f5c-bbbc-1a08154af011" containerName="extract-content" Jan 30 13:36:52 crc kubenswrapper[5039]: E0130 13:36:52.513786 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eedd8159-2729-4f5c-bbbc-1a08154af011" containerName="registry-server" Jan 30 13:36:52 crc kubenswrapper[5039]: I0130 13:36:52.513791 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="eedd8159-2729-4f5c-bbbc-1a08154af011" containerName="registry-server" Jan 30 13:36:52 crc kubenswrapper[5039]: I0130 13:36:52.513953 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="eedd8159-2729-4f5c-bbbc-1a08154af011" containerName="registry-server" Jan 30 13:36:52 crc kubenswrapper[5039]: I0130 13:36:52.515397 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-znzps" Jan 30 13:36:52 crc kubenswrapper[5039]: I0130 13:36:52.531959 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-znzps"] Jan 30 13:36:52 crc kubenswrapper[5039]: I0130 13:36:52.651954 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzrj5\" (UniqueName: \"kubernetes.io/projected/e67969fe-851a-4f02-b96b-3b6d0b5d88f9-kube-api-access-rzrj5\") pod \"community-operators-znzps\" (UID: \"e67969fe-851a-4f02-b96b-3b6d0b5d88f9\") " pod="openshift-marketplace/community-operators-znzps" Jan 30 13:36:52 crc kubenswrapper[5039]: I0130 13:36:52.652063 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e67969fe-851a-4f02-b96b-3b6d0b5d88f9-catalog-content\") pod \"community-operators-znzps\" (UID: \"e67969fe-851a-4f02-b96b-3b6d0b5d88f9\") " pod="openshift-marketplace/community-operators-znzps" Jan 30 13:36:52 crc kubenswrapper[5039]: I0130 13:36:52.652205 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e67969fe-851a-4f02-b96b-3b6d0b5d88f9-utilities\") pod \"community-operators-znzps\" (UID: \"e67969fe-851a-4f02-b96b-3b6d0b5d88f9\") " pod="openshift-marketplace/community-operators-znzps" Jan 30 13:36:52 crc kubenswrapper[5039]: I0130 13:36:52.754034 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzrj5\" (UniqueName: \"kubernetes.io/projected/e67969fe-851a-4f02-b96b-3b6d0b5d88f9-kube-api-access-rzrj5\") pod \"community-operators-znzps\" (UID: \"e67969fe-851a-4f02-b96b-3b6d0b5d88f9\") " pod="openshift-marketplace/community-operators-znzps" Jan 30 13:36:52 crc kubenswrapper[5039]: I0130 13:36:52.754115 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e67969fe-851a-4f02-b96b-3b6d0b5d88f9-catalog-content\") pod \"community-operators-znzps\" (UID: \"e67969fe-851a-4f02-b96b-3b6d0b5d88f9\") " pod="openshift-marketplace/community-operators-znzps" Jan 30 13:36:52 crc kubenswrapper[5039]: I0130 13:36:52.754170 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e67969fe-851a-4f02-b96b-3b6d0b5d88f9-utilities\") pod \"community-operators-znzps\" (UID: \"e67969fe-851a-4f02-b96b-3b6d0b5d88f9\") " pod="openshift-marketplace/community-operators-znzps" Jan 30 13:36:52 crc kubenswrapper[5039]: I0130 13:36:52.754613 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e67969fe-851a-4f02-b96b-3b6d0b5d88f9-catalog-content\") pod \"community-operators-znzps\" (UID: \"e67969fe-851a-4f02-b96b-3b6d0b5d88f9\") " pod="openshift-marketplace/community-operators-znzps" Jan 30 13:36:52 crc kubenswrapper[5039]: I0130 13:36:52.754651 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e67969fe-851a-4f02-b96b-3b6d0b5d88f9-utilities\") pod \"community-operators-znzps\" (UID: \"e67969fe-851a-4f02-b96b-3b6d0b5d88f9\") " pod="openshift-marketplace/community-operators-znzps" Jan 30 13:36:52 crc kubenswrapper[5039]: I0130 13:36:52.773053 5039 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-rzrj5\" (UniqueName: \"kubernetes.io/projected/e67969fe-851a-4f02-b96b-3b6d0b5d88f9-kube-api-access-rzrj5\") pod \"community-operators-znzps\" (UID: \"e67969fe-851a-4f02-b96b-3b6d0b5d88f9\") " pod="openshift-marketplace/community-operators-znzps" Jan 30 13:36:52 crc kubenswrapper[5039]: I0130 13:36:52.877606 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-znzps" Jan 30 13:36:53 crc kubenswrapper[5039]: I0130 13:36:53.350551 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-znzps"] Jan 30 13:36:53 crc kubenswrapper[5039]: I0130 13:36:53.865344 5039 generic.go:334] "Generic (PLEG): container finished" podID="e67969fe-851a-4f02-b96b-3b6d0b5d88f9" containerID="1ed091c2a6444181b57ddaaa1f6e78e9769b8d2b84dc532dddead2a714ab0815" exitCode=0 Jan 30 13:36:53 crc kubenswrapper[5039]: I0130 13:36:53.865414 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-znzps" event={"ID":"e67969fe-851a-4f02-b96b-3b6d0b5d88f9","Type":"ContainerDied","Data":"1ed091c2a6444181b57ddaaa1f6e78e9769b8d2b84dc532dddead2a714ab0815"} Jan 30 13:36:53 crc kubenswrapper[5039]: I0130 13:36:53.865466 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-znzps" event={"ID":"e67969fe-851a-4f02-b96b-3b6d0b5d88f9","Type":"ContainerStarted","Data":"4c465e15381ee8bdc0372808275894fd41b36c8efcfaebcbef4694fc2a6f3ad1"} Jan 30 13:36:53 crc kubenswrapper[5039]: I0130 13:36:53.871440 5039 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 13:36:55 crc kubenswrapper[5039]: I0130 13:36:55.883698 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-znzps" event={"ID":"e67969fe-851a-4f02-b96b-3b6d0b5d88f9","Type":"ContainerStarted","Data":"fc83cc73e62e2159687c627c1fb52d2db711e1e7aa28b9c4605a72d58513faf1"} Jan 30 13:36:56 crc kubenswrapper[5039]: I0130 13:36:56.893654 5039 generic.go:334] "Generic (PLEG): container finished" podID="e67969fe-851a-4f02-b96b-3b6d0b5d88f9" containerID="fc83cc73e62e2159687c627c1fb52d2db711e1e7aa28b9c4605a72d58513faf1" exitCode=0 Jan 30 13:36:56 crc kubenswrapper[5039]: I0130 13:36:56.893771 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-znzps" event={"ID":"e67969fe-851a-4f02-b96b-3b6d0b5d88f9","Type":"ContainerDied","Data":"fc83cc73e62e2159687c627c1fb52d2db711e1e7aa28b9c4605a72d58513faf1"} Jan 30 13:36:57 crc kubenswrapper[5039]: I0130 13:36:57.901389 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-znzps" event={"ID":"e67969fe-851a-4f02-b96b-3b6d0b5d88f9","Type":"ContainerStarted","Data":"a67e0df79b2f83c2499b104e1c25b69fe17feb0740c855f9021e6b538480dbd5"} Jan 30 13:36:57 crc kubenswrapper[5039]: I0130 13:36:57.916146 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-znzps" podStartSLOduration=2.262444232 podStartE2EDuration="5.916127732s" podCreationTimestamp="2026-01-30 13:36:52 +0000 UTC" firstStartedPulling="2026-01-30 13:36:53.869650196 +0000 UTC m=+1978.530331423" lastFinishedPulling="2026-01-30 13:36:57.523333686 +0000 UTC m=+1982.184014923" observedRunningTime="2026-01-30 13:36:57.915288989 +0000 UTC m=+1982.575970226" watchObservedRunningTime="2026-01-30 
13:36:57.916127732 +0000 UTC m=+1982.576808959" Jan 30 13:37:02 crc kubenswrapper[5039]: I0130 13:37:02.877768 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-znzps" Jan 30 13:37:02 crc kubenswrapper[5039]: I0130 13:37:02.879376 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-znzps" Jan 30 13:37:02 crc kubenswrapper[5039]: I0130 13:37:02.961931 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-znzps" Jan 30 13:37:03 crc kubenswrapper[5039]: I0130 13:37:03.035715 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-znzps" Jan 30 13:37:03 crc kubenswrapper[5039]: I0130 13:37:03.202205 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-znzps"] Jan 30 13:37:04 crc kubenswrapper[5039]: I0130 13:37:04.963749 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-znzps" podUID="e67969fe-851a-4f02-b96b-3b6d0b5d88f9" containerName="registry-server" containerID="cri-o://a67e0df79b2f83c2499b104e1c25b69fe17feb0740c855f9021e6b538480dbd5" gracePeriod=2 Jan 30 13:37:05 crc kubenswrapper[5039]: I0130 13:37:05.973038 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-znzps" event={"ID":"e67969fe-851a-4f02-b96b-3b6d0b5d88f9","Type":"ContainerDied","Data":"a67e0df79b2f83c2499b104e1c25b69fe17feb0740c855f9021e6b538480dbd5"} Jan 30 13:37:05 crc kubenswrapper[5039]: I0130 13:37:05.972992 5039 generic.go:334] "Generic (PLEG): container finished" podID="e67969fe-851a-4f02-b96b-3b6d0b5d88f9" containerID="a67e0df79b2f83c2499b104e1c25b69fe17feb0740c855f9021e6b538480dbd5" exitCode=0 Jan 30 13:37:06 crc kubenswrapper[5039]: I0130 13:37:06.080725 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-znzps" Jan 30 13:37:06 crc kubenswrapper[5039]: I0130 13:37:06.263540 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzrj5\" (UniqueName: \"kubernetes.io/projected/e67969fe-851a-4f02-b96b-3b6d0b5d88f9-kube-api-access-rzrj5\") pod \"e67969fe-851a-4f02-b96b-3b6d0b5d88f9\" (UID: \"e67969fe-851a-4f02-b96b-3b6d0b5d88f9\") " Jan 30 13:37:06 crc kubenswrapper[5039]: I0130 13:37:06.263610 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e67969fe-851a-4f02-b96b-3b6d0b5d88f9-catalog-content\") pod \"e67969fe-851a-4f02-b96b-3b6d0b5d88f9\" (UID: \"e67969fe-851a-4f02-b96b-3b6d0b5d88f9\") " Jan 30 13:37:06 crc kubenswrapper[5039]: I0130 13:37:06.263753 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e67969fe-851a-4f02-b96b-3b6d0b5d88f9-utilities\") pod \"e67969fe-851a-4f02-b96b-3b6d0b5d88f9\" (UID: \"e67969fe-851a-4f02-b96b-3b6d0b5d88f9\") " Jan 30 13:37:06 crc kubenswrapper[5039]: I0130 13:37:06.265480 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e67969fe-851a-4f02-b96b-3b6d0b5d88f9-utilities" (OuterVolumeSpecName: "utilities") pod "e67969fe-851a-4f02-b96b-3b6d0b5d88f9" (UID: "e67969fe-851a-4f02-b96b-3b6d0b5d88f9"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:37:06 crc kubenswrapper[5039]: I0130 13:37:06.268805 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e67969fe-851a-4f02-b96b-3b6d0b5d88f9-kube-api-access-rzrj5" (OuterVolumeSpecName: "kube-api-access-rzrj5") pod "e67969fe-851a-4f02-b96b-3b6d0b5d88f9" (UID: "e67969fe-851a-4f02-b96b-3b6d0b5d88f9"). InnerVolumeSpecName "kube-api-access-rzrj5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:37:06 crc kubenswrapper[5039]: I0130 13:37:06.350034 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e67969fe-851a-4f02-b96b-3b6d0b5d88f9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e67969fe-851a-4f02-b96b-3b6d0b5d88f9" (UID: "e67969fe-851a-4f02-b96b-3b6d0b5d88f9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:37:06 crc kubenswrapper[5039]: I0130 13:37:06.365554 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e67969fe-851a-4f02-b96b-3b6d0b5d88f9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:37:06 crc kubenswrapper[5039]: I0130 13:37:06.365583 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e67969fe-851a-4f02-b96b-3b6d0b5d88f9-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 13:37:06 crc kubenswrapper[5039]: I0130 13:37:06.365599 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzrj5\" (UniqueName: \"kubernetes.io/projected/e67969fe-851a-4f02-b96b-3b6d0b5d88f9-kube-api-access-rzrj5\") on node \"crc\" DevicePath \"\"" Jan 30 13:37:06 crc kubenswrapper[5039]: I0130 13:37:06.991289 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-znzps" event={"ID":"e67969fe-851a-4f02-b96b-3b6d0b5d88f9","Type":"ContainerDied","Data":"4c465e15381ee8bdc0372808275894fd41b36c8efcfaebcbef4694fc2a6f3ad1"} Jan 30 13:37:06 crc kubenswrapper[5039]: I0130 13:37:06.991323 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-znzps" Jan 30 13:37:06 crc kubenswrapper[5039]: I0130 13:37:06.991620 5039 scope.go:117] "RemoveContainer" containerID="a67e0df79b2f83c2499b104e1c25b69fe17feb0740c855f9021e6b538480dbd5" Jan 30 13:37:07 crc kubenswrapper[5039]: I0130 13:37:07.024065 5039 scope.go:117] "RemoveContainer" containerID="fc83cc73e62e2159687c627c1fb52d2db711e1e7aa28b9c4605a72d58513faf1" Jan 30 13:37:07 crc kubenswrapper[5039]: I0130 13:37:07.036424 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-znzps"] Jan 30 13:37:07 crc kubenswrapper[5039]: I0130 13:37:07.045604 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-znzps"] Jan 30 13:37:07 crc kubenswrapper[5039]: I0130 13:37:07.050760 5039 scope.go:117] "RemoveContainer" containerID="1ed091c2a6444181b57ddaaa1f6e78e9769b8d2b84dc532dddead2a714ab0815" Jan 30 13:37:08 crc kubenswrapper[5039]: I0130 13:37:08.103462 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e67969fe-851a-4f02-b96b-3b6d0b5d88f9" path="/var/lib/kubelet/pods/e67969fe-851a-4f02-b96b-3b6d0b5d88f9/volumes" Jan 30 13:38:28 crc kubenswrapper[5039]: I0130 13:38:28.176760 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-s7s8j"] Jan 30 13:38:28 crc kubenswrapper[5039]: E0130 13:38:28.178139 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e67969fe-851a-4f02-b96b-3b6d0b5d88f9" containerName="extract-content" Jan 30 13:38:28 crc kubenswrapper[5039]: I0130 13:38:28.178181 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="e67969fe-851a-4f02-b96b-3b6d0b5d88f9" containerName="extract-content" Jan 30 13:38:28 crc kubenswrapper[5039]: E0130 13:38:28.178203 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e67969fe-851a-4f02-b96b-3b6d0b5d88f9" containerName="extract-utilities" Jan 30 13:38:28 crc kubenswrapper[5039]: I0130 13:38:28.178213 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="e67969fe-851a-4f02-b96b-3b6d0b5d88f9" containerName="extract-utilities" Jan 30 13:38:28 crc kubenswrapper[5039]: E0130 13:38:28.178238 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e67969fe-851a-4f02-b96b-3b6d0b5d88f9" containerName="registry-server" Jan 30 13:38:28 crc kubenswrapper[5039]: I0130 13:38:28.178246 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="e67969fe-851a-4f02-b96b-3b6d0b5d88f9" containerName="registry-server" Jan 30 13:38:28 crc kubenswrapper[5039]: I0130 13:38:28.178820 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="e67969fe-851a-4f02-b96b-3b6d0b5d88f9" containerName="registry-server" Jan 30 13:38:28 crc kubenswrapper[5039]: I0130 13:38:28.180403 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s7s8j" Jan 30 13:38:28 crc kubenswrapper[5039]: I0130 13:38:28.186062 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s7s8j"] Jan 30 13:38:28 crc kubenswrapper[5039]: I0130 13:38:28.336421 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/901397fa-06fa-4a1c-a114-38d9896b664c-catalog-content\") pod \"redhat-operators-s7s8j\" (UID: \"901397fa-06fa-4a1c-a114-38d9896b664c\") " pod="openshift-marketplace/redhat-operators-s7s8j" Jan 30 13:38:28 crc kubenswrapper[5039]: I0130 13:38:28.336481 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/901397fa-06fa-4a1c-a114-38d9896b664c-utilities\") pod \"redhat-operators-s7s8j\" (UID: \"901397fa-06fa-4a1c-a114-38d9896b664c\") " pod="openshift-marketplace/redhat-operators-s7s8j" Jan 30 13:38:28 crc kubenswrapper[5039]: I0130 13:38:28.336551 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65p4m\" (UniqueName: \"kubernetes.io/projected/901397fa-06fa-4a1c-a114-38d9896b664c-kube-api-access-65p4m\") pod \"redhat-operators-s7s8j\" (UID: \"901397fa-06fa-4a1c-a114-38d9896b664c\") " pod="openshift-marketplace/redhat-operators-s7s8j" Jan 30 13:38:28 crc kubenswrapper[5039]: I0130 13:38:28.437784 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/901397fa-06fa-4a1c-a114-38d9896b664c-catalog-content\") pod \"redhat-operators-s7s8j\" (UID: \"901397fa-06fa-4a1c-a114-38d9896b664c\") " pod="openshift-marketplace/redhat-operators-s7s8j" Jan 30 13:38:28 crc kubenswrapper[5039]: I0130 13:38:28.438077 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/901397fa-06fa-4a1c-a114-38d9896b664c-utilities\") pod \"redhat-operators-s7s8j\" (UID: \"901397fa-06fa-4a1c-a114-38d9896b664c\") " pod="openshift-marketplace/redhat-operators-s7s8j" Jan 30 13:38:28 crc kubenswrapper[5039]: I0130 13:38:28.438109 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65p4m\" (UniqueName: \"kubernetes.io/projected/901397fa-06fa-4a1c-a114-38d9896b664c-kube-api-access-65p4m\") pod \"redhat-operators-s7s8j\" (UID: \"901397fa-06fa-4a1c-a114-38d9896b664c\") " pod="openshift-marketplace/redhat-operators-s7s8j" Jan 30 13:38:28 crc kubenswrapper[5039]: I0130 13:38:28.438539 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/901397fa-06fa-4a1c-a114-38d9896b664c-catalog-content\") pod \"redhat-operators-s7s8j\" (UID: \"901397fa-06fa-4a1c-a114-38d9896b664c\") " pod="openshift-marketplace/redhat-operators-s7s8j" Jan 30 13:38:28 crc kubenswrapper[5039]: I0130 13:38:28.438592 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/901397fa-06fa-4a1c-a114-38d9896b664c-utilities\") pod \"redhat-operators-s7s8j\" (UID: \"901397fa-06fa-4a1c-a114-38d9896b664c\") " pod="openshift-marketplace/redhat-operators-s7s8j" Jan 30 13:38:28 crc kubenswrapper[5039]: I0130 13:38:28.458458 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-65p4m\" (UniqueName: \"kubernetes.io/projected/901397fa-06fa-4a1c-a114-38d9896b664c-kube-api-access-65p4m\") pod \"redhat-operators-s7s8j\" (UID: \"901397fa-06fa-4a1c-a114-38d9896b664c\") " pod="openshift-marketplace/redhat-operators-s7s8j" Jan 30 13:38:28 crc kubenswrapper[5039]: I0130 13:38:28.500119 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s7s8j" Jan 30 13:38:28 crc kubenswrapper[5039]: I0130 13:38:28.756880 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s7s8j"] Jan 30 13:38:29 crc kubenswrapper[5039]: I0130 13:38:29.723079 5039 generic.go:334] "Generic (PLEG): container finished" podID="901397fa-06fa-4a1c-a114-38d9896b664c" containerID="06cd8791403f44f3a7680f00e8320991256ef53562c2ed5deb21ac8b8727c2b8" exitCode=0 Jan 30 13:38:29 crc kubenswrapper[5039]: I0130 13:38:29.723119 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7s8j" event={"ID":"901397fa-06fa-4a1c-a114-38d9896b664c","Type":"ContainerDied","Data":"06cd8791403f44f3a7680f00e8320991256ef53562c2ed5deb21ac8b8727c2b8"} Jan 30 13:38:29 crc kubenswrapper[5039]: I0130 13:38:29.723145 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7s8j" event={"ID":"901397fa-06fa-4a1c-a114-38d9896b664c","Type":"ContainerStarted","Data":"9b3ceb73ce3e8ad4f4f6c066cc239a2cb6ed25715406602e6bf446c2fd92021e"} Jan 30 13:38:31 crc kubenswrapper[5039]: I0130 13:38:31.745498 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7s8j" event={"ID":"901397fa-06fa-4a1c-a114-38d9896b664c","Type":"ContainerStarted","Data":"ec6458fcbee7e6fd920adead5f50233864b6daa0d0d61977515b347bea2b9e38"} Jan 30 13:38:32 crc kubenswrapper[5039]: I0130 13:38:32.755441 5039 generic.go:334] "Generic (PLEG): container finished" podID="901397fa-06fa-4a1c-a114-38d9896b664c" containerID="ec6458fcbee7e6fd920adead5f50233864b6daa0d0d61977515b347bea2b9e38" exitCode=0 Jan 30 13:38:32 crc kubenswrapper[5039]: I0130 13:38:32.755492 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7s8j" event={"ID":"901397fa-06fa-4a1c-a114-38d9896b664c","Type":"ContainerDied","Data":"ec6458fcbee7e6fd920adead5f50233864b6daa0d0d61977515b347bea2b9e38"} Jan 30 13:38:33 crc kubenswrapper[5039]: I0130 13:38:33.768161 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7s8j" event={"ID":"901397fa-06fa-4a1c-a114-38d9896b664c","Type":"ContainerStarted","Data":"95bf40f9d5c6dc44d21aa0ac7119dcbe2bd16cc158a0cf87e6f1b8b46fa4159f"} Jan 30 13:38:33 crc kubenswrapper[5039]: I0130 13:38:33.800839 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-s7s8j" podStartSLOduration=2.142203819 podStartE2EDuration="5.800797779s" podCreationTimestamp="2026-01-30 13:38:28 +0000 UTC" firstStartedPulling="2026-01-30 13:38:29.728789276 +0000 UTC m=+2074.389470503" lastFinishedPulling="2026-01-30 13:38:33.387383196 +0000 UTC m=+2078.048064463" observedRunningTime="2026-01-30 13:38:33.794047358 +0000 UTC m=+2078.454728605" watchObservedRunningTime="2026-01-30 13:38:33.800797779 +0000 UTC m=+2078.461479016" Jan 30 13:38:39 crc kubenswrapper[5039]: I0130 13:38:39.166960 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-s7s8j" Jan 30 
13:38:39 crc kubenswrapper[5039]: I0130 13:38:39.172724 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-s7s8j"
Jan 30 13:38:40 crc kubenswrapper[5039]: I0130 13:38:40.238302 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-s7s8j" podUID="901397fa-06fa-4a1c-a114-38d9896b664c" containerName="registry-server" probeResult="failure" output=<
Jan 30 13:38:40 crc kubenswrapper[5039]: timeout: failed to connect service ":50051" within 1s
Jan 30 13:38:40 crc kubenswrapper[5039]: >
Jan 30 13:38:48 crc kubenswrapper[5039]: I0130 13:38:48.575684 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-s7s8j"
Jan 30 13:38:48 crc kubenswrapper[5039]: I0130 13:38:48.648680 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-s7s8j"
Jan 30 13:38:48 crc kubenswrapper[5039]: I0130 13:38:48.829518 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s7s8j"]
Jan 30 13:38:50 crc kubenswrapper[5039]: I0130 13:38:50.290235 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-s7s8j" podUID="901397fa-06fa-4a1c-a114-38d9896b664c" containerName="registry-server" containerID="cri-o://95bf40f9d5c6dc44d21aa0ac7119dcbe2bd16cc158a0cf87e6f1b8b46fa4159f" gracePeriod=2
Jan 30 13:38:51 crc kubenswrapper[5039]: I0130 13:38:51.244068 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s7s8j"
Jan 30 13:38:51 crc kubenswrapper[5039]: I0130 13:38:51.297756 5039 generic.go:334] "Generic (PLEG): container finished" podID="901397fa-06fa-4a1c-a114-38d9896b664c" containerID="95bf40f9d5c6dc44d21aa0ac7119dcbe2bd16cc158a0cf87e6f1b8b46fa4159f" exitCode=0
Jan 30 13:38:51 crc kubenswrapper[5039]: I0130 13:38:51.297797 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7s8j" event={"ID":"901397fa-06fa-4a1c-a114-38d9896b664c","Type":"ContainerDied","Data":"95bf40f9d5c6dc44d21aa0ac7119dcbe2bd16cc158a0cf87e6f1b8b46fa4159f"}
Jan 30 13:38:51 crc kubenswrapper[5039]: I0130 13:38:51.297805 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s7s8j"
Jan 30 13:38:51 crc kubenswrapper[5039]: I0130 13:38:51.297828 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7s8j" event={"ID":"901397fa-06fa-4a1c-a114-38d9896b664c","Type":"ContainerDied","Data":"9b3ceb73ce3e8ad4f4f6c066cc239a2cb6ed25715406602e6bf446c2fd92021e"}
Jan 30 13:38:51 crc kubenswrapper[5039]: I0130 13:38:51.297851 5039 scope.go:117] "RemoveContainer" containerID="95bf40f9d5c6dc44d21aa0ac7119dcbe2bd16cc158a0cf87e6f1b8b46fa4159f"
Jan 30 13:38:51 crc kubenswrapper[5039]: I0130 13:38:51.303790 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65p4m\" (UniqueName: \"kubernetes.io/projected/901397fa-06fa-4a1c-a114-38d9896b664c-kube-api-access-65p4m\") pod \"901397fa-06fa-4a1c-a114-38d9896b664c\" (UID: \"901397fa-06fa-4a1c-a114-38d9896b664c\") "
Jan 30 13:38:51 crc kubenswrapper[5039]: I0130 13:38:51.303868 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/901397fa-06fa-4a1c-a114-38d9896b664c-catalog-content\") pod \"901397fa-06fa-4a1c-a114-38d9896b664c\" (UID: \"901397fa-06fa-4a1c-a114-38d9896b664c\") "
Jan 30 13:38:51 crc kubenswrapper[5039]: I0130 13:38:51.303970 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/901397fa-06fa-4a1c-a114-38d9896b664c-utilities\") pod \"901397fa-06fa-4a1c-a114-38d9896b664c\" (UID: \"901397fa-06fa-4a1c-a114-38d9896b664c\") "
Jan 30 13:38:51 crc kubenswrapper[5039]: I0130 13:38:51.305304 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/901397fa-06fa-4a1c-a114-38d9896b664c-utilities" (OuterVolumeSpecName: "utilities") pod "901397fa-06fa-4a1c-a114-38d9896b664c" (UID: "901397fa-06fa-4a1c-a114-38d9896b664c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 13:38:51 crc kubenswrapper[5039]: I0130 13:38:51.315898 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/901397fa-06fa-4a1c-a114-38d9896b664c-kube-api-access-65p4m" (OuterVolumeSpecName: "kube-api-access-65p4m") pod "901397fa-06fa-4a1c-a114-38d9896b664c" (UID: "901397fa-06fa-4a1c-a114-38d9896b664c"). InnerVolumeSpecName "kube-api-access-65p4m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:38:51 crc kubenswrapper[5039]: I0130 13:38:51.326297 5039 scope.go:117] "RemoveContainer" containerID="ec6458fcbee7e6fd920adead5f50233864b6daa0d0d61977515b347bea2b9e38"
Jan 30 13:38:51 crc kubenswrapper[5039]: I0130 13:38:51.351700 5039 scope.go:117] "RemoveContainer" containerID="06cd8791403f44f3a7680f00e8320991256ef53562c2ed5deb21ac8b8727c2b8"
Jan 30 13:38:51 crc kubenswrapper[5039]: I0130 13:38:51.381942 5039 scope.go:117] "RemoveContainer" containerID="95bf40f9d5c6dc44d21aa0ac7119dcbe2bd16cc158a0cf87e6f1b8b46fa4159f"
Jan 30 13:38:51 crc kubenswrapper[5039]: E0130 13:38:51.382556 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95bf40f9d5c6dc44d21aa0ac7119dcbe2bd16cc158a0cf87e6f1b8b46fa4159f\": container with ID starting with 95bf40f9d5c6dc44d21aa0ac7119dcbe2bd16cc158a0cf87e6f1b8b46fa4159f not found: ID does not exist" containerID="95bf40f9d5c6dc44d21aa0ac7119dcbe2bd16cc158a0cf87e6f1b8b46fa4159f"
Jan 30 13:38:51 crc kubenswrapper[5039]: I0130 13:38:51.382605 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95bf40f9d5c6dc44d21aa0ac7119dcbe2bd16cc158a0cf87e6f1b8b46fa4159f"} err="failed to get container status \"95bf40f9d5c6dc44d21aa0ac7119dcbe2bd16cc158a0cf87e6f1b8b46fa4159f\": rpc error: code = NotFound desc = could not find container \"95bf40f9d5c6dc44d21aa0ac7119dcbe2bd16cc158a0cf87e6f1b8b46fa4159f\": container with ID starting with 95bf40f9d5c6dc44d21aa0ac7119dcbe2bd16cc158a0cf87e6f1b8b46fa4159f not found: ID does not exist"
Jan 30 13:38:51 crc kubenswrapper[5039]: I0130 13:38:51.382631 5039 scope.go:117] "RemoveContainer" containerID="ec6458fcbee7e6fd920adead5f50233864b6daa0d0d61977515b347bea2b9e38"
Jan 30 13:38:51 crc kubenswrapper[5039]: E0130 13:38:51.383057 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec6458fcbee7e6fd920adead5f50233864b6daa0d0d61977515b347bea2b9e38\": container with ID starting with ec6458fcbee7e6fd920adead5f50233864b6daa0d0d61977515b347bea2b9e38 not found: ID does not exist" containerID="ec6458fcbee7e6fd920adead5f50233864b6daa0d0d61977515b347bea2b9e38"
Jan 30 13:38:51 crc kubenswrapper[5039]: I0130 13:38:51.383086 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec6458fcbee7e6fd920adead5f50233864b6daa0d0d61977515b347bea2b9e38"} err="failed to get container status \"ec6458fcbee7e6fd920adead5f50233864b6daa0d0d61977515b347bea2b9e38\": rpc error: code = NotFound desc = could not find container \"ec6458fcbee7e6fd920adead5f50233864b6daa0d0d61977515b347bea2b9e38\": container with ID starting with ec6458fcbee7e6fd920adead5f50233864b6daa0d0d61977515b347bea2b9e38 not found: ID does not exist"
Jan 30 13:38:51 crc kubenswrapper[5039]: I0130 13:38:51.383104 5039 scope.go:117] "RemoveContainer" containerID="06cd8791403f44f3a7680f00e8320991256ef53562c2ed5deb21ac8b8727c2b8"
Jan 30 13:38:51 crc kubenswrapper[5039]: E0130 13:38:51.383424 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06cd8791403f44f3a7680f00e8320991256ef53562c2ed5deb21ac8b8727c2b8\": container with ID starting with 06cd8791403f44f3a7680f00e8320991256ef53562c2ed5deb21ac8b8727c2b8 not found: ID does not exist" containerID="06cd8791403f44f3a7680f00e8320991256ef53562c2ed5deb21ac8b8727c2b8"
Jan 30 13:38:51 crc kubenswrapper[5039]: I0130 13:38:51.383450 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06cd8791403f44f3a7680f00e8320991256ef53562c2ed5deb21ac8b8727c2b8"} err="failed to get container status \"06cd8791403f44f3a7680f00e8320991256ef53562c2ed5deb21ac8b8727c2b8\": rpc error: code = NotFound desc = could not find container \"06cd8791403f44f3a7680f00e8320991256ef53562c2ed5deb21ac8b8727c2b8\": container with ID starting with 06cd8791403f44f3a7680f00e8320991256ef53562c2ed5deb21ac8b8727c2b8 not found: ID does not exist"
Jan 30 13:38:51 crc kubenswrapper[5039]: I0130 13:38:51.405422 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/901397fa-06fa-4a1c-a114-38d9896b664c-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 13:38:51 crc kubenswrapper[5039]: I0130 13:38:51.405459 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65p4m\" (UniqueName: \"kubernetes.io/projected/901397fa-06fa-4a1c-a114-38d9896b664c-kube-api-access-65p4m\") on node \"crc\" DevicePath \"\""
Jan 30 13:38:51 crc kubenswrapper[5039]: I0130 13:38:51.446245 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/901397fa-06fa-4a1c-a114-38d9896b664c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "901397fa-06fa-4a1c-a114-38d9896b664c" (UID: "901397fa-06fa-4a1c-a114-38d9896b664c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 13:38:51 crc kubenswrapper[5039]: I0130 13:38:51.507209 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/901397fa-06fa-4a1c-a114-38d9896b664c-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 13:38:51 crc kubenswrapper[5039]: I0130 13:38:51.632346 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s7s8j"]
Jan 30 13:38:51 crc kubenswrapper[5039]: I0130 13:38:51.637572 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-s7s8j"]
Jan 30 13:38:52 crc kubenswrapper[5039]: I0130 13:38:52.106229 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="901397fa-06fa-4a1c-a114-38d9896b664c" path="/var/lib/kubelet/pods/901397fa-06fa-4a1c-a114-38d9896b664c/volumes"
Jan 30 13:39:07 crc kubenswrapper[5039]: I0130 13:39:07.742245 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 13:39:07 crc kubenswrapper[5039]: I0130 13:39:07.742878 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 13:39:38 crc kubenswrapper[5039]: I0130 13:39:38.046742 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 13:39:38 crc kubenswrapper[5039]: I0130 13:39:38.047286 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 13:40:07 crc kubenswrapper[5039]: I0130 13:40:07.742003 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 13:40:07 crc kubenswrapper[5039]: I0130 13:40:07.742790 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 13:40:07 crc kubenswrapper[5039]: I0130 13:40:07.742857 5039 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t2btn"
Jan 30 13:40:07 crc kubenswrapper[5039]: I0130 13:40:07.743634 5039 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ae82dce9e68c61376f31f8ad5b2f08d422ddec78cfc4d4a0e9204123fee05617"} pod="openshift-machine-config-operator/machine-config-daemon-t2btn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 13:40:07 crc kubenswrapper[5039]: I0130 13:40:07.743737 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" containerID="cri-o://ae82dce9e68c61376f31f8ad5b2f08d422ddec78cfc4d4a0e9204123fee05617" gracePeriod=600
Jan 30 13:40:08 crc kubenswrapper[5039]: I0130 13:40:08.362503 5039 generic.go:334] "Generic (PLEG): container finished" podID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerID="ae82dce9e68c61376f31f8ad5b2f08d422ddec78cfc4d4a0e9204123fee05617" exitCode=0
Jan 30 13:40:08 crc kubenswrapper[5039]: I0130 13:40:08.362592 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerDied","Data":"ae82dce9e68c61376f31f8ad5b2f08d422ddec78cfc4d4a0e9204123fee05617"}
Jan 30 13:40:08 crc kubenswrapper[5039]: I0130 13:40:08.362883 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerStarted","Data":"b137761de9c19e6ddc3953e928e1d2b4dfce5d4b3875867a735acd621c6888ee"}
Jan 30 13:40:08 crc kubenswrapper[5039]: I0130 13:40:08.362912 5039 scope.go:117] "RemoveContainer" containerID="61f8452da6d760b5eb776cbdf6b440cda0e73329e9fe07bebb5180efabf43169"
Jan 30 13:40:38 crc kubenswrapper[5039]: I0130 13:40:38.879640 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-6888856db4-hcjvz" podUID="faf4f279-399b-4958-9a67-3a94b650bd98" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.53:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 13:41:08 crc kubenswrapper[5039]: I0130 13:41:08.506936 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5zlrt"]
Jan 30 13:41:08 crc kubenswrapper[5039]: E0130 13:41:08.509562 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="901397fa-06fa-4a1c-a114-38d9896b664c" containerName="registry-server"
Jan 30 13:41:08 crc kubenswrapper[5039]: I0130 13:41:08.509603 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="901397fa-06fa-4a1c-a114-38d9896b664c" containerName="registry-server"
Jan 30 13:41:08 crc kubenswrapper[5039]: E0130 13:41:08.509621 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="901397fa-06fa-4a1c-a114-38d9896b664c" containerName="extract-utilities"
Jan 30 13:41:08 crc kubenswrapper[5039]: I0130 13:41:08.509628 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="901397fa-06fa-4a1c-a114-38d9896b664c" containerName="extract-utilities"
Jan 30 13:41:08 crc kubenswrapper[5039]: E0130 13:41:08.509656 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="901397fa-06fa-4a1c-a114-38d9896b664c" containerName="extract-content"
Jan 30 13:41:08 crc kubenswrapper[5039]: I0130 13:41:08.509668 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="901397fa-06fa-4a1c-a114-38d9896b664c" containerName="extract-content"
Jan 30 13:41:08 crc kubenswrapper[5039]: I0130 13:41:08.509881 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="901397fa-06fa-4a1c-a114-38d9896b664c" containerName="registry-server"
Jan 30 13:41:08 crc kubenswrapper[5039]: I0130 13:41:08.510996 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5zlrt"
Jan 30 13:41:08 crc kubenswrapper[5039]: I0130 13:41:08.520245 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5zlrt"]
Jan 30 13:41:08 crc kubenswrapper[5039]: I0130 13:41:08.700573 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p74b4\" (UniqueName: \"kubernetes.io/projected/7a900223-911e-47a1-833f-c35a9b09ead7-kube-api-access-p74b4\") pod \"certified-operators-5zlrt\" (UID: \"7a900223-911e-47a1-833f-c35a9b09ead7\") " pod="openshift-marketplace/certified-operators-5zlrt"
Jan 30 13:41:08 crc kubenswrapper[5039]: I0130 13:41:08.700888 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a900223-911e-47a1-833f-c35a9b09ead7-catalog-content\") pod \"certified-operators-5zlrt\" (UID: \"7a900223-911e-47a1-833f-c35a9b09ead7\") " pod="openshift-marketplace/certified-operators-5zlrt"
Jan 30 13:41:08 crc kubenswrapper[5039]: I0130 13:41:08.700935 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a900223-911e-47a1-833f-c35a9b09ead7-utilities\") pod \"certified-operators-5zlrt\" (UID: \"7a900223-911e-47a1-833f-c35a9b09ead7\") " pod="openshift-marketplace/certified-operators-5zlrt"
Jan 30 13:41:08 crc kubenswrapper[5039]: I0130 13:41:08.802618 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a900223-911e-47a1-833f-c35a9b09ead7-catalog-content\") pod \"certified-operators-5zlrt\" (UID: \"7a900223-911e-47a1-833f-c35a9b09ead7\") " pod="openshift-marketplace/certified-operators-5zlrt"
Jan 30 13:41:08 crc kubenswrapper[5039]: I0130 13:41:08.802673 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a900223-911e-47a1-833f-c35a9b09ead7-utilities\") pod \"certified-operators-5zlrt\" (UID: \"7a900223-911e-47a1-833f-c35a9b09ead7\") " pod="openshift-marketplace/certified-operators-5zlrt"
Jan 30 13:41:08 crc kubenswrapper[5039]: I0130 13:41:08.802771 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p74b4\" (UniqueName: \"kubernetes.io/projected/7a900223-911e-47a1-833f-c35a9b09ead7-kube-api-access-p74b4\") pod \"certified-operators-5zlrt\" (UID: \"7a900223-911e-47a1-833f-c35a9b09ead7\") " pod="openshift-marketplace/certified-operators-5zlrt"
Jan 30 13:41:08 crc kubenswrapper[5039]: I0130 13:41:08.803181 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a900223-911e-47a1-833f-c35a9b09ead7-utilities\") pod \"certified-operators-5zlrt\" (UID: \"7a900223-911e-47a1-833f-c35a9b09ead7\") " pod="openshift-marketplace/certified-operators-5zlrt"
Jan 30 13:41:08 crc kubenswrapper[5039]: I0130 13:41:08.803242 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a900223-911e-47a1-833f-c35a9b09ead7-catalog-content\") pod \"certified-operators-5zlrt\" (UID: \"7a900223-911e-47a1-833f-c35a9b09ead7\") " pod="openshift-marketplace/certified-operators-5zlrt"
Jan 30 13:41:08 crc kubenswrapper[5039]: I0130 13:41:08.825959 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p74b4\" (UniqueName: \"kubernetes.io/projected/7a900223-911e-47a1-833f-c35a9b09ead7-kube-api-access-p74b4\") pod \"certified-operators-5zlrt\" (UID: \"7a900223-911e-47a1-833f-c35a9b09ead7\") " pod="openshift-marketplace/certified-operators-5zlrt"
Jan 30 13:41:08 crc kubenswrapper[5039]: I0130 13:41:08.845724 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5zlrt"
Jan 30 13:41:09 crc kubenswrapper[5039]: I0130 13:41:09.324992 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5zlrt"]
Jan 30 13:41:10 crc kubenswrapper[5039]: I0130 13:41:10.203000 5039 generic.go:334] "Generic (PLEG): container finished" podID="7a900223-911e-47a1-833f-c35a9b09ead7" containerID="58b53727f2235c8d552c10ca4cd103235534e9d053ebb3450a321f4361b9a19c" exitCode=0
Jan 30 13:41:10 crc kubenswrapper[5039]: I0130 13:41:10.203113 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5zlrt" event={"ID":"7a900223-911e-47a1-833f-c35a9b09ead7","Type":"ContainerDied","Data":"58b53727f2235c8d552c10ca4cd103235534e9d053ebb3450a321f4361b9a19c"}
Jan 30 13:41:10 crc kubenswrapper[5039]: I0130 13:41:10.203494 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5zlrt" event={"ID":"7a900223-911e-47a1-833f-c35a9b09ead7","Type":"ContainerStarted","Data":"8f36989d6255b1a9cbd838b5de3957e9f153329835edfe3656d376913b684245"}
Jan 30 13:41:12 crc kubenswrapper[5039]: I0130 13:41:12.223397 5039 generic.go:334] "Generic (PLEG): container finished" podID="7a900223-911e-47a1-833f-c35a9b09ead7" containerID="ab92e067f6030576b937c0e0d69c12c0b0edfcd0e486b080ea6155714e9b3fee" exitCode=0
Jan 30 13:41:12 crc kubenswrapper[5039]: I0130 13:41:12.223486 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5zlrt" event={"ID":"7a900223-911e-47a1-833f-c35a9b09ead7","Type":"ContainerDied","Data":"ab92e067f6030576b937c0e0d69c12c0b0edfcd0e486b080ea6155714e9b3fee"}
Jan 30 13:41:13 crc kubenswrapper[5039]: I0130 13:41:13.239325 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5zlrt" event={"ID":"7a900223-911e-47a1-833f-c35a9b09ead7","Type":"ContainerStarted","Data":"c65a96cca9c6cabc1e622d40821f3427ac7a528194a9c4a5ab8e0b9960b891c2"}
Jan 30 13:41:13 crc kubenswrapper[5039]: I0130 13:41:13.267406 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5zlrt" podStartSLOduration=2.801662396 podStartE2EDuration="5.267385974s" podCreationTimestamp="2026-01-30 13:41:08 +0000 UTC" firstStartedPulling="2026-01-30 13:41:10.205180708 +0000 UTC m=+2234.865861955" lastFinishedPulling="2026-01-30 13:41:12.670904296 +0000 UTC m=+2237.331585533" observedRunningTime="2026-01-30 13:41:13.259904583 +0000 UTC m=+2237.920585830" watchObservedRunningTime="2026-01-30 13:41:13.267385974 +0000 UTC m=+2237.928067201"
Jan 30 13:41:18 crc kubenswrapper[5039]: I0130 13:41:18.846858 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5zlrt"
Jan 30 13:41:18 crc kubenswrapper[5039]: I0130 13:41:18.847510 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5zlrt"
Jan 30 13:41:18 crc kubenswrapper[5039]: I0130 13:41:18.914152 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5zlrt"
Jan 30 13:41:19 crc kubenswrapper[5039]: I0130 13:41:19.332977 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5zlrt"
Jan 30 13:41:19 crc kubenswrapper[5039]: I0130 13:41:19.389312 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5zlrt"]
Jan 30 13:41:21 crc kubenswrapper[5039]: I0130 13:41:21.304268 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5zlrt" podUID="7a900223-911e-47a1-833f-c35a9b09ead7" containerName="registry-server" containerID="cri-o://c65a96cca9c6cabc1e622d40821f3427ac7a528194a9c4a5ab8e0b9960b891c2" gracePeriod=2
Jan 30 13:41:21 crc kubenswrapper[5039]: I0130 13:41:21.771592 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5zlrt"
Jan 30 13:41:21 crc kubenswrapper[5039]: I0130 13:41:21.933148 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p74b4\" (UniqueName: \"kubernetes.io/projected/7a900223-911e-47a1-833f-c35a9b09ead7-kube-api-access-p74b4\") pod \"7a900223-911e-47a1-833f-c35a9b09ead7\" (UID: \"7a900223-911e-47a1-833f-c35a9b09ead7\") "
Jan 30 13:41:21 crc kubenswrapper[5039]: I0130 13:41:21.934048 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a900223-911e-47a1-833f-c35a9b09ead7-catalog-content\") pod \"7a900223-911e-47a1-833f-c35a9b09ead7\" (UID: \"7a900223-911e-47a1-833f-c35a9b09ead7\") "
Jan 30 13:41:21 crc kubenswrapper[5039]: I0130 13:41:21.934339 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a900223-911e-47a1-833f-c35a9b09ead7-utilities\") pod \"7a900223-911e-47a1-833f-c35a9b09ead7\" (UID: \"7a900223-911e-47a1-833f-c35a9b09ead7\") "
Jan 30 13:41:21 crc kubenswrapper[5039]: I0130 13:41:21.935584 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a900223-911e-47a1-833f-c35a9b09ead7-utilities" (OuterVolumeSpecName: "utilities") pod "7a900223-911e-47a1-833f-c35a9b09ead7" (UID: "7a900223-911e-47a1-833f-c35a9b09ead7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 13:41:21 crc kubenswrapper[5039]: I0130 13:41:21.943798 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a900223-911e-47a1-833f-c35a9b09ead7-kube-api-access-p74b4" (OuterVolumeSpecName: "kube-api-access-p74b4") pod "7a900223-911e-47a1-833f-c35a9b09ead7" (UID: "7a900223-911e-47a1-833f-c35a9b09ead7"). InnerVolumeSpecName "kube-api-access-p74b4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:41:22 crc kubenswrapper[5039]: I0130 13:41:22.036243 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p74b4\" (UniqueName: \"kubernetes.io/projected/7a900223-911e-47a1-833f-c35a9b09ead7-kube-api-access-p74b4\") on node \"crc\" DevicePath \"\""
Jan 30 13:41:22 crc kubenswrapper[5039]: I0130 13:41:22.036313 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a900223-911e-47a1-833f-c35a9b09ead7-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 13:41:22 crc kubenswrapper[5039]: I0130 13:41:22.187316 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a900223-911e-47a1-833f-c35a9b09ead7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7a900223-911e-47a1-833f-c35a9b09ead7" (UID: "7a900223-911e-47a1-833f-c35a9b09ead7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 13:41:22 crc kubenswrapper[5039]: I0130 13:41:22.238788 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a900223-911e-47a1-833f-c35a9b09ead7-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 13:41:22 crc kubenswrapper[5039]: I0130 13:41:22.311595 5039 generic.go:334] "Generic (PLEG): container finished" podID="7a900223-911e-47a1-833f-c35a9b09ead7" containerID="c65a96cca9c6cabc1e622d40821f3427ac7a528194a9c4a5ab8e0b9960b891c2" exitCode=0
Jan 30 13:41:22 crc kubenswrapper[5039]: I0130 13:41:22.311646 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5zlrt"
Jan 30 13:41:22 crc kubenswrapper[5039]: I0130 13:41:22.311640 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5zlrt" event={"ID":"7a900223-911e-47a1-833f-c35a9b09ead7","Type":"ContainerDied","Data":"c65a96cca9c6cabc1e622d40821f3427ac7a528194a9c4a5ab8e0b9960b891c2"}
Jan 30 13:41:22 crc kubenswrapper[5039]: I0130 13:41:22.311774 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5zlrt" event={"ID":"7a900223-911e-47a1-833f-c35a9b09ead7","Type":"ContainerDied","Data":"8f36989d6255b1a9cbd838b5de3957e9f153329835edfe3656d376913b684245"}
Jan 30 13:41:22 crc kubenswrapper[5039]: I0130 13:41:22.311793 5039 scope.go:117] "RemoveContainer" containerID="c65a96cca9c6cabc1e622d40821f3427ac7a528194a9c4a5ab8e0b9960b891c2"
Jan 30 13:41:22 crc kubenswrapper[5039]: I0130 13:41:22.331820 5039 scope.go:117] "RemoveContainer" containerID="ab92e067f6030576b937c0e0d69c12c0b0edfcd0e486b080ea6155714e9b3fee"
Jan 30 13:41:22 crc kubenswrapper[5039]: I0130 13:41:22.352641 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5zlrt"]
Jan 30 13:41:22 crc kubenswrapper[5039]: I0130 13:41:22.358199 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5zlrt"]
Jan 30 13:41:22 crc kubenswrapper[5039]: I0130 13:41:22.377726 5039 scope.go:117] "RemoveContainer" containerID="58b53727f2235c8d552c10ca4cd103235534e9d053ebb3450a321f4361b9a19c"
Jan 30 13:41:22 crc kubenswrapper[5039]: I0130 13:41:22.393938 5039 scope.go:117] "RemoveContainer" containerID="c65a96cca9c6cabc1e622d40821f3427ac7a528194a9c4a5ab8e0b9960b891c2"
Jan 30 13:41:22 crc kubenswrapper[5039]: E0130 13:41:22.394341 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c65a96cca9c6cabc1e622d40821f3427ac7a528194a9c4a5ab8e0b9960b891c2\": container with ID starting with c65a96cca9c6cabc1e622d40821f3427ac7a528194a9c4a5ab8e0b9960b891c2 not found: ID does not exist" containerID="c65a96cca9c6cabc1e622d40821f3427ac7a528194a9c4a5ab8e0b9960b891c2"
Jan 30 13:41:22 crc kubenswrapper[5039]: I0130 13:41:22.394378 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c65a96cca9c6cabc1e622d40821f3427ac7a528194a9c4a5ab8e0b9960b891c2"} err="failed to get container status \"c65a96cca9c6cabc1e622d40821f3427ac7a528194a9c4a5ab8e0b9960b891c2\": rpc error: code = NotFound desc = could not find container \"c65a96cca9c6cabc1e622d40821f3427ac7a528194a9c4a5ab8e0b9960b891c2\": container with ID starting with c65a96cca9c6cabc1e622d40821f3427ac7a528194a9c4a5ab8e0b9960b891c2 not found: ID does not exist"
Jan 30 13:41:22 crc kubenswrapper[5039]: I0130 13:41:22.394404 5039 scope.go:117] "RemoveContainer" containerID="ab92e067f6030576b937c0e0d69c12c0b0edfcd0e486b080ea6155714e9b3fee"
Jan 30 13:41:22 crc kubenswrapper[5039]: E0130 13:41:22.394862 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab92e067f6030576b937c0e0d69c12c0b0edfcd0e486b080ea6155714e9b3fee\": container with ID starting with ab92e067f6030576b937c0e0d69c12c0b0edfcd0e486b080ea6155714e9b3fee not found: ID does not exist" containerID="ab92e067f6030576b937c0e0d69c12c0b0edfcd0e486b080ea6155714e9b3fee"
Jan 30 13:41:22 crc kubenswrapper[5039]: I0130 13:41:22.394912 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab92e067f6030576b937c0e0d69c12c0b0edfcd0e486b080ea6155714e9b3fee"} err="failed to get container status \"ab92e067f6030576b937c0e0d69c12c0b0edfcd0e486b080ea6155714e9b3fee\": rpc error: code = NotFound desc = could not find container \"ab92e067f6030576b937c0e0d69c12c0b0edfcd0e486b080ea6155714e9b3fee\": container with ID starting with ab92e067f6030576b937c0e0d69c12c0b0edfcd0e486b080ea6155714e9b3fee not found: ID does not exist"
Jan 30 13:41:22 crc kubenswrapper[5039]: I0130 13:41:22.394942 5039 scope.go:117] "RemoveContainer" containerID="58b53727f2235c8d552c10ca4cd103235534e9d053ebb3450a321f4361b9a19c"
Jan 30 13:41:22 crc kubenswrapper[5039]: E0130 13:41:22.395323 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58b53727f2235c8d552c10ca4cd103235534e9d053ebb3450a321f4361b9a19c\": container with ID starting with 58b53727f2235c8d552c10ca4cd103235534e9d053ebb3450a321f4361b9a19c not found: ID does not exist" containerID="58b53727f2235c8d552c10ca4cd103235534e9d053ebb3450a321f4361b9a19c"
Jan 30 13:41:22 crc kubenswrapper[5039]: I0130 13:41:22.395359 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58b53727f2235c8d552c10ca4cd103235534e9d053ebb3450a321f4361b9a19c"} err="failed to get container status \"58b53727f2235c8d552c10ca4cd103235534e9d053ebb3450a321f4361b9a19c\": rpc error: code = NotFound desc = could not find container \"58b53727f2235c8d552c10ca4cd103235534e9d053ebb3450a321f4361b9a19c\": container with ID starting with 58b53727f2235c8d552c10ca4cd103235534e9d053ebb3450a321f4361b9a19c not found: ID does not exist"
Jan 30 13:41:24 crc kubenswrapper[5039]: I0130 13:41:24.104851 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a900223-911e-47a1-833f-c35a9b09ead7" path="/var/lib/kubelet/pods/7a900223-911e-47a1-833f-c35a9b09ead7/volumes"
Jan 30 13:41:25 crc kubenswrapper[5039]: I0130 13:41:25.587063 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-m5dwm"]
Jan 30 13:41:25 crc kubenswrapper[5039]: E0130 13:41:25.587711 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a900223-911e-47a1-833f-c35a9b09ead7" containerName="registry-server"
Jan 30 13:41:25 crc kubenswrapper[5039]: I0130 13:41:25.587729 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a900223-911e-47a1-833f-c35a9b09ead7" containerName="registry-server"
Jan 30 13:41:25 crc kubenswrapper[5039]: E0130 13:41:25.587749 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a900223-911e-47a1-833f-c35a9b09ead7" containerName="extract-utilities"
Jan 30 13:41:25 crc kubenswrapper[5039]: I0130 13:41:25.587757 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a900223-911e-47a1-833f-c35a9b09ead7" containerName="extract-utilities"
Jan 30 13:41:25 crc kubenswrapper[5039]: E0130 13:41:25.587803 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a900223-911e-47a1-833f-c35a9b09ead7" containerName="extract-content"
Jan 30 13:41:25 crc kubenswrapper[5039]: I0130 13:41:25.587812 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a900223-911e-47a1-833f-c35a9b09ead7" containerName="extract-content"
Jan 30 13:41:25 crc kubenswrapper[5039]: I0130 13:41:25.588035 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a900223-911e-47a1-833f-c35a9b09ead7" containerName="registry-server"
Jan 30 13:41:25 crc kubenswrapper[5039]: I0130 13:41:25.589238 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m5dwm"
Jan 30 13:41:25 crc kubenswrapper[5039]: I0130 13:41:25.602848 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m5dwm"]
Jan 30 13:41:25 crc kubenswrapper[5039]: I0130 13:41:25.692216 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e196d3e1-fad7-4fb0-889e-a668613a6ffc-utilities\") pod \"redhat-marketplace-m5dwm\" (UID: \"e196d3e1-fad7-4fb0-889e-a668613a6ffc\") " pod="openshift-marketplace/redhat-marketplace-m5dwm"
Jan 30 13:41:25 crc kubenswrapper[5039]: I0130 13:41:25.692300 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e196d3e1-fad7-4fb0-889e-a668613a6ffc-catalog-content\") pod \"redhat-marketplace-m5dwm\" (UID: \"e196d3e1-fad7-4fb0-889e-a668613a6ffc\") " pod="openshift-marketplace/redhat-marketplace-m5dwm"
Jan 30 13:41:25 crc kubenswrapper[5039]: I0130 13:41:25.692328 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjkpp\" (UniqueName: \"kubernetes.io/projected/e196d3e1-fad7-4fb0-889e-a668613a6ffc-kube-api-access-qjkpp\") pod \"redhat-marketplace-m5dwm\" (UID: \"e196d3e1-fad7-4fb0-889e-a668613a6ffc\") " pod="openshift-marketplace/redhat-marketplace-m5dwm"
Jan 30 13:41:25 crc kubenswrapper[5039]: I0130 13:41:25.793499 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e196d3e1-fad7-4fb0-889e-a668613a6ffc-catalog-content\") pod \"redhat-marketplace-m5dwm\" (UID: \"e196d3e1-fad7-4fb0-889e-a668613a6ffc\") " pod="openshift-marketplace/redhat-marketplace-m5dwm"
Jan 30 13:41:25 crc kubenswrapper[5039]: I0130 13:41:25.793547 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjkpp\" (UniqueName: \"kubernetes.io/projected/e196d3e1-fad7-4fb0-889e-a668613a6ffc-kube-api-access-qjkpp\") pod \"redhat-marketplace-m5dwm\" (UID: \"e196d3e1-fad7-4fb0-889e-a668613a6ffc\") " pod="openshift-marketplace/redhat-marketplace-m5dwm"
Jan 30 13:41:25 crc kubenswrapper[5039]: I0130 13:41:25.793622 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e196d3e1-fad7-4fb0-889e-a668613a6ffc-utilities\") pod \"redhat-marketplace-m5dwm\" (UID: \"e196d3e1-fad7-4fb0-889e-a668613a6ffc\") " pod="openshift-marketplace/redhat-marketplace-m5dwm"
Jan 30 13:41:25 crc kubenswrapper[5039]: I0130 13:41:25.793916 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e196d3e1-fad7-4fb0-889e-a668613a6ffc-catalog-content\") pod \"redhat-marketplace-m5dwm\" (UID: \"e196d3e1-fad7-4fb0-889e-a668613a6ffc\") " pod="openshift-marketplace/redhat-marketplace-m5dwm"
Jan 30 13:41:25 crc kubenswrapper[5039]: I0130 13:41:25.794263 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e196d3e1-fad7-4fb0-889e-a668613a6ffc-utilities\") pod \"redhat-marketplace-m5dwm\" (UID: \"e196d3e1-fad7-4fb0-889e-a668613a6ffc\") " pod="openshift-marketplace/redhat-marketplace-m5dwm"
Jan 30 13:41:25 crc kubenswrapper[5039]: I0130 13:41:25.814692 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjkpp\" (UniqueName: \"kubernetes.io/projected/e196d3e1-fad7-4fb0-889e-a668613a6ffc-kube-api-access-qjkpp\") pod \"redhat-marketplace-m5dwm\" (UID: \"e196d3e1-fad7-4fb0-889e-a668613a6ffc\") " pod="openshift-marketplace/redhat-marketplace-m5dwm"
Jan 30 13:41:25 crc kubenswrapper[5039]: I0130 13:41:25.904352 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m5dwm"
Jan 30 13:41:26 crc kubenswrapper[5039]: I0130 13:41:26.379407 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m5dwm"]
Jan 30 13:41:27 crc kubenswrapper[5039]: I0130 13:41:27.356468 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m5dwm" event={"ID":"e196d3e1-fad7-4fb0-889e-a668613a6ffc","Type":"ContainerDied","Data":"56a6426791d13a9c45d70f03962582f3f043c908f5383fd2aa840e91cc2d37df"}
Jan 30 13:41:27 crc kubenswrapper[5039]: I0130 13:41:27.356199 5039 generic.go:334] "Generic (PLEG): container finished" podID="e196d3e1-fad7-4fb0-889e-a668613a6ffc" containerID="56a6426791d13a9c45d70f03962582f3f043c908f5383fd2aa840e91cc2d37df" exitCode=0
Jan 30 13:41:27 crc kubenswrapper[5039]: I0130 13:41:27.357973 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m5dwm" event={"ID":"e196d3e1-fad7-4fb0-889e-a668613a6ffc","Type":"ContainerStarted","Data":"85a7a55c36fb573d9a6759b9024ecaa6189e35227c2a90c9aaf7af42b55a5adc"}
Jan 30 13:41:28 crc kubenswrapper[5039]: I0130 13:41:28.365929 5039 generic.go:334] "Generic (PLEG): container finished" podID="e196d3e1-fad7-4fb0-889e-a668613a6ffc" containerID="9edf6f152907f32c5adedfea0b52278af206e445aecf508918d60ccc00e3a28c" exitCode=0
Jan 30 13:41:28 crc kubenswrapper[5039]: I0130 13:41:28.366062 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m5dwm" event={"ID":"e196d3e1-fad7-4fb0-889e-a668613a6ffc","Type":"ContainerDied","Data":"9edf6f152907f32c5adedfea0b52278af206e445aecf508918d60ccc00e3a28c"}
Jan 30 13:41:29 crc kubenswrapper[5039]: I0130 13:41:29.379419 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m5dwm" event={"ID":"e196d3e1-fad7-4fb0-889e-a668613a6ffc","Type":"ContainerStarted","Data":"ef259cc8345366ef9cf34cff4d25765f27d74f1a658a7153d62c14ec550b2665"}
Jan 30 13:41:29 crc kubenswrapper[5039]: I0130 13:41:29.409758 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-m5dwm" podStartSLOduration=2.870656046 podStartE2EDuration="4.409724973s" podCreationTimestamp="2026-01-30 13:41:25 +0000 UTC" firstStartedPulling="2026-01-30 13:41:27.357991092 +0000 UTC m=+2252.018672329" lastFinishedPulling="2026-01-30 13:41:28.897059989 +0000 UTC m=+2253.557741256" observedRunningTime="2026-01-30 13:41:29.402838217 +0000 UTC m=+2254.063519474" watchObservedRunningTime="2026-01-30 13:41:29.409724973 +0000 UTC m=+2254.070406240"
Jan 30 13:41:35 crc kubenswrapper[5039]: I0130 13:41:35.904701 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-m5dwm"
Jan 30 13:41:35 crc kubenswrapper[5039]: I0130 13:41:35.905117 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-m5dwm"
Jan 30 13:41:35 crc kubenswrapper[5039]: I0130 13:41:35.975710 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-m5dwm"
Jan 30 13:41:36 crc kubenswrapper[5039]: I0130 13:41:36.497835 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-m5dwm"
Jan 30 13:41:36 crc kubenswrapper[5039]: I0130 13:41:36.761477 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m5dwm"]
Jan 30 13:41:38 crc kubenswrapper[5039]: I0130 13:41:38.457607 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-m5dwm" podUID="e196d3e1-fad7-4fb0-889e-a668613a6ffc" containerName="registry-server" containerID="cri-o://ef259cc8345366ef9cf34cff4d25765f27d74f1a658a7153d62c14ec550b2665" gracePeriod=2
Jan 30 13:41:38 crc kubenswrapper[5039]: I0130 13:41:38.920622 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m5dwm"
Jan 30 13:41:39 crc kubenswrapper[5039]: I0130 13:41:39.095023 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjkpp\" (UniqueName: \"kubernetes.io/projected/e196d3e1-fad7-4fb0-889e-a668613a6ffc-kube-api-access-qjkpp\") pod \"e196d3e1-fad7-4fb0-889e-a668613a6ffc\" (UID: \"e196d3e1-fad7-4fb0-889e-a668613a6ffc\") "
Jan 30 13:41:39 crc kubenswrapper[5039]: I0130 13:41:39.095273 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e196d3e1-fad7-4fb0-889e-a668613a6ffc-utilities\") pod \"e196d3e1-fad7-4fb0-889e-a668613a6ffc\" (UID: \"e196d3e1-fad7-4fb0-889e-a668613a6ffc\") "
Jan 30 13:41:39 crc kubenswrapper[5039]: I0130 13:41:39.095348 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e196d3e1-fad7-4fb0-889e-a668613a6ffc-catalog-content\") pod \"e196d3e1-fad7-4fb0-889e-a668613a6ffc\" (UID: \"e196d3e1-fad7-4fb0-889e-a668613a6ffc\") "
Jan 30 13:41:39 crc kubenswrapper[5039]: I0130 13:41:39.096630 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e196d3e1-fad7-4fb0-889e-a668613a6ffc-utilities" (OuterVolumeSpecName: "utilities") pod "e196d3e1-fad7-4fb0-889e-a668613a6ffc" (UID: "e196d3e1-fad7-4fb0-889e-a668613a6ffc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 13:41:39 crc kubenswrapper[5039]: I0130 13:41:39.102269 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e196d3e1-fad7-4fb0-889e-a668613a6ffc-kube-api-access-qjkpp" (OuterVolumeSpecName: "kube-api-access-qjkpp") pod "e196d3e1-fad7-4fb0-889e-a668613a6ffc" (UID: "e196d3e1-fad7-4fb0-889e-a668613a6ffc"). InnerVolumeSpecName "kube-api-access-qjkpp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:41:39 crc kubenswrapper[5039]: I0130 13:41:39.119439 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e196d3e1-fad7-4fb0-889e-a668613a6ffc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e196d3e1-fad7-4fb0-889e-a668613a6ffc" (UID: "e196d3e1-fad7-4fb0-889e-a668613a6ffc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 13:41:39 crc kubenswrapper[5039]: I0130 13:41:39.197239 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjkpp\" (UniqueName: \"kubernetes.io/projected/e196d3e1-fad7-4fb0-889e-a668613a6ffc-kube-api-access-qjkpp\") on node \"crc\" DevicePath \"\""
Jan 30 13:41:39 crc kubenswrapper[5039]: I0130 13:41:39.197286 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e196d3e1-fad7-4fb0-889e-a668613a6ffc-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 13:41:39 crc kubenswrapper[5039]: I0130 13:41:39.197304 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e196d3e1-fad7-4fb0-889e-a668613a6ffc-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 13:41:39 crc kubenswrapper[5039]: I0130 13:41:39.469327 5039 generic.go:334] "Generic (PLEG): container finished" podID="e196d3e1-fad7-4fb0-889e-a668613a6ffc" containerID="ef259cc8345366ef9cf34cff4d25765f27d74f1a658a7153d62c14ec550b2665" exitCode=0
Jan 30 13:41:39 crc kubenswrapper[5039]: I0130 13:41:39.469387 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m5dwm" event={"ID":"e196d3e1-fad7-4fb0-889e-a668613a6ffc","Type":"ContainerDied","Data":"ef259cc8345366ef9cf34cff4d25765f27d74f1a658a7153d62c14ec550b2665"}
Jan 30 13:41:39 crc kubenswrapper[5039]: I0130 13:41:39.469430 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m5dwm" event={"ID":"e196d3e1-fad7-4fb0-889e-a668613a6ffc","Type":"ContainerDied","Data":"85a7a55c36fb573d9a6759b9024ecaa6189e35227c2a90c9aaf7af42b55a5adc"}
Jan 30 13:41:39 crc kubenswrapper[5039]: I0130 13:41:39.469431 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m5dwm"
Jan 30 13:41:39 crc kubenswrapper[5039]: I0130 13:41:39.469476 5039 scope.go:117] "RemoveContainer" containerID="ef259cc8345366ef9cf34cff4d25765f27d74f1a658a7153d62c14ec550b2665"
Jan 30 13:41:39 crc kubenswrapper[5039]: I0130 13:41:39.501448 5039 scope.go:117] "RemoveContainer" containerID="9edf6f152907f32c5adedfea0b52278af206e445aecf508918d60ccc00e3a28c"
Jan 30 13:41:39 crc kubenswrapper[5039]: I0130 13:41:39.510827 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m5dwm"]
Jan 30 13:41:39 crc kubenswrapper[5039]: I0130 13:41:39.524735 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-m5dwm"]
Jan 30 13:41:39 crc kubenswrapper[5039]: I0130 13:41:39.525659 5039 scope.go:117] "RemoveContainer" containerID="56a6426791d13a9c45d70f03962582f3f043c908f5383fd2aa840e91cc2d37df"
Jan 30 13:41:39 crc kubenswrapper[5039]: I0130 13:41:39.563856 5039 scope.go:117] "RemoveContainer" containerID="ef259cc8345366ef9cf34cff4d25765f27d74f1a658a7153d62c14ec550b2665"
Jan 30 13:41:39 crc kubenswrapper[5039]: E0130 13:41:39.564388 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef259cc8345366ef9cf34cff4d25765f27d74f1a658a7153d62c14ec550b2665\": container with ID starting with ef259cc8345366ef9cf34cff4d25765f27d74f1a658a7153d62c14ec550b2665 not found: ID does not exist" containerID="ef259cc8345366ef9cf34cff4d25765f27d74f1a658a7153d62c14ec550b2665"
Jan 30 13:41:39 crc kubenswrapper[5039]: I0130 13:41:39.564437 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef259cc8345366ef9cf34cff4d25765f27d74f1a658a7153d62c14ec550b2665"} err="failed to get container status \"ef259cc8345366ef9cf34cff4d25765f27d74f1a658a7153d62c14ec550b2665\": rpc error: code = NotFound desc = could not find container \"ef259cc8345366ef9cf34cff4d25765f27d74f1a658a7153d62c14ec550b2665\": container with ID starting with ef259cc8345366ef9cf34cff4d25765f27d74f1a658a7153d62c14ec550b2665 not found: ID does not exist"
Jan 30 13:41:39 crc kubenswrapper[5039]: I0130 13:41:39.564469 5039 scope.go:117] "RemoveContainer" containerID="9edf6f152907f32c5adedfea0b52278af206e445aecf508918d60ccc00e3a28c"
Jan 30 13:41:39 crc kubenswrapper[5039]: E0130 13:41:39.564799 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9edf6f152907f32c5adedfea0b52278af206e445aecf508918d60ccc00e3a28c\": container with ID starting with 9edf6f152907f32c5adedfea0b52278af206e445aecf508918d60ccc00e3a28c not found: ID does not exist" containerID="9edf6f152907f32c5adedfea0b52278af206e445aecf508918d60ccc00e3a28c"
Jan 30 13:41:39 crc kubenswrapper[5039]: I0130 13:41:39.564830 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9edf6f152907f32c5adedfea0b52278af206e445aecf508918d60ccc00e3a28c"} err="failed to get container status \"9edf6f152907f32c5adedfea0b52278af206e445aecf508918d60ccc00e3a28c\": rpc error: code = NotFound desc = could not find container \"9edf6f152907f32c5adedfea0b52278af206e445aecf508918d60ccc00e3a28c\": container with ID starting with 9edf6f152907f32c5adedfea0b52278af206e445aecf508918d60ccc00e3a28c not found: ID does not exist"
Jan 30 13:41:39 crc kubenswrapper[5039]: I0130 13:41:39.564853 5039 scope.go:117] "RemoveContainer" containerID="56a6426791d13a9c45d70f03962582f3f043c908f5383fd2aa840e91cc2d37df"
Jan 30 13:41:39 crc kubenswrapper[5039]: E0130 13:41:39.565119 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56a6426791d13a9c45d70f03962582f3f043c908f5383fd2aa840e91cc2d37df\": container with ID starting with 56a6426791d13a9c45d70f03962582f3f043c908f5383fd2aa840e91cc2d37df not found: ID does not exist" containerID="56a6426791d13a9c45d70f03962582f3f043c908f5383fd2aa840e91cc2d37df"
Jan 30 13:41:39 crc kubenswrapper[5039]: I0130 13:41:39.565150 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56a6426791d13a9c45d70f03962582f3f043c908f5383fd2aa840e91cc2d37df"} err="failed to get container status \"56a6426791d13a9c45d70f03962582f3f043c908f5383fd2aa840e91cc2d37df\": rpc error: code = NotFound desc = could not find container \"56a6426791d13a9c45d70f03962582f3f043c908f5383fd2aa840e91cc2d37df\": container with ID starting with 56a6426791d13a9c45d70f03962582f3f043c908f5383fd2aa840e91cc2d37df not found: ID does not exist"
Jan 30 13:41:40 crc kubenswrapper[5039]: I0130 13:41:40.106570 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e196d3e1-fad7-4fb0-889e-a668613a6ffc" path="/var/lib/kubelet/pods/e196d3e1-fad7-4fb0-889e-a668613a6ffc/volumes"
Jan 30 13:42:37 crc kubenswrapper[5039]: I0130 13:42:37.742103 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 13:42:37 crc kubenswrapper[5039]: I0130 13:42:37.742727 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 13:43:07 crc kubenswrapper[5039]: I0130 13:43:07.742537 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 13:43:07 crc kubenswrapper[5039]: I0130 13:43:07.743000 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 13:43:37 crc kubenswrapper[5039]: I0130 13:43:37.742855 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 13:43:37 crc kubenswrapper[5039]: I0130 13:43:37.743610 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 13:43:37 crc kubenswrapper[5039]: I0130 13:43:37.743721 5039 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t2btn"
Jan 30 13:43:37 crc kubenswrapper[5039]: I0130 13:43:37.744900 5039 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b137761de9c19e6ddc3953e928e1d2b4dfce5d4b3875867a735acd621c6888ee"} pod="openshift-machine-config-operator/machine-config-daemon-t2btn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 13:43:37 crc kubenswrapper[5039]: I0130 13:43:37.745053 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" containerID="cri-o://b137761de9c19e6ddc3953e928e1d2b4dfce5d4b3875867a735acd621c6888ee" gracePeriod=600
Jan 30 13:43:38 crc kubenswrapper[5039]: E0130 13:43:38.132176 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5"
Jan 30 13:43:38 crc kubenswrapper[5039]: I0130 13:43:38.469374 5039 generic.go:334] "Generic (PLEG): container finished" podID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerID="b137761de9c19e6ddc3953e928e1d2b4dfce5d4b3875867a735acd621c6888ee" exitCode=0
Jan 30 13:43:38 crc kubenswrapper[5039]: I0130 13:43:38.469427 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerDied","Data":"b137761de9c19e6ddc3953e928e1d2b4dfce5d4b3875867a735acd621c6888ee"}
Jan 30 13:43:38 crc kubenswrapper[5039]: I0130 13:43:38.469474 5039 scope.go:117] "RemoveContainer" containerID="ae82dce9e68c61376f31f8ad5b2f08d422ddec78cfc4d4a0e9204123fee05617"
Jan 30 13:43:38 crc kubenswrapper[5039]: I0130 13:43:38.469905 5039 scope.go:117] "RemoveContainer" containerID="b137761de9c19e6ddc3953e928e1d2b4dfce5d4b3875867a735acd621c6888ee"
Jan 30 13:43:38 crc kubenswrapper[5039]: E0130 13:43:38.470131 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5"
Jan 30 13:43:52 crc kubenswrapper[5039]: I0130 13:43:52.093660 5039 scope.go:117] "RemoveContainer" containerID="b137761de9c19e6ddc3953e928e1d2b4dfce5d4b3875867a735acd621c6888ee"
Jan 30 13:43:52 crc kubenswrapper[5039]: E0130 13:43:52.094891 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5"
Jan 30 13:44:04 crc kubenswrapper[5039]: I0130 13:44:04.094066 5039 scope.go:117] "RemoveContainer" containerID="b137761de9c19e6ddc3953e928e1d2b4dfce5d4b3875867a735acd621c6888ee"
Jan 30 13:44:04 crc kubenswrapper[5039]: E0130 13:44:04.095192 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5"
Jan 30 13:44:19 crc kubenswrapper[5039]: I0130 13:44:19.093749 5039 scope.go:117] "RemoveContainer" containerID="b137761de9c19e6ddc3953e928e1d2b4dfce5d4b3875867a735acd621c6888ee"
Jan 30 13:44:19 crc kubenswrapper[5039]: E0130 13:44:19.094462 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5"
Jan 30 13:44:33 crc kubenswrapper[5039]: I0130 13:44:33.093843 5039 scope.go:117] "RemoveContainer" containerID="b137761de9c19e6ddc3953e928e1d2b4dfce5d4b3875867a735acd621c6888ee"
Jan 30 13:44:33 crc kubenswrapper[5039]: E0130 13:44:33.094679 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5"
Jan 30 13:44:44 crc kubenswrapper[5039]: I0130 13:44:44.093731 5039 scope.go:117] "RemoveContainer" containerID="b137761de9c19e6ddc3953e928e1d2b4dfce5d4b3875867a735acd621c6888ee"
Jan 30 13:44:44 crc kubenswrapper[5039]: E0130 13:44:44.095074 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5"
Jan 30 13:44:55 crc kubenswrapper[5039]: I0130 13:44:55.094256 5039 scope.go:117] "RemoveContainer" containerID="b137761de9c19e6ddc3953e928e1d2b4dfce5d4b3875867a735acd621c6888ee"
Jan 30 13:44:55 crc kubenswrapper[5039]: E0130 13:44:55.095103 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5"
podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:45:00 crc kubenswrapper[5039]: I0130 13:45:00.161827 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496345-8ww5h"] Jan 30 13:45:00 crc kubenswrapper[5039]: E0130 13:45:00.162552 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e196d3e1-fad7-4fb0-889e-a668613a6ffc" containerName="registry-server" Jan 30 13:45:00 crc kubenswrapper[5039]: I0130 13:45:00.162570 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="e196d3e1-fad7-4fb0-889e-a668613a6ffc" containerName="registry-server" Jan 30 13:45:00 crc kubenswrapper[5039]: E0130 13:45:00.162583 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e196d3e1-fad7-4fb0-889e-a668613a6ffc" containerName="extract-utilities" Jan 30 13:45:00 crc kubenswrapper[5039]: I0130 13:45:00.162592 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="e196d3e1-fad7-4fb0-889e-a668613a6ffc" containerName="extract-utilities" Jan 30 13:45:00 crc kubenswrapper[5039]: E0130 13:45:00.162611 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e196d3e1-fad7-4fb0-889e-a668613a6ffc" containerName="extract-content" Jan 30 13:45:00 crc kubenswrapper[5039]: I0130 13:45:00.162618 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="e196d3e1-fad7-4fb0-889e-a668613a6ffc" containerName="extract-content" Jan 30 13:45:00 crc kubenswrapper[5039]: I0130 13:45:00.162768 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="e196d3e1-fad7-4fb0-889e-a668613a6ffc" containerName="registry-server" Jan 30 13:45:00 crc kubenswrapper[5039]: I0130 13:45:00.163399 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496345-8ww5h" Jan 30 13:45:00 crc kubenswrapper[5039]: I0130 13:45:00.166467 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 13:45:00 crc kubenswrapper[5039]: I0130 13:45:00.167765 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 13:45:00 crc kubenswrapper[5039]: I0130 13:45:00.184421 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496345-8ww5h"] Jan 30 13:45:00 crc kubenswrapper[5039]: I0130 13:45:00.288323 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7fmt\" (UniqueName: \"kubernetes.io/projected/7e85d509-7158-47c2-a64b-25b0d8964124-kube-api-access-t7fmt\") pod \"collect-profiles-29496345-8ww5h\" (UID: \"7e85d509-7158-47c2-a64b-25b0d8964124\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496345-8ww5h" Jan 30 13:45:00 crc kubenswrapper[5039]: I0130 13:45:00.288618 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7e85d509-7158-47c2-a64b-25b0d8964124-secret-volume\") pod \"collect-profiles-29496345-8ww5h\" (UID: \"7e85d509-7158-47c2-a64b-25b0d8964124\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496345-8ww5h" Jan 30 13:45:00 crc kubenswrapper[5039]: I0130 13:45:00.288812 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/7e85d509-7158-47c2-a64b-25b0d8964124-config-volume\") pod \"collect-profiles-29496345-8ww5h\" (UID: \"7e85d509-7158-47c2-a64b-25b0d8964124\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496345-8ww5h" Jan 30 13:45:00 crc kubenswrapper[5039]: I0130 13:45:00.390885 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e85d509-7158-47c2-a64b-25b0d8964124-config-volume\") pod \"collect-profiles-29496345-8ww5h\" (UID: \"7e85d509-7158-47c2-a64b-25b0d8964124\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496345-8ww5h" Jan 30 13:45:00 crc kubenswrapper[5039]: I0130 13:45:00.391077 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7fmt\" (UniqueName: \"kubernetes.io/projected/7e85d509-7158-47c2-a64b-25b0d8964124-kube-api-access-t7fmt\") pod \"collect-profiles-29496345-8ww5h\" (UID: \"7e85d509-7158-47c2-a64b-25b0d8964124\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496345-8ww5h" Jan 30 13:45:00 crc kubenswrapper[5039]: I0130 13:45:00.391143 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7e85d509-7158-47c2-a64b-25b0d8964124-secret-volume\") pod \"collect-profiles-29496345-8ww5h\" (UID: \"7e85d509-7158-47c2-a64b-25b0d8964124\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496345-8ww5h" Jan 30 13:45:00 crc kubenswrapper[5039]: I0130 13:45:00.392169 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e85d509-7158-47c2-a64b-25b0d8964124-config-volume\") pod \"collect-profiles-29496345-8ww5h\" (UID: \"7e85d509-7158-47c2-a64b-25b0d8964124\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496345-8ww5h" Jan 30 13:45:00 crc kubenswrapper[5039]: I0130 13:45:00.398434 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7e85d509-7158-47c2-a64b-25b0d8964124-secret-volume\") pod \"collect-profiles-29496345-8ww5h\" (UID: \"7e85d509-7158-47c2-a64b-25b0d8964124\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496345-8ww5h" Jan 30 13:45:00 crc kubenswrapper[5039]: I0130 13:45:00.422140 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7fmt\" (UniqueName: \"kubernetes.io/projected/7e85d509-7158-47c2-a64b-25b0d8964124-kube-api-access-t7fmt\") pod \"collect-profiles-29496345-8ww5h\" (UID: \"7e85d509-7158-47c2-a64b-25b0d8964124\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496345-8ww5h" Jan 30 13:45:00 crc kubenswrapper[5039]: I0130 13:45:00.493613 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496345-8ww5h" Jan 30 13:45:00 crc kubenswrapper[5039]: I0130 13:45:00.906763 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496345-8ww5h"] Jan 30 13:45:01 crc kubenswrapper[5039]: I0130 13:45:01.222453 5039 generic.go:334] "Generic (PLEG): container finished" podID="7e85d509-7158-47c2-a64b-25b0d8964124" containerID="947122b71d39afefed0205512e71b75628a98b480c939ec29485b07a4bf7e0c9" exitCode=0 Jan 30 13:45:01 crc kubenswrapper[5039]: I0130 13:45:01.222533 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496345-8ww5h" event={"ID":"7e85d509-7158-47c2-a64b-25b0d8964124","Type":"ContainerDied","Data":"947122b71d39afefed0205512e71b75628a98b480c939ec29485b07a4bf7e0c9"} Jan 30 13:45:01 crc kubenswrapper[5039]: I0130 13:45:01.222566 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496345-8ww5h" event={"ID":"7e85d509-7158-47c2-a64b-25b0d8964124","Type":"ContainerStarted","Data":"c070717a65593c6e16f2662b81722a1c662381b150e5472c17395646b73cdeca"} Jan 30 13:45:02 crc kubenswrapper[5039]: I0130 13:45:02.483046 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496345-8ww5h" Jan 30 13:45:02 crc kubenswrapper[5039]: I0130 13:45:02.625004 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7e85d509-7158-47c2-a64b-25b0d8964124-secret-volume\") pod \"7e85d509-7158-47c2-a64b-25b0d8964124\" (UID: \"7e85d509-7158-47c2-a64b-25b0d8964124\") " Jan 30 13:45:02 crc kubenswrapper[5039]: I0130 13:45:02.625160 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e85d509-7158-47c2-a64b-25b0d8964124-config-volume\") pod \"7e85d509-7158-47c2-a64b-25b0d8964124\" (UID: \"7e85d509-7158-47c2-a64b-25b0d8964124\") " Jan 30 13:45:02 crc kubenswrapper[5039]: I0130 13:45:02.625199 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7fmt\" (UniqueName: \"kubernetes.io/projected/7e85d509-7158-47c2-a64b-25b0d8964124-kube-api-access-t7fmt\") pod \"7e85d509-7158-47c2-a64b-25b0d8964124\" (UID: \"7e85d509-7158-47c2-a64b-25b0d8964124\") " Jan 30 13:45:02 crc kubenswrapper[5039]: I0130 13:45:02.625942 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e85d509-7158-47c2-a64b-25b0d8964124-config-volume" (OuterVolumeSpecName: "config-volume") pod "7e85d509-7158-47c2-a64b-25b0d8964124" (UID: "7e85d509-7158-47c2-a64b-25b0d8964124"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:45:02 crc kubenswrapper[5039]: I0130 13:45:02.631494 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e85d509-7158-47c2-a64b-25b0d8964124-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7e85d509-7158-47c2-a64b-25b0d8964124" (UID: "7e85d509-7158-47c2-a64b-25b0d8964124"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:45:02 crc kubenswrapper[5039]: I0130 13:45:02.632212 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e85d509-7158-47c2-a64b-25b0d8964124-kube-api-access-t7fmt" (OuterVolumeSpecName: "kube-api-access-t7fmt") pod "7e85d509-7158-47c2-a64b-25b0d8964124" (UID: "7e85d509-7158-47c2-a64b-25b0d8964124"). InnerVolumeSpecName "kube-api-access-t7fmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:45:02 crc kubenswrapper[5039]: I0130 13:45:02.726713 5039 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7e85d509-7158-47c2-a64b-25b0d8964124-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 13:45:02 crc kubenswrapper[5039]: I0130 13:45:02.726754 5039 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e85d509-7158-47c2-a64b-25b0d8964124-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 13:45:02 crc kubenswrapper[5039]: I0130 13:45:02.726770 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7fmt\" (UniqueName: \"kubernetes.io/projected/7e85d509-7158-47c2-a64b-25b0d8964124-kube-api-access-t7fmt\") on node \"crc\" DevicePath \"\"" Jan 30 13:45:03 crc kubenswrapper[5039]: I0130 13:45:03.237982 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496345-8ww5h" event={"ID":"7e85d509-7158-47c2-a64b-25b0d8964124","Type":"ContainerDied","Data":"c070717a65593c6e16f2662b81722a1c662381b150e5472c17395646b73cdeca"} Jan 30 13:45:03 crc kubenswrapper[5039]: I0130 13:45:03.238057 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c070717a65593c6e16f2662b81722a1c662381b150e5472c17395646b73cdeca" Jan 30 13:45:03 crc kubenswrapper[5039]: I0130 13:45:03.238142 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496345-8ww5h" Jan 30 13:45:03 crc kubenswrapper[5039]: I0130 13:45:03.555851 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496300-mkldc"] Jan 30 13:45:03 crc kubenswrapper[5039]: I0130 13:45:03.563218 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496300-mkldc"] Jan 30 13:45:04 crc kubenswrapper[5039]: I0130 13:45:04.108717 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c" path="/var/lib/kubelet/pods/4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c/volumes" Jan 30 13:45:06 crc kubenswrapper[5039]: I0130 13:45:06.101926 5039 scope.go:117] "RemoveContainer" containerID="b137761de9c19e6ddc3953e928e1d2b4dfce5d4b3875867a735acd621c6888ee" Jan 30 13:45:06 crc kubenswrapper[5039]: E0130 13:45:06.102797 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:45:08 crc kubenswrapper[5039]: I0130 13:45:08.079884 5039 scope.go:117] "RemoveContainer" containerID="a0372bdd30a9cc27ce96abedcc6e75ce111a96cb789003ceaae72fc7d0a7c6f0" Jan 30 13:45:20 crc kubenswrapper[5039]: I0130 13:45:20.093723 5039 scope.go:117] "RemoveContainer" containerID="b137761de9c19e6ddc3953e928e1d2b4dfce5d4b3875867a735acd621c6888ee" Jan 30 13:45:20 crc kubenswrapper[5039]: E0130 13:45:20.094849 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:45:35 crc kubenswrapper[5039]: I0130 13:45:35.094604 5039 scope.go:117] "RemoveContainer" containerID="b137761de9c19e6ddc3953e928e1d2b4dfce5d4b3875867a735acd621c6888ee" Jan 30 13:45:35 crc kubenswrapper[5039]: E0130 13:45:35.095830 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:45:50 crc kubenswrapper[5039]: I0130 13:45:50.093543 5039 scope.go:117] "RemoveContainer" containerID="b137761de9c19e6ddc3953e928e1d2b4dfce5d4b3875867a735acd621c6888ee" Jan 30 13:45:50 crc kubenswrapper[5039]: E0130 13:45:50.094640 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:46:03 crc kubenswrapper[5039]: I0130 13:46:03.093737 5039 scope.go:117] "RemoveContainer" containerID="b137761de9c19e6ddc3953e928e1d2b4dfce5d4b3875867a735acd621c6888ee" Jan 30 13:46:03 crc kubenswrapper[5039]: E0130 13:46:03.094505 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:46:14 crc kubenswrapper[5039]: I0130 13:46:14.094039 5039 scope.go:117] "RemoveContainer" containerID="b137761de9c19e6ddc3953e928e1d2b4dfce5d4b3875867a735acd621c6888ee" Jan 30 13:46:14 crc kubenswrapper[5039]: E0130 13:46:14.094878 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:46:26 crc kubenswrapper[5039]: I0130 13:46:26.099269 5039 scope.go:117] "RemoveContainer" containerID="b137761de9c19e6ddc3953e928e1d2b4dfce5d4b3875867a735acd621c6888ee" Jan 30 13:46:26 crc kubenswrapper[5039]: E0130 13:46:26.100684 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:46:38 crc kubenswrapper[5039]: I0130 13:46:38.093927 5039 scope.go:117] "RemoveContainer" containerID="b137761de9c19e6ddc3953e928e1d2b4dfce5d4b3875867a735acd621c6888ee" Jan 30 13:46:38 crc kubenswrapper[5039]: E0130 13:46:38.094801 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:46:50 crc kubenswrapper[5039]: I0130 13:46:50.093847 5039 scope.go:117] "RemoveContainer" containerID="b137761de9c19e6ddc3953e928e1d2b4dfce5d4b3875867a735acd621c6888ee" Jan 30 13:46:50 crc kubenswrapper[5039]: E0130 13:46:50.095532 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:47:05 crc kubenswrapper[5039]: I0130 13:47:05.093908 5039 
scope.go:117] "RemoveContainer" containerID="b137761de9c19e6ddc3953e928e1d2b4dfce5d4b3875867a735acd621c6888ee" Jan 30 13:47:05 crc kubenswrapper[5039]: E0130 13:47:05.094715 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:47:20 crc kubenswrapper[5039]: I0130 13:47:20.094401 5039 scope.go:117] "RemoveContainer" containerID="b137761de9c19e6ddc3953e928e1d2b4dfce5d4b3875867a735acd621c6888ee" Jan 30 13:47:20 crc kubenswrapper[5039]: E0130 13:47:20.096613 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:47:35 crc kubenswrapper[5039]: I0130 13:47:35.094041 5039 scope.go:117] "RemoveContainer" containerID="b137761de9c19e6ddc3953e928e1d2b4dfce5d4b3875867a735acd621c6888ee" Jan 30 13:47:35 crc kubenswrapper[5039]: E0130 13:47:35.094997 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:47:49 crc kubenswrapper[5039]: I0130 13:47:49.094554 5039 scope.go:117] "RemoveContainer" containerID="b137761de9c19e6ddc3953e928e1d2b4dfce5d4b3875867a735acd621c6888ee" Jan 30 13:47:49 crc kubenswrapper[5039]: E0130 13:47:49.095595 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:48:02 crc kubenswrapper[5039]: I0130 13:48:02.094627 5039 scope.go:117] "RemoveContainer" containerID="b137761de9c19e6ddc3953e928e1d2b4dfce5d4b3875867a735acd621c6888ee" Jan 30 13:48:02 crc kubenswrapper[5039]: E0130 13:48:02.096290 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:48:02 crc kubenswrapper[5039]: I0130 13:48:02.285184 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-99wzk"] Jan 30 13:48:02 crc kubenswrapper[5039]: E0130 13:48:02.285609 5039 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="7e85d509-7158-47c2-a64b-25b0d8964124" containerName="collect-profiles" Jan 30 13:48:02 crc kubenswrapper[5039]: I0130 13:48:02.285633 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e85d509-7158-47c2-a64b-25b0d8964124" containerName="collect-profiles" Jan 30 13:48:02 crc kubenswrapper[5039]: I0130 13:48:02.285860 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e85d509-7158-47c2-a64b-25b0d8964124" containerName="collect-profiles" Jan 30 13:48:02 crc kubenswrapper[5039]: I0130 13:48:02.287170 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-99wzk" Jan 30 13:48:02 crc kubenswrapper[5039]: I0130 13:48:02.295406 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-99wzk"] Jan 30 13:48:02 crc kubenswrapper[5039]: I0130 13:48:02.456486 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lr67\" (UniqueName: \"kubernetes.io/projected/f4d96125-7059-484f-8688-c72685f10514-kube-api-access-7lr67\") pod \"community-operators-99wzk\" (UID: \"f4d96125-7059-484f-8688-c72685f10514\") " pod="openshift-marketplace/community-operators-99wzk" Jan 30 13:48:02 crc kubenswrapper[5039]: I0130 13:48:02.456599 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4d96125-7059-484f-8688-c72685f10514-catalog-content\") pod \"community-operators-99wzk\" (UID: \"f4d96125-7059-484f-8688-c72685f10514\") " pod="openshift-marketplace/community-operators-99wzk" Jan 30 13:48:02 crc kubenswrapper[5039]: I0130 13:48:02.456627 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4d96125-7059-484f-8688-c72685f10514-utilities\") pod \"community-operators-99wzk\" (UID: \"f4d96125-7059-484f-8688-c72685f10514\") " pod="openshift-marketplace/community-operators-99wzk" Jan 30 13:48:02 crc kubenswrapper[5039]: I0130 13:48:02.557405 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4d96125-7059-484f-8688-c72685f10514-utilities\") pod \"community-operators-99wzk\" (UID: \"f4d96125-7059-484f-8688-c72685f10514\") " pod="openshift-marketplace/community-operators-99wzk" Jan 30 13:48:02 crc kubenswrapper[5039]: I0130 13:48:02.557486 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lr67\" (UniqueName: \"kubernetes.io/projected/f4d96125-7059-484f-8688-c72685f10514-kube-api-access-7lr67\") pod \"community-operators-99wzk\" (UID: \"f4d96125-7059-484f-8688-c72685f10514\") " pod="openshift-marketplace/community-operators-99wzk" Jan 30 13:48:02 crc kubenswrapper[5039]: I0130 13:48:02.557566 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4d96125-7059-484f-8688-c72685f10514-catalog-content\") pod \"community-operators-99wzk\" (UID: \"f4d96125-7059-484f-8688-c72685f10514\") " pod="openshift-marketplace/community-operators-99wzk" Jan 30 13:48:02 crc kubenswrapper[5039]: I0130 13:48:02.558158 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/f4d96125-7059-484f-8688-c72685f10514-utilities\") pod \"community-operators-99wzk\" (UID: \"f4d96125-7059-484f-8688-c72685f10514\") " pod="openshift-marketplace/community-operators-99wzk" Jan 30 13:48:02 crc kubenswrapper[5039]: I0130 13:48:02.558313 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4d96125-7059-484f-8688-c72685f10514-catalog-content\") pod \"community-operators-99wzk\" (UID: \"f4d96125-7059-484f-8688-c72685f10514\") " pod="openshift-marketplace/community-operators-99wzk" Jan 30 13:48:02 crc kubenswrapper[5039]: I0130 13:48:02.577726 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lr67\" (UniqueName: \"kubernetes.io/projected/f4d96125-7059-484f-8688-c72685f10514-kube-api-access-7lr67\") pod \"community-operators-99wzk\" (UID: \"f4d96125-7059-484f-8688-c72685f10514\") " pod="openshift-marketplace/community-operators-99wzk" Jan 30 13:48:02 crc kubenswrapper[5039]: I0130 13:48:02.613585 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-99wzk" Jan 30 13:48:03 crc kubenswrapper[5039]: I0130 13:48:03.137059 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-99wzk"] Jan 30 13:48:03 crc kubenswrapper[5039]: I0130 13:48:03.684979 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-99wzk" event={"ID":"f4d96125-7059-484f-8688-c72685f10514","Type":"ContainerStarted","Data":"3501172c07917afad5c89a67ec9ca446533f9a18dc594a45fc84f6b8f403f31b"} Jan 30 13:48:03 crc kubenswrapper[5039]: I0130 13:48:03.686106 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-99wzk" event={"ID":"f4d96125-7059-484f-8688-c72685f10514","Type":"ContainerStarted","Data":"26f34934b6e02293b35b501dda500c7a0bbc5788f980c11464b6bb9bf69e7944"} Jan 30 13:48:04 crc kubenswrapper[5039]: I0130 13:48:04.698106 5039 generic.go:334] "Generic (PLEG): container finished" podID="f4d96125-7059-484f-8688-c72685f10514" containerID="3501172c07917afad5c89a67ec9ca446533f9a18dc594a45fc84f6b8f403f31b" exitCode=0 Jan 30 13:48:04 crc kubenswrapper[5039]: I0130 13:48:04.698217 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-99wzk" event={"ID":"f4d96125-7059-484f-8688-c72685f10514","Type":"ContainerDied","Data":"3501172c07917afad5c89a67ec9ca446533f9a18dc594a45fc84f6b8f403f31b"} Jan 30 13:48:04 crc kubenswrapper[5039]: I0130 13:48:04.702185 5039 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 13:48:07 crc kubenswrapper[5039]: I0130 13:48:07.722111 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-99wzk" event={"ID":"f4d96125-7059-484f-8688-c72685f10514","Type":"ContainerStarted","Data":"2bde80fd0d0e68147dfb6af0ba9d5e7f28704076c32fedb0e20246f525c962da"} Jan 30 13:48:08 crc kubenswrapper[5039]: I0130 13:48:08.732281 5039 generic.go:334] "Generic (PLEG): container finished" podID="f4d96125-7059-484f-8688-c72685f10514" containerID="2bde80fd0d0e68147dfb6af0ba9d5e7f28704076c32fedb0e20246f525c962da" exitCode=0 Jan 30 13:48:08 crc kubenswrapper[5039]: I0130 13:48:08.732444 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-99wzk" 
event={"ID":"f4d96125-7059-484f-8688-c72685f10514","Type":"ContainerDied","Data":"2bde80fd0d0e68147dfb6af0ba9d5e7f28704076c32fedb0e20246f525c962da"} Jan 30 13:48:09 crc kubenswrapper[5039]: I0130 13:48:09.743039 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-99wzk" event={"ID":"f4d96125-7059-484f-8688-c72685f10514","Type":"ContainerStarted","Data":"e42c32ef7bffadf36335040c3ce9f8b61d59d945848b9a4a20a6213be2a52e91"} Jan 30 13:48:12 crc kubenswrapper[5039]: I0130 13:48:12.614104 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-99wzk" Jan 30 13:48:12 crc kubenswrapper[5039]: I0130 13:48:12.614428 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-99wzk" Jan 30 13:48:12 crc kubenswrapper[5039]: I0130 13:48:12.659366 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-99wzk" Jan 30 13:48:12 crc kubenswrapper[5039]: I0130 13:48:12.684413 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-99wzk" podStartSLOduration=5.898851481 podStartE2EDuration="10.684397239s" podCreationTimestamp="2026-01-30 13:48:02 +0000 UTC" firstStartedPulling="2026-01-30 13:48:04.701651252 +0000 UTC m=+2649.362332479" lastFinishedPulling="2026-01-30 13:48:09.48719701 +0000 UTC m=+2654.147878237" observedRunningTime="2026-01-30 13:48:09.771350175 +0000 UTC m=+2654.432031412" watchObservedRunningTime="2026-01-30 13:48:12.684397239 +0000 UTC m=+2657.345078466" Jan 30 13:48:16 crc kubenswrapper[5039]: I0130 13:48:16.100067 5039 scope.go:117] "RemoveContainer" containerID="b137761de9c19e6ddc3953e928e1d2b4dfce5d4b3875867a735acd621c6888ee" Jan 30 13:48:16 crc kubenswrapper[5039]: E0130 13:48:16.100771 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:48:22 crc kubenswrapper[5039]: I0130 13:48:22.655824 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-99wzk" Jan 30 13:48:22 crc kubenswrapper[5039]: I0130 13:48:22.707452 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-99wzk"] Jan 30 13:48:22 crc kubenswrapper[5039]: I0130 13:48:22.833416 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-99wzk" podUID="f4d96125-7059-484f-8688-c72685f10514" containerName="registry-server" containerID="cri-o://e42c32ef7bffadf36335040c3ce9f8b61d59d945848b9a4a20a6213be2a52e91" gracePeriod=2 Jan 30 13:48:23 crc kubenswrapper[5039]: I0130 13:48:23.841534 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-99wzk" Jan 30 13:48:23 crc kubenswrapper[5039]: I0130 13:48:23.842793 5039 generic.go:334] "Generic (PLEG): container finished" podID="f4d96125-7059-484f-8688-c72685f10514" containerID="e42c32ef7bffadf36335040c3ce9f8b61d59d945848b9a4a20a6213be2a52e91" exitCode=0 Jan 30 13:48:23 crc kubenswrapper[5039]: I0130 13:48:23.842838 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-99wzk" event={"ID":"f4d96125-7059-484f-8688-c72685f10514","Type":"ContainerDied","Data":"e42c32ef7bffadf36335040c3ce9f8b61d59d945848b9a4a20a6213be2a52e91"} Jan 30 13:48:23 crc kubenswrapper[5039]: I0130 13:48:23.842873 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-99wzk" event={"ID":"f4d96125-7059-484f-8688-c72685f10514","Type":"ContainerDied","Data":"26f34934b6e02293b35b501dda500c7a0bbc5788f980c11464b6bb9bf69e7944"} Jan 30 13:48:23 crc kubenswrapper[5039]: I0130 13:48:23.842893 5039 scope.go:117] "RemoveContainer" containerID="e42c32ef7bffadf36335040c3ce9f8b61d59d945848b9a4a20a6213be2a52e91" Jan 30 13:48:23 crc kubenswrapper[5039]: I0130 13:48:23.873331 5039 scope.go:117] "RemoveContainer" containerID="2bde80fd0d0e68147dfb6af0ba9d5e7f28704076c32fedb0e20246f525c962da" Jan 30 13:48:23 crc kubenswrapper[5039]: I0130 13:48:23.912410 5039 scope.go:117] "RemoveContainer" containerID="3501172c07917afad5c89a67ec9ca446533f9a18dc594a45fc84f6b8f403f31b" Jan 30 13:48:23 crc kubenswrapper[5039]: I0130 13:48:23.940424 5039 scope.go:117] "RemoveContainer" containerID="e42c32ef7bffadf36335040c3ce9f8b61d59d945848b9a4a20a6213be2a52e91" Jan 30 13:48:23 crc kubenswrapper[5039]: E0130 13:48:23.940893 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e42c32ef7bffadf36335040c3ce9f8b61d59d945848b9a4a20a6213be2a52e91\": container with ID starting with e42c32ef7bffadf36335040c3ce9f8b61d59d945848b9a4a20a6213be2a52e91 not found: ID does not exist" containerID="e42c32ef7bffadf36335040c3ce9f8b61d59d945848b9a4a20a6213be2a52e91" Jan 30 13:48:23 crc kubenswrapper[5039]: I0130 13:48:23.940936 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e42c32ef7bffadf36335040c3ce9f8b61d59d945848b9a4a20a6213be2a52e91"} err="failed to get container status \"e42c32ef7bffadf36335040c3ce9f8b61d59d945848b9a4a20a6213be2a52e91\": rpc error: code = NotFound desc = could not find container \"e42c32ef7bffadf36335040c3ce9f8b61d59d945848b9a4a20a6213be2a52e91\": container with ID starting with e42c32ef7bffadf36335040c3ce9f8b61d59d945848b9a4a20a6213be2a52e91 not found: ID does not exist" Jan 30 13:48:23 crc kubenswrapper[5039]: I0130 13:48:23.940971 5039 scope.go:117] "RemoveContainer" containerID="2bde80fd0d0e68147dfb6af0ba9d5e7f28704076c32fedb0e20246f525c962da" Jan 30 13:48:23 crc kubenswrapper[5039]: E0130 13:48:23.941369 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bde80fd0d0e68147dfb6af0ba9d5e7f28704076c32fedb0e20246f525c962da\": container with ID starting with 2bde80fd0d0e68147dfb6af0ba9d5e7f28704076c32fedb0e20246f525c962da not found: ID does not exist" containerID="2bde80fd0d0e68147dfb6af0ba9d5e7f28704076c32fedb0e20246f525c962da" Jan 30 13:48:23 crc kubenswrapper[5039]: I0130 13:48:23.941400 5039 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2bde80fd0d0e68147dfb6af0ba9d5e7f28704076c32fedb0e20246f525c962da"} err="failed to get container status \"2bde80fd0d0e68147dfb6af0ba9d5e7f28704076c32fedb0e20246f525c962da\": rpc error: code = NotFound desc = could not find container \"2bde80fd0d0e68147dfb6af0ba9d5e7f28704076c32fedb0e20246f525c962da\": container with ID starting with 2bde80fd0d0e68147dfb6af0ba9d5e7f28704076c32fedb0e20246f525c962da not found: ID does not exist" Jan 30 13:48:23 crc kubenswrapper[5039]: I0130 13:48:23.941421 5039 scope.go:117] "RemoveContainer" containerID="3501172c07917afad5c89a67ec9ca446533f9a18dc594a45fc84f6b8f403f31b" Jan 30 13:48:23 crc kubenswrapper[5039]: E0130 13:48:23.941656 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3501172c07917afad5c89a67ec9ca446533f9a18dc594a45fc84f6b8f403f31b\": container with ID starting with 3501172c07917afad5c89a67ec9ca446533f9a18dc594a45fc84f6b8f403f31b not found: ID does not exist" containerID="3501172c07917afad5c89a67ec9ca446533f9a18dc594a45fc84f6b8f403f31b" Jan 30 13:48:23 crc kubenswrapper[5039]: I0130 13:48:23.941685 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3501172c07917afad5c89a67ec9ca446533f9a18dc594a45fc84f6b8f403f31b"} err="failed to get container status \"3501172c07917afad5c89a67ec9ca446533f9a18dc594a45fc84f6b8f403f31b\": rpc error: code = NotFound desc = could not find container \"3501172c07917afad5c89a67ec9ca446533f9a18dc594a45fc84f6b8f403f31b\": container with ID starting with 3501172c07917afad5c89a67ec9ca446533f9a18dc594a45fc84f6b8f403f31b not found: ID does not exist" Jan 30 13:48:23 crc kubenswrapper[5039]: I0130 13:48:23.981822 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lr67\" (UniqueName: \"kubernetes.io/projected/f4d96125-7059-484f-8688-c72685f10514-kube-api-access-7lr67\") pod \"f4d96125-7059-484f-8688-c72685f10514\" (UID: \"f4d96125-7059-484f-8688-c72685f10514\") " Jan 30 13:48:23 crc kubenswrapper[5039]: I0130 13:48:23.982006 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4d96125-7059-484f-8688-c72685f10514-catalog-content\") pod \"f4d96125-7059-484f-8688-c72685f10514\" (UID: \"f4d96125-7059-484f-8688-c72685f10514\") " Jan 30 13:48:23 crc kubenswrapper[5039]: I0130 13:48:23.982047 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4d96125-7059-484f-8688-c72685f10514-utilities\") pod \"f4d96125-7059-484f-8688-c72685f10514\" (UID: \"f4d96125-7059-484f-8688-c72685f10514\") " Jan 30 13:48:23 crc kubenswrapper[5039]: I0130 13:48:23.983080 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4d96125-7059-484f-8688-c72685f10514-utilities" (OuterVolumeSpecName: "utilities") pod "f4d96125-7059-484f-8688-c72685f10514" (UID: "f4d96125-7059-484f-8688-c72685f10514"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:48:23 crc kubenswrapper[5039]: I0130 13:48:23.988392 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4d96125-7059-484f-8688-c72685f10514-kube-api-access-7lr67" (OuterVolumeSpecName: "kube-api-access-7lr67") pod "f4d96125-7059-484f-8688-c72685f10514" (UID: "f4d96125-7059-484f-8688-c72685f10514"). InnerVolumeSpecName "kube-api-access-7lr67". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:48:24 crc kubenswrapper[5039]: I0130 13:48:24.037539 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4d96125-7059-484f-8688-c72685f10514-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f4d96125-7059-484f-8688-c72685f10514" (UID: "f4d96125-7059-484f-8688-c72685f10514"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:48:24 crc kubenswrapper[5039]: I0130 13:48:24.083858 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4d96125-7059-484f-8688-c72685f10514-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:48:24 crc kubenswrapper[5039]: I0130 13:48:24.083893 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4d96125-7059-484f-8688-c72685f10514-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 13:48:24 crc kubenswrapper[5039]: I0130 13:48:24.083902 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lr67\" (UniqueName: \"kubernetes.io/projected/f4d96125-7059-484f-8688-c72685f10514-kube-api-access-7lr67\") on node \"crc\" DevicePath \"\"" Jan 30 13:48:24 crc kubenswrapper[5039]: I0130 13:48:24.851072 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-99wzk" Jan 30 13:48:24 crc kubenswrapper[5039]: I0130 13:48:24.871756 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-99wzk"] Jan 30 13:48:24 crc kubenswrapper[5039]: I0130 13:48:24.875681 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-99wzk"] Jan 30 13:48:26 crc kubenswrapper[5039]: I0130 13:48:26.103383 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4d96125-7059-484f-8688-c72685f10514" path="/var/lib/kubelet/pods/f4d96125-7059-484f-8688-c72685f10514/volumes" Jan 30 13:48:27 crc kubenswrapper[5039]: I0130 13:48:27.093667 5039 scope.go:117] "RemoveContainer" containerID="b137761de9c19e6ddc3953e928e1d2b4dfce5d4b3875867a735acd621c6888ee" Jan 30 13:48:27 crc kubenswrapper[5039]: E0130 13:48:27.093901 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:48:39 crc kubenswrapper[5039]: I0130 13:48:39.093292 5039 scope.go:117] "RemoveContainer" containerID="b137761de9c19e6ddc3953e928e1d2b4dfce5d4b3875867a735acd621c6888ee" Jan 30 13:48:39 crc kubenswrapper[5039]: I0130 13:48:39.978698 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerStarted","Data":"39c49ad717a10d99f5a08af64e2027e2654c0b243e7de4e94639167a9b9df807"} Jan 30 13:50:06 crc kubenswrapper[5039]: I0130 13:50:06.531697 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hqw5k"] Jan 30 13:50:06 crc kubenswrapper[5039]: E0130 13:50:06.532693 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4d96125-7059-484f-8688-c72685f10514" containerName="registry-server" Jan 30 13:50:06 crc kubenswrapper[5039]: I0130 13:50:06.532709 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4d96125-7059-484f-8688-c72685f10514" containerName="registry-server" Jan 30 13:50:06 crc kubenswrapper[5039]: E0130 13:50:06.532724 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4d96125-7059-484f-8688-c72685f10514" containerName="extract-utilities" Jan 30 13:50:06 crc kubenswrapper[5039]: I0130 13:50:06.532731 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4d96125-7059-484f-8688-c72685f10514" containerName="extract-utilities" Jan 30 13:50:06 crc kubenswrapper[5039]: E0130 13:50:06.532780 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4d96125-7059-484f-8688-c72685f10514" containerName="extract-content" Jan 30 13:50:06 crc kubenswrapper[5039]: I0130 13:50:06.532788 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4d96125-7059-484f-8688-c72685f10514" containerName="extract-content" Jan 30 13:50:06 crc kubenswrapper[5039]: I0130 13:50:06.532939 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4d96125-7059-484f-8688-c72685f10514" containerName="registry-server" Jan 30 13:50:06 crc kubenswrapper[5039]: I0130 13:50:06.534079 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hqw5k" Jan 30 13:50:06 crc kubenswrapper[5039]: I0130 13:50:06.541967 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hqw5k"] Jan 30 13:50:06 crc kubenswrapper[5039]: I0130 13:50:06.627762 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe147c05-03c5-4950-8478-ec6ca26a250b-catalog-content\") pod \"redhat-operators-hqw5k\" (UID: \"fe147c05-03c5-4950-8478-ec6ca26a250b\") " pod="openshift-marketplace/redhat-operators-hqw5k" Jan 30 13:50:06 crc kubenswrapper[5039]: I0130 13:50:06.627859 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe147c05-03c5-4950-8478-ec6ca26a250b-utilities\") pod \"redhat-operators-hqw5k\" (UID: \"fe147c05-03c5-4950-8478-ec6ca26a250b\") " pod="openshift-marketplace/redhat-operators-hqw5k" Jan 30 13:50:06 crc kubenswrapper[5039]: I0130 13:50:06.627915 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p65jv\" (UniqueName: \"kubernetes.io/projected/fe147c05-03c5-4950-8478-ec6ca26a250b-kube-api-access-p65jv\") pod \"redhat-operators-hqw5k\" (UID: \"fe147c05-03c5-4950-8478-ec6ca26a250b\") " pod="openshift-marketplace/redhat-operators-hqw5k" Jan 30 13:50:06 crc kubenswrapper[5039]: I0130 13:50:06.729530 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe147c05-03c5-4950-8478-ec6ca26a250b-utilities\") pod \"redhat-operators-hqw5k\" (UID: \"fe147c05-03c5-4950-8478-ec6ca26a250b\") " pod="openshift-marketplace/redhat-operators-hqw5k" Jan 30 13:50:06 crc kubenswrapper[5039]: I0130 13:50:06.729608 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p65jv\" (UniqueName: \"kubernetes.io/projected/fe147c05-03c5-4950-8478-ec6ca26a250b-kube-api-access-p65jv\") pod \"redhat-operators-hqw5k\" (UID: \"fe147c05-03c5-4950-8478-ec6ca26a250b\") " pod="openshift-marketplace/redhat-operators-hqw5k" Jan 30 13:50:06 crc kubenswrapper[5039]: I0130 13:50:06.729663 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe147c05-03c5-4950-8478-ec6ca26a250b-catalog-content\") pod \"redhat-operators-hqw5k\" (UID: \"fe147c05-03c5-4950-8478-ec6ca26a250b\") " pod="openshift-marketplace/redhat-operators-hqw5k" Jan 30 13:50:06 crc kubenswrapper[5039]: I0130 13:50:06.730196 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe147c05-03c5-4950-8478-ec6ca26a250b-utilities\") pod \"redhat-operators-hqw5k\" (UID: \"fe147c05-03c5-4950-8478-ec6ca26a250b\") " pod="openshift-marketplace/redhat-operators-hqw5k" Jan 30 13:50:06 crc kubenswrapper[5039]: I0130 13:50:06.730223 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe147c05-03c5-4950-8478-ec6ca26a250b-catalog-content\") pod \"redhat-operators-hqw5k\" (UID: \"fe147c05-03c5-4950-8478-ec6ca26a250b\") " pod="openshift-marketplace/redhat-operators-hqw5k" Jan 30 13:50:06 crc kubenswrapper[5039]: I0130 13:50:06.756739 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-p65jv\" (UniqueName: \"kubernetes.io/projected/fe147c05-03c5-4950-8478-ec6ca26a250b-kube-api-access-p65jv\") pod \"redhat-operators-hqw5k\" (UID: \"fe147c05-03c5-4950-8478-ec6ca26a250b\") " pod="openshift-marketplace/redhat-operators-hqw5k" Jan 30 13:50:06 crc kubenswrapper[5039]: I0130 13:50:06.884385 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hqw5k" Jan 30 13:50:07 crc kubenswrapper[5039]: I0130 13:50:07.329688 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hqw5k"] Jan 30 13:50:07 crc kubenswrapper[5039]: I0130 13:50:07.611622 5039 generic.go:334] "Generic (PLEG): container finished" podID="fe147c05-03c5-4950-8478-ec6ca26a250b" containerID="b2675fa14528a83588ee34e3b1c71ab306b4864012583be9d5c015e855423643" exitCode=0 Jan 30 13:50:07 crc kubenswrapper[5039]: I0130 13:50:07.611853 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hqw5k" event={"ID":"fe147c05-03c5-4950-8478-ec6ca26a250b","Type":"ContainerDied","Data":"b2675fa14528a83588ee34e3b1c71ab306b4864012583be9d5c015e855423643"} Jan 30 13:50:07 crc kubenswrapper[5039]: I0130 13:50:07.611923 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hqw5k" event={"ID":"fe147c05-03c5-4950-8478-ec6ca26a250b","Type":"ContainerStarted","Data":"8eec54e2e0a26738fec794e4cdf50649961ac4dd42acd5e82de49182a876d701"} Jan 30 13:50:09 crc kubenswrapper[5039]: I0130 13:50:09.630199 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hqw5k" event={"ID":"fe147c05-03c5-4950-8478-ec6ca26a250b","Type":"ContainerStarted","Data":"48c72167468f8efffe9e3869f80e7d78fc4ec106d9a968e2f6fa5255808481ed"} Jan 30 13:50:10 crc kubenswrapper[5039]: I0130 13:50:10.642311 5039 generic.go:334] "Generic (PLEG): container finished" podID="fe147c05-03c5-4950-8478-ec6ca26a250b" containerID="48c72167468f8efffe9e3869f80e7d78fc4ec106d9a968e2f6fa5255808481ed" exitCode=0 Jan 30 13:50:10 crc kubenswrapper[5039]: I0130 13:50:10.642367 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hqw5k" event={"ID":"fe147c05-03c5-4950-8478-ec6ca26a250b","Type":"ContainerDied","Data":"48c72167468f8efffe9e3869f80e7d78fc4ec106d9a968e2f6fa5255808481ed"} Jan 30 13:50:13 crc kubenswrapper[5039]: I0130 13:50:13.675340 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hqw5k" event={"ID":"fe147c05-03c5-4950-8478-ec6ca26a250b","Type":"ContainerStarted","Data":"b7f31c1c39e505d87520c08d515e88ad05691dabbd70687d0c7b1017d53d9d80"} Jan 30 13:50:13 crc kubenswrapper[5039]: I0130 13:50:13.704964 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hqw5k" podStartSLOduration=2.666088562 podStartE2EDuration="7.704935702s" podCreationTimestamp="2026-01-30 13:50:06 +0000 UTC" firstStartedPulling="2026-01-30 13:50:07.613033123 +0000 UTC m=+2772.273714350" lastFinishedPulling="2026-01-30 13:50:12.651880253 +0000 UTC m=+2777.312561490" observedRunningTime="2026-01-30 13:50:13.697767519 +0000 UTC m=+2778.358448756" watchObservedRunningTime="2026-01-30 13:50:13.704935702 +0000 UTC m=+2778.365616929" Jan 30 13:50:16 crc kubenswrapper[5039]: I0130 13:50:16.885399 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hqw5k" Jan 30 
13:50:16 crc kubenswrapper[5039]: I0130 13:50:16.885827 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hqw5k"
Jan 30 13:50:17 crc kubenswrapper[5039]: I0130 13:50:17.930718 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hqw5k" podUID="fe147c05-03c5-4950-8478-ec6ca26a250b" containerName="registry-server" probeResult="failure" output=<
Jan 30 13:50:17 crc kubenswrapper[5039]: timeout: failed to connect service ":50051" within 1s
Jan 30 13:50:17 crc kubenswrapper[5039]: >
Jan 30 13:50:26 crc kubenswrapper[5039]: I0130 13:50:26.947083 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hqw5k"
Jan 30 13:50:26 crc kubenswrapper[5039]: I0130 13:50:26.997526 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hqw5k"
Jan 30 13:50:27 crc kubenswrapper[5039]: I0130 13:50:27.181904 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hqw5k"]
Jan 30 13:50:28 crc kubenswrapper[5039]: I0130 13:50:28.776916 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hqw5k" podUID="fe147c05-03c5-4950-8478-ec6ca26a250b" containerName="registry-server" containerID="cri-o://b7f31c1c39e505d87520c08d515e88ad05691dabbd70687d0c7b1017d53d9d80" gracePeriod=2
Jan 30 13:50:29 crc kubenswrapper[5039]: I0130 13:50:29.692002 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hqw5k"
Jan 30 13:50:29 crc kubenswrapper[5039]: I0130 13:50:29.787850 5039 generic.go:334] "Generic (PLEG): container finished" podID="fe147c05-03c5-4950-8478-ec6ca26a250b" containerID="b7f31c1c39e505d87520c08d515e88ad05691dabbd70687d0c7b1017d53d9d80" exitCode=0
Jan 30 13:50:29 crc kubenswrapper[5039]: I0130 13:50:29.787912 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hqw5k" event={"ID":"fe147c05-03c5-4950-8478-ec6ca26a250b","Type":"ContainerDied","Data":"b7f31c1c39e505d87520c08d515e88ad05691dabbd70687d0c7b1017d53d9d80"}
Jan 30 13:50:29 crc kubenswrapper[5039]: I0130 13:50:29.787944 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hqw5k" event={"ID":"fe147c05-03c5-4950-8478-ec6ca26a250b","Type":"ContainerDied","Data":"8eec54e2e0a26738fec794e4cdf50649961ac4dd42acd5e82de49182a876d701"}
Jan 30 13:50:29 crc kubenswrapper[5039]: I0130 13:50:29.787979 5039 scope.go:117] "RemoveContainer" containerID="b7f31c1c39e505d87520c08d515e88ad05691dabbd70687d0c7b1017d53d9d80"
Jan 30 13:50:29 crc kubenswrapper[5039]: I0130 13:50:29.787973 5039 util.go:48] "No ready sandbox for pod can be found. 
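registry-server containers are probed on their gRPC endpoint, and the kubelet brackets multi-line probe output between "<" and ">", which is why the timeout message lands on its own journald lines above. The quoted output matches what a grpc_health_probe-style exec check prints when it cannot reach :50051 within its 1s timeout. A sketch of a startup probe consistent with that output; only the port and the 1s timeout are visible in the log, the command and thresholds are assumptions:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        startup := corev1.Probe{
            ProbeHandler: corev1.ProbeHandler{
                Exec: &corev1.ExecAction{
                    Command: []string{"grpc_health_probe", "-addr=:50051"}, // assumption
                },
            },
            TimeoutSeconds:   1,  // matches "within 1s"
            PeriodSeconds:    10, // assumption; retries here are ~10s apart
            FailureThreshold: 3,  // assumption
        }
        fmt.Printf("%+v\n", startup)
    }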
Need to start a new one" pod="openshift-marketplace/redhat-operators-hqw5k" Jan 30 13:50:29 crc kubenswrapper[5039]: I0130 13:50:29.804272 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe147c05-03c5-4950-8478-ec6ca26a250b-catalog-content\") pod \"fe147c05-03c5-4950-8478-ec6ca26a250b\" (UID: \"fe147c05-03c5-4950-8478-ec6ca26a250b\") " Jan 30 13:50:29 crc kubenswrapper[5039]: I0130 13:50:29.805224 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p65jv\" (UniqueName: \"kubernetes.io/projected/fe147c05-03c5-4950-8478-ec6ca26a250b-kube-api-access-p65jv\") pod \"fe147c05-03c5-4950-8478-ec6ca26a250b\" (UID: \"fe147c05-03c5-4950-8478-ec6ca26a250b\") " Jan 30 13:50:29 crc kubenswrapper[5039]: I0130 13:50:29.805368 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe147c05-03c5-4950-8478-ec6ca26a250b-utilities\") pod \"fe147c05-03c5-4950-8478-ec6ca26a250b\" (UID: \"fe147c05-03c5-4950-8478-ec6ca26a250b\") " Jan 30 13:50:29 crc kubenswrapper[5039]: I0130 13:50:29.806447 5039 scope.go:117] "RemoveContainer" containerID="48c72167468f8efffe9e3869f80e7d78fc4ec106d9a968e2f6fa5255808481ed" Jan 30 13:50:29 crc kubenswrapper[5039]: I0130 13:50:29.806506 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe147c05-03c5-4950-8478-ec6ca26a250b-utilities" (OuterVolumeSpecName: "utilities") pod "fe147c05-03c5-4950-8478-ec6ca26a250b" (UID: "fe147c05-03c5-4950-8478-ec6ca26a250b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:50:29 crc kubenswrapper[5039]: I0130 13:50:29.810351 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe147c05-03c5-4950-8478-ec6ca26a250b-kube-api-access-p65jv" (OuterVolumeSpecName: "kube-api-access-p65jv") pod "fe147c05-03c5-4950-8478-ec6ca26a250b" (UID: "fe147c05-03c5-4950-8478-ec6ca26a250b"). InnerVolumeSpecName "kube-api-access-p65jv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:50:29 crc kubenswrapper[5039]: I0130 13:50:29.855834 5039 scope.go:117] "RemoveContainer" containerID="b2675fa14528a83588ee34e3b1c71ab306b4864012583be9d5c015e855423643" Jan 30 13:50:29 crc kubenswrapper[5039]: I0130 13:50:29.888286 5039 scope.go:117] "RemoveContainer" containerID="b7f31c1c39e505d87520c08d515e88ad05691dabbd70687d0c7b1017d53d9d80" Jan 30 13:50:29 crc kubenswrapper[5039]: E0130 13:50:29.889144 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7f31c1c39e505d87520c08d515e88ad05691dabbd70687d0c7b1017d53d9d80\": container with ID starting with b7f31c1c39e505d87520c08d515e88ad05691dabbd70687d0c7b1017d53d9d80 not found: ID does not exist" containerID="b7f31c1c39e505d87520c08d515e88ad05691dabbd70687d0c7b1017d53d9d80" Jan 30 13:50:29 crc kubenswrapper[5039]: I0130 13:50:29.889205 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7f31c1c39e505d87520c08d515e88ad05691dabbd70687d0c7b1017d53d9d80"} err="failed to get container status \"b7f31c1c39e505d87520c08d515e88ad05691dabbd70687d0c7b1017d53d9d80\": rpc error: code = NotFound desc = could not find container \"b7f31c1c39e505d87520c08d515e88ad05691dabbd70687d0c7b1017d53d9d80\": container with ID starting with b7f31c1c39e505d87520c08d515e88ad05691dabbd70687d0c7b1017d53d9d80 not found: ID does not exist" Jan 30 13:50:29 crc kubenswrapper[5039]: I0130 13:50:29.889240 5039 scope.go:117] "RemoveContainer" containerID="48c72167468f8efffe9e3869f80e7d78fc4ec106d9a968e2f6fa5255808481ed" Jan 30 13:50:29 crc kubenswrapper[5039]: E0130 13:50:29.889967 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48c72167468f8efffe9e3869f80e7d78fc4ec106d9a968e2f6fa5255808481ed\": container with ID starting with 48c72167468f8efffe9e3869f80e7d78fc4ec106d9a968e2f6fa5255808481ed not found: ID does not exist" containerID="48c72167468f8efffe9e3869f80e7d78fc4ec106d9a968e2f6fa5255808481ed" Jan 30 13:50:29 crc kubenswrapper[5039]: I0130 13:50:29.889998 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48c72167468f8efffe9e3869f80e7d78fc4ec106d9a968e2f6fa5255808481ed"} err="failed to get container status \"48c72167468f8efffe9e3869f80e7d78fc4ec106d9a968e2f6fa5255808481ed\": rpc error: code = NotFound desc = could not find container \"48c72167468f8efffe9e3869f80e7d78fc4ec106d9a968e2f6fa5255808481ed\": container with ID starting with 48c72167468f8efffe9e3869f80e7d78fc4ec106d9a968e2f6fa5255808481ed not found: ID does not exist" Jan 30 13:50:29 crc kubenswrapper[5039]: I0130 13:50:29.890026 5039 scope.go:117] "RemoveContainer" containerID="b2675fa14528a83588ee34e3b1c71ab306b4864012583be9d5c015e855423643" Jan 30 13:50:29 crc kubenswrapper[5039]: E0130 13:50:29.890338 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2675fa14528a83588ee34e3b1c71ab306b4864012583be9d5c015e855423643\": container with ID starting with b2675fa14528a83588ee34e3b1c71ab306b4864012583be9d5c015e855423643 not found: ID does not exist" containerID="b2675fa14528a83588ee34e3b1c71ab306b4864012583be9d5c015e855423643" Jan 30 13:50:29 crc kubenswrapper[5039]: I0130 13:50:29.890360 5039 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b2675fa14528a83588ee34e3b1c71ab306b4864012583be9d5c015e855423643"} err="failed to get container status \"b2675fa14528a83588ee34e3b1c71ab306b4864012583be9d5c015e855423643\": rpc error: code = NotFound desc = could not find container \"b2675fa14528a83588ee34e3b1c71ab306b4864012583be9d5c015e855423643\": container with ID starting with b2675fa14528a83588ee34e3b1c71ab306b4864012583be9d5c015e855423643 not found: ID does not exist" Jan 30 13:50:29 crc kubenswrapper[5039]: I0130 13:50:29.907966 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p65jv\" (UniqueName: \"kubernetes.io/projected/fe147c05-03c5-4950-8478-ec6ca26a250b-kube-api-access-p65jv\") on node \"crc\" DevicePath \"\"" Jan 30 13:50:29 crc kubenswrapper[5039]: I0130 13:50:29.908043 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe147c05-03c5-4950-8478-ec6ca26a250b-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 13:50:29 crc kubenswrapper[5039]: I0130 13:50:29.946153 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe147c05-03c5-4950-8478-ec6ca26a250b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fe147c05-03c5-4950-8478-ec6ca26a250b" (UID: "fe147c05-03c5-4950-8478-ec6ca26a250b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:50:30 crc kubenswrapper[5039]: I0130 13:50:30.010036 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe147c05-03c5-4950-8478-ec6ca26a250b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:50:30 crc kubenswrapper[5039]: I0130 13:50:30.130218 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hqw5k"] Jan 30 13:50:30 crc kubenswrapper[5039]: I0130 13:50:30.140115 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hqw5k"] Jan 30 13:50:32 crc kubenswrapper[5039]: I0130 13:50:32.102943 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe147c05-03c5-4950-8478-ec6ca26a250b" path="/var/lib/kubelet/pods/fe147c05-03c5-4950-8478-ec6ca26a250b/volumes" Jan 30 13:50:40 crc kubenswrapper[5039]: I0130 13:50:40.092740 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-np244" podUID="9fc67884-3169-4fc2-98e9-1a3a274f9f02" containerName="registry-server" probeResult="failure" output=< Jan 30 13:50:40 crc kubenswrapper[5039]: timeout: failed to connect service ":50051" within 1s Jan 30 13:50:40 crc kubenswrapper[5039]: > Jan 30 13:50:40 crc kubenswrapper[5039]: I0130 13:50:40.917702 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-np244" podUID="9fc67884-3169-4fc2-98e9-1a3a274f9f02" containerName="registry-server" probeResult="failure" output=< Jan 30 13:50:40 crc kubenswrapper[5039]: timeout: failed to connect service ":50051" within 1s Jan 30 13:50:40 crc kubenswrapper[5039]: > Jan 30 13:51:07 crc kubenswrapper[5039]: I0130 13:51:07.741838 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:51:07 crc kubenswrapper[5039]: 
I0130 13:51:07.742352 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:51:37 crc kubenswrapper[5039]: I0130 13:51:37.742199 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:51:37 crc kubenswrapper[5039]: I0130 13:51:37.742971 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:51:56 crc kubenswrapper[5039]: I0130 13:51:56.460167 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mzqf7"] Jan 30 13:51:56 crc kubenswrapper[5039]: E0130 13:51:56.460934 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe147c05-03c5-4950-8478-ec6ca26a250b" containerName="extract-content" Jan 30 13:51:56 crc kubenswrapper[5039]: I0130 13:51:56.460946 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe147c05-03c5-4950-8478-ec6ca26a250b" containerName="extract-content" Jan 30 13:51:56 crc kubenswrapper[5039]: E0130 13:51:56.460977 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe147c05-03c5-4950-8478-ec6ca26a250b" containerName="registry-server" Jan 30 13:51:56 crc kubenswrapper[5039]: I0130 13:51:56.460983 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe147c05-03c5-4950-8478-ec6ca26a250b" containerName="registry-server" Jan 30 13:51:56 crc kubenswrapper[5039]: E0130 13:51:56.460992 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe147c05-03c5-4950-8478-ec6ca26a250b" containerName="extract-utilities" Jan 30 13:51:56 crc kubenswrapper[5039]: I0130 13:51:56.460998 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe147c05-03c5-4950-8478-ec6ca26a250b" containerName="extract-utilities" Jan 30 13:51:56 crc kubenswrapper[5039]: I0130 13:51:56.461155 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe147c05-03c5-4950-8478-ec6ca26a250b" containerName="registry-server" Jan 30 13:51:56 crc kubenswrapper[5039]: I0130 13:51:56.463930 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mzqf7" Jan 30 13:51:56 crc kubenswrapper[5039]: I0130 13:51:56.474038 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mzqf7"] Jan 30 13:51:56 crc kubenswrapper[5039]: I0130 13:51:56.592256 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a14e8e98-f665-4850-806b-a5ad361662cf-catalog-content\") pod \"certified-operators-mzqf7\" (UID: \"a14e8e98-f665-4850-806b-a5ad361662cf\") " pod="openshift-marketplace/certified-operators-mzqf7" Jan 30 13:51:56 crc kubenswrapper[5039]: I0130 13:51:56.592370 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a14e8e98-f665-4850-806b-a5ad361662cf-utilities\") pod \"certified-operators-mzqf7\" (UID: \"a14e8e98-f665-4850-806b-a5ad361662cf\") " pod="openshift-marketplace/certified-operators-mzqf7" Jan 30 13:51:56 crc kubenswrapper[5039]: I0130 13:51:56.592399 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7989x\" (UniqueName: \"kubernetes.io/projected/a14e8e98-f665-4850-806b-a5ad361662cf-kube-api-access-7989x\") pod \"certified-operators-mzqf7\" (UID: \"a14e8e98-f665-4850-806b-a5ad361662cf\") " pod="openshift-marketplace/certified-operators-mzqf7" Jan 30 13:51:56 crc kubenswrapper[5039]: I0130 13:51:56.694098 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a14e8e98-f665-4850-806b-a5ad361662cf-catalog-content\") pod \"certified-operators-mzqf7\" (UID: \"a14e8e98-f665-4850-806b-a5ad361662cf\") " pod="openshift-marketplace/certified-operators-mzqf7" Jan 30 13:51:56 crc kubenswrapper[5039]: I0130 13:51:56.694449 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a14e8e98-f665-4850-806b-a5ad361662cf-utilities\") pod \"certified-operators-mzqf7\" (UID: \"a14e8e98-f665-4850-806b-a5ad361662cf\") " pod="openshift-marketplace/certified-operators-mzqf7" Jan 30 13:51:56 crc kubenswrapper[5039]: I0130 13:51:56.694573 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7989x\" (UniqueName: \"kubernetes.io/projected/a14e8e98-f665-4850-806b-a5ad361662cf-kube-api-access-7989x\") pod \"certified-operators-mzqf7\" (UID: \"a14e8e98-f665-4850-806b-a5ad361662cf\") " pod="openshift-marketplace/certified-operators-mzqf7" Jan 30 13:51:56 crc kubenswrapper[5039]: I0130 13:51:56.694691 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a14e8e98-f665-4850-806b-a5ad361662cf-catalog-content\") pod \"certified-operators-mzqf7\" (UID: \"a14e8e98-f665-4850-806b-a5ad361662cf\") " pod="openshift-marketplace/certified-operators-mzqf7" Jan 30 13:51:56 crc kubenswrapper[5039]: I0130 13:51:56.694930 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a14e8e98-f665-4850-806b-a5ad361662cf-utilities\") pod \"certified-operators-mzqf7\" (UID: \"a14e8e98-f665-4850-806b-a5ad361662cf\") " pod="openshift-marketplace/certified-operators-mzqf7" Jan 30 13:51:56 crc kubenswrapper[5039]: I0130 13:51:56.723132 5039 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7989x\" (UniqueName: \"kubernetes.io/projected/a14e8e98-f665-4850-806b-a5ad361662cf-kube-api-access-7989x\") pod \"certified-operators-mzqf7\" (UID: \"a14e8e98-f665-4850-806b-a5ad361662cf\") " pod="openshift-marketplace/certified-operators-mzqf7" Jan 30 13:51:56 crc kubenswrapper[5039]: I0130 13:51:56.783168 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mzqf7" Jan 30 13:51:57 crc kubenswrapper[5039]: I0130 13:51:57.086772 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mzqf7"] Jan 30 13:51:57 crc kubenswrapper[5039]: I0130 13:51:57.453490 5039 generic.go:334] "Generic (PLEG): container finished" podID="a14e8e98-f665-4850-806b-a5ad361662cf" containerID="40d1dc59e15a4734b2e698186e2161440d869f584515807ccb9736ac22bd55ea" exitCode=0 Jan 30 13:51:57 crc kubenswrapper[5039]: I0130 13:51:57.453599 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mzqf7" event={"ID":"a14e8e98-f665-4850-806b-a5ad361662cf","Type":"ContainerDied","Data":"40d1dc59e15a4734b2e698186e2161440d869f584515807ccb9736ac22bd55ea"} Jan 30 13:51:57 crc kubenswrapper[5039]: I0130 13:51:57.453883 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mzqf7" event={"ID":"a14e8e98-f665-4850-806b-a5ad361662cf","Type":"ContainerStarted","Data":"7c8439a40a3e45caff96569fbe5aabdc158cce87c83cfc38363ceea9ce61d6c3"} Jan 30 13:51:59 crc kubenswrapper[5039]: I0130 13:51:59.470811 5039 generic.go:334] "Generic (PLEG): container finished" podID="a14e8e98-f665-4850-806b-a5ad361662cf" containerID="2f7531b963a3b67474e1a98f85699c4143a7f1f4da57d23622dcbcc330885bcc" exitCode=0 Jan 30 13:51:59 crc kubenswrapper[5039]: I0130 13:51:59.470854 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mzqf7" event={"ID":"a14e8e98-f665-4850-806b-a5ad361662cf","Type":"ContainerDied","Data":"2f7531b963a3b67474e1a98f85699c4143a7f1f4da57d23622dcbcc330885bcc"} Jan 30 13:52:01 crc kubenswrapper[5039]: I0130 13:52:01.489392 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mzqf7" event={"ID":"a14e8e98-f665-4850-806b-a5ad361662cf","Type":"ContainerStarted","Data":"cf728051f16f2fa67b187ff72973b72f5aea314336efea401b19b8984727547b"} Jan 30 13:52:01 crc kubenswrapper[5039]: I0130 13:52:01.507455 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mzqf7" podStartSLOduration=2.497544272 podStartE2EDuration="5.507437785s" podCreationTimestamp="2026-01-30 13:51:56 +0000 UTC" firstStartedPulling="2026-01-30 13:51:57.457651684 +0000 UTC m=+2882.118332921" lastFinishedPulling="2026-01-30 13:52:00.467545167 +0000 UTC m=+2885.128226434" observedRunningTime="2026-01-30 13:52:01.506401947 +0000 UTC m=+2886.167083184" watchObservedRunningTime="2026-01-30 13:52:01.507437785 +0000 UTC m=+2886.168119012" Jan 30 13:52:06 crc kubenswrapper[5039]: I0130 13:52:06.783490 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mzqf7" Jan 30 13:52:06 crc kubenswrapper[5039]: I0130 13:52:06.784195 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mzqf7" Jan 30 13:52:06 crc kubenswrapper[5039]: I0130 13:52:06.827888 5039 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mzqf7" Jan 30 13:52:07 crc kubenswrapper[5039]: I0130 13:52:07.130180 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6xs64"] Jan 30 13:52:07 crc kubenswrapper[5039]: I0130 13:52:07.135056 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6xs64" Jan 30 13:52:07 crc kubenswrapper[5039]: I0130 13:52:07.142398 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6xs64"] Jan 30 13:52:07 crc kubenswrapper[5039]: I0130 13:52:07.259132 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf1cff45-a762-4c16-9679-0ae02a08149f-utilities\") pod \"redhat-marketplace-6xs64\" (UID: \"cf1cff45-a762-4c16-9679-0ae02a08149f\") " pod="openshift-marketplace/redhat-marketplace-6xs64" Jan 30 13:52:07 crc kubenswrapper[5039]: I0130 13:52:07.259211 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf1cff45-a762-4c16-9679-0ae02a08149f-catalog-content\") pod \"redhat-marketplace-6xs64\" (UID: \"cf1cff45-a762-4c16-9679-0ae02a08149f\") " pod="openshift-marketplace/redhat-marketplace-6xs64" Jan 30 13:52:07 crc kubenswrapper[5039]: I0130 13:52:07.259277 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz2qh\" (UniqueName: \"kubernetes.io/projected/cf1cff45-a762-4c16-9679-0ae02a08149f-kube-api-access-wz2qh\") pod \"redhat-marketplace-6xs64\" (UID: \"cf1cff45-a762-4c16-9679-0ae02a08149f\") " pod="openshift-marketplace/redhat-marketplace-6xs64" Jan 30 13:52:07 crc kubenswrapper[5039]: I0130 13:52:07.360975 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wz2qh\" (UniqueName: \"kubernetes.io/projected/cf1cff45-a762-4c16-9679-0ae02a08149f-kube-api-access-wz2qh\") pod \"redhat-marketplace-6xs64\" (UID: \"cf1cff45-a762-4c16-9679-0ae02a08149f\") " pod="openshift-marketplace/redhat-marketplace-6xs64" Jan 30 13:52:07 crc kubenswrapper[5039]: I0130 13:52:07.361697 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf1cff45-a762-4c16-9679-0ae02a08149f-utilities\") pod \"redhat-marketplace-6xs64\" (UID: \"cf1cff45-a762-4c16-9679-0ae02a08149f\") " pod="openshift-marketplace/redhat-marketplace-6xs64" Jan 30 13:52:07 crc kubenswrapper[5039]: I0130 13:52:07.361962 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf1cff45-a762-4c16-9679-0ae02a08149f-catalog-content\") pod \"redhat-marketplace-6xs64\" (UID: \"cf1cff45-a762-4c16-9679-0ae02a08149f\") " pod="openshift-marketplace/redhat-marketplace-6xs64" Jan 30 13:52:07 crc kubenswrapper[5039]: I0130 13:52:07.362232 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf1cff45-a762-4c16-9679-0ae02a08149f-utilities\") pod \"redhat-marketplace-6xs64\" (UID: \"cf1cff45-a762-4c16-9679-0ae02a08149f\") " pod="openshift-marketplace/redhat-marketplace-6xs64" Jan 30 13:52:07 crc kubenswrapper[5039]: I0130 13:52:07.362332 5039 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf1cff45-a762-4c16-9679-0ae02a08149f-catalog-content\") pod \"redhat-marketplace-6xs64\" (UID: \"cf1cff45-a762-4c16-9679-0ae02a08149f\") " pod="openshift-marketplace/redhat-marketplace-6xs64" Jan 30 13:52:07 crc kubenswrapper[5039]: I0130 13:52:07.384059 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wz2qh\" (UniqueName: \"kubernetes.io/projected/cf1cff45-a762-4c16-9679-0ae02a08149f-kube-api-access-wz2qh\") pod \"redhat-marketplace-6xs64\" (UID: \"cf1cff45-a762-4c16-9679-0ae02a08149f\") " pod="openshift-marketplace/redhat-marketplace-6xs64" Jan 30 13:52:07 crc kubenswrapper[5039]: I0130 13:52:07.459091 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6xs64" Jan 30 13:52:07 crc kubenswrapper[5039]: I0130 13:52:07.587303 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mzqf7" Jan 30 13:52:07 crc kubenswrapper[5039]: I0130 13:52:07.742524 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:52:07 crc kubenswrapper[5039]: I0130 13:52:07.742579 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:52:07 crc kubenswrapper[5039]: I0130 13:52:07.742627 5039 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" Jan 30 13:52:07 crc kubenswrapper[5039]: I0130 13:52:07.743274 5039 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"39c49ad717a10d99f5a08af64e2027e2654c0b243e7de4e94639167a9b9df807"} pod="openshift-machine-config-operator/machine-config-daemon-t2btn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 13:52:07 crc kubenswrapper[5039]: I0130 13:52:07.743327 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" containerID="cri-o://39c49ad717a10d99f5a08af64e2027e2654c0b243e7de4e94639167a9b9df807" gracePeriod=600 Jan 30 13:52:07 crc kubenswrapper[5039]: I0130 13:52:07.905452 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6xs64"] Jan 30 13:52:08 crc kubenswrapper[5039]: I0130 13:52:08.540859 5039 generic.go:334] "Generic (PLEG): container finished" podID="cf1cff45-a762-4c16-9679-0ae02a08149f" containerID="788ac685eb00efaa01a9b09a3052d21f90c82a26384967a95a50786e910a3fdf" exitCode=0 Jan 30 13:52:08 crc kubenswrapper[5039]: I0130 13:52:08.540915 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6xs64" 
event={"ID":"cf1cff45-a762-4c16-9679-0ae02a08149f","Type":"ContainerDied","Data":"788ac685eb00efaa01a9b09a3052d21f90c82a26384967a95a50786e910a3fdf"} Jan 30 13:52:08 crc kubenswrapper[5039]: I0130 13:52:08.541574 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6xs64" event={"ID":"cf1cff45-a762-4c16-9679-0ae02a08149f","Type":"ContainerStarted","Data":"42985f2dce9c84456d9ef812a295a7b21112fa133139cdf68da820cdf813cf0a"} Jan 30 13:52:08 crc kubenswrapper[5039]: I0130 13:52:08.546950 5039 generic.go:334] "Generic (PLEG): container finished" podID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerID="39c49ad717a10d99f5a08af64e2027e2654c0b243e7de4e94639167a9b9df807" exitCode=0 Jan 30 13:52:08 crc kubenswrapper[5039]: I0130 13:52:08.547062 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerDied","Data":"39c49ad717a10d99f5a08af64e2027e2654c0b243e7de4e94639167a9b9df807"} Jan 30 13:52:08 crc kubenswrapper[5039]: I0130 13:52:08.547115 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerStarted","Data":"87bbf19118f7061dac43073a1ad9a3bab48c45eba9c7608a532f004ca5be04c7"} Jan 30 13:52:08 crc kubenswrapper[5039]: I0130 13:52:08.547136 5039 scope.go:117] "RemoveContainer" containerID="b137761de9c19e6ddc3953e928e1d2b4dfce5d4b3875867a735acd621c6888ee" Jan 30 13:52:09 crc kubenswrapper[5039]: I0130 13:52:09.863797 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mzqf7"] Jan 30 13:52:09 crc kubenswrapper[5039]: I0130 13:52:09.864315 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mzqf7" podUID="a14e8e98-f665-4850-806b-a5ad361662cf" containerName="registry-server" containerID="cri-o://cf728051f16f2fa67b187ff72973b72f5aea314336efea401b19b8984727547b" gracePeriod=2 Jan 30 13:52:10 crc kubenswrapper[5039]: I0130 13:52:10.308226 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mzqf7" Jan 30 13:52:10 crc kubenswrapper[5039]: I0130 13:52:10.407702 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a14e8e98-f665-4850-806b-a5ad361662cf-catalog-content\") pod \"a14e8e98-f665-4850-806b-a5ad361662cf\" (UID: \"a14e8e98-f665-4850-806b-a5ad361662cf\") " Jan 30 13:52:10 crc kubenswrapper[5039]: I0130 13:52:10.407810 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7989x\" (UniqueName: \"kubernetes.io/projected/a14e8e98-f665-4850-806b-a5ad361662cf-kube-api-access-7989x\") pod \"a14e8e98-f665-4850-806b-a5ad361662cf\" (UID: \"a14e8e98-f665-4850-806b-a5ad361662cf\") " Jan 30 13:52:10 crc kubenswrapper[5039]: I0130 13:52:10.407847 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a14e8e98-f665-4850-806b-a5ad361662cf-utilities\") pod \"a14e8e98-f665-4850-806b-a5ad361662cf\" (UID: \"a14e8e98-f665-4850-806b-a5ad361662cf\") " Jan 30 13:52:10 crc kubenswrapper[5039]: I0130 13:52:10.409170 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a14e8e98-f665-4850-806b-a5ad361662cf-utilities" (OuterVolumeSpecName: "utilities") pod "a14e8e98-f665-4850-806b-a5ad361662cf" (UID: "a14e8e98-f665-4850-806b-a5ad361662cf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:52:10 crc kubenswrapper[5039]: I0130 13:52:10.413569 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a14e8e98-f665-4850-806b-a5ad361662cf-kube-api-access-7989x" (OuterVolumeSpecName: "kube-api-access-7989x") pod "a14e8e98-f665-4850-806b-a5ad361662cf" (UID: "a14e8e98-f665-4850-806b-a5ad361662cf"). InnerVolumeSpecName "kube-api-access-7989x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:52:10 crc kubenswrapper[5039]: I0130 13:52:10.465431 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a14e8e98-f665-4850-806b-a5ad361662cf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a14e8e98-f665-4850-806b-a5ad361662cf" (UID: "a14e8e98-f665-4850-806b-a5ad361662cf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:52:10 crc kubenswrapper[5039]: I0130 13:52:10.509763 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a14e8e98-f665-4850-806b-a5ad361662cf-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:52:10 crc kubenswrapper[5039]: I0130 13:52:10.509798 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7989x\" (UniqueName: \"kubernetes.io/projected/a14e8e98-f665-4850-806b-a5ad361662cf-kube-api-access-7989x\") on node \"crc\" DevicePath \"\"" Jan 30 13:52:10 crc kubenswrapper[5039]: I0130 13:52:10.509809 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a14e8e98-f665-4850-806b-a5ad361662cf-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 13:52:10 crc kubenswrapper[5039]: I0130 13:52:10.569699 5039 generic.go:334] "Generic (PLEG): container finished" podID="cf1cff45-a762-4c16-9679-0ae02a08149f" containerID="c3082977eda89dce0d26a761c99f1eab3949b1201ef03e2b3181eb0ab9dd4fb3" exitCode=0 Jan 30 13:52:10 crc kubenswrapper[5039]: I0130 13:52:10.569771 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6xs64" event={"ID":"cf1cff45-a762-4c16-9679-0ae02a08149f","Type":"ContainerDied","Data":"c3082977eda89dce0d26a761c99f1eab3949b1201ef03e2b3181eb0ab9dd4fb3"} Jan 30 13:52:10 crc kubenswrapper[5039]: I0130 13:52:10.574303 5039 generic.go:334] "Generic (PLEG): container finished" podID="a14e8e98-f665-4850-806b-a5ad361662cf" containerID="cf728051f16f2fa67b187ff72973b72f5aea314336efea401b19b8984727547b" exitCode=0 Jan 30 13:52:10 crc kubenswrapper[5039]: I0130 13:52:10.574362 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mzqf7" event={"ID":"a14e8e98-f665-4850-806b-a5ad361662cf","Type":"ContainerDied","Data":"cf728051f16f2fa67b187ff72973b72f5aea314336efea401b19b8984727547b"} Jan 30 13:52:10 crc kubenswrapper[5039]: I0130 13:52:10.574407 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mzqf7" Jan 30 13:52:10 crc kubenswrapper[5039]: I0130 13:52:10.574429 5039 scope.go:117] "RemoveContainer" containerID="cf728051f16f2fa67b187ff72973b72f5aea314336efea401b19b8984727547b" Jan 30 13:52:10 crc kubenswrapper[5039]: I0130 13:52:10.574412 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mzqf7" event={"ID":"a14e8e98-f665-4850-806b-a5ad361662cf","Type":"ContainerDied","Data":"7c8439a40a3e45caff96569fbe5aabdc158cce87c83cfc38363ceea9ce61d6c3"} Jan 30 13:52:10 crc kubenswrapper[5039]: I0130 13:52:10.602658 5039 scope.go:117] "RemoveContainer" containerID="2f7531b963a3b67474e1a98f85699c4143a7f1f4da57d23622dcbcc330885bcc" Jan 30 13:52:10 crc kubenswrapper[5039]: I0130 13:52:10.615626 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mzqf7"] Jan 30 13:52:10 crc kubenswrapper[5039]: I0130 13:52:10.620670 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mzqf7"] Jan 30 13:52:10 crc kubenswrapper[5039]: I0130 13:52:10.627870 5039 scope.go:117] "RemoveContainer" containerID="40d1dc59e15a4734b2e698186e2161440d869f584515807ccb9736ac22bd55ea" Jan 30 13:52:10 crc kubenswrapper[5039]: I0130 13:52:10.645908 5039 scope.go:117] "RemoveContainer" containerID="cf728051f16f2fa67b187ff72973b72f5aea314336efea401b19b8984727547b" Jan 30 13:52:10 crc kubenswrapper[5039]: E0130 13:52:10.646634 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf728051f16f2fa67b187ff72973b72f5aea314336efea401b19b8984727547b\": container with ID starting with cf728051f16f2fa67b187ff72973b72f5aea314336efea401b19b8984727547b not found: ID does not exist" containerID="cf728051f16f2fa67b187ff72973b72f5aea314336efea401b19b8984727547b" Jan 30 13:52:10 crc kubenswrapper[5039]: I0130 13:52:10.646667 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf728051f16f2fa67b187ff72973b72f5aea314336efea401b19b8984727547b"} err="failed to get container status \"cf728051f16f2fa67b187ff72973b72f5aea314336efea401b19b8984727547b\": rpc error: code = NotFound desc = could not find container \"cf728051f16f2fa67b187ff72973b72f5aea314336efea401b19b8984727547b\": container with ID starting with cf728051f16f2fa67b187ff72973b72f5aea314336efea401b19b8984727547b not found: ID does not exist" Jan 30 13:52:10 crc kubenswrapper[5039]: I0130 13:52:10.646690 5039 scope.go:117] "RemoveContainer" containerID="2f7531b963a3b67474e1a98f85699c4143a7f1f4da57d23622dcbcc330885bcc" Jan 30 13:52:10 crc kubenswrapper[5039]: E0130 13:52:10.647493 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f7531b963a3b67474e1a98f85699c4143a7f1f4da57d23622dcbcc330885bcc\": container with ID starting with 2f7531b963a3b67474e1a98f85699c4143a7f1f4da57d23622dcbcc330885bcc not found: ID does not exist" containerID="2f7531b963a3b67474e1a98f85699c4143a7f1f4da57d23622dcbcc330885bcc" Jan 30 13:52:10 crc kubenswrapper[5039]: I0130 13:52:10.647529 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f7531b963a3b67474e1a98f85699c4143a7f1f4da57d23622dcbcc330885bcc"} err="failed to get container status \"2f7531b963a3b67474e1a98f85699c4143a7f1f4da57d23622dcbcc330885bcc\": rpc error: code = NotFound desc = could not find 
container \"2f7531b963a3b67474e1a98f85699c4143a7f1f4da57d23622dcbcc330885bcc\": container with ID starting with 2f7531b963a3b67474e1a98f85699c4143a7f1f4da57d23622dcbcc330885bcc not found: ID does not exist" Jan 30 13:52:10 crc kubenswrapper[5039]: I0130 13:52:10.647571 5039 scope.go:117] "RemoveContainer" containerID="40d1dc59e15a4734b2e698186e2161440d869f584515807ccb9736ac22bd55ea" Jan 30 13:52:10 crc kubenswrapper[5039]: E0130 13:52:10.648175 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40d1dc59e15a4734b2e698186e2161440d869f584515807ccb9736ac22bd55ea\": container with ID starting with 40d1dc59e15a4734b2e698186e2161440d869f584515807ccb9736ac22bd55ea not found: ID does not exist" containerID="40d1dc59e15a4734b2e698186e2161440d869f584515807ccb9736ac22bd55ea" Jan 30 13:52:10 crc kubenswrapper[5039]: I0130 13:52:10.648210 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40d1dc59e15a4734b2e698186e2161440d869f584515807ccb9736ac22bd55ea"} err="failed to get container status \"40d1dc59e15a4734b2e698186e2161440d869f584515807ccb9736ac22bd55ea\": rpc error: code = NotFound desc = could not find container \"40d1dc59e15a4734b2e698186e2161440d869f584515807ccb9736ac22bd55ea\": container with ID starting with 40d1dc59e15a4734b2e698186e2161440d869f584515807ccb9736ac22bd55ea not found: ID does not exist" Jan 30 13:52:11 crc kubenswrapper[5039]: I0130 13:52:11.582632 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6xs64" event={"ID":"cf1cff45-a762-4c16-9679-0ae02a08149f","Type":"ContainerStarted","Data":"ab2b7855130543ad5dacbfb2846935ae34ad6528d1ac4b0731f522960e1d57f3"} Jan 30 13:52:11 crc kubenswrapper[5039]: I0130 13:52:11.609373 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6xs64" podStartSLOduration=2.046027234 podStartE2EDuration="4.609359845s" podCreationTimestamp="2026-01-30 13:52:07 +0000 UTC" firstStartedPulling="2026-01-30 13:52:08.542640125 +0000 UTC m=+2893.203321352" lastFinishedPulling="2026-01-30 13:52:11.105972716 +0000 UTC m=+2895.766653963" observedRunningTime="2026-01-30 13:52:11.606703333 +0000 UTC m=+2896.267384580" watchObservedRunningTime="2026-01-30 13:52:11.609359845 +0000 UTC m=+2896.270041072" Jan 30 13:52:12 crc kubenswrapper[5039]: I0130 13:52:12.102269 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a14e8e98-f665-4850-806b-a5ad361662cf" path="/var/lib/kubelet/pods/a14e8e98-f665-4850-806b-a5ad361662cf/volumes" Jan 30 13:52:17 crc kubenswrapper[5039]: I0130 13:52:17.460300 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6xs64" Jan 30 13:52:17 crc kubenswrapper[5039]: I0130 13:52:17.460935 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6xs64" Jan 30 13:52:17 crc kubenswrapper[5039]: I0130 13:52:17.505751 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6xs64" Jan 30 13:52:17 crc kubenswrapper[5039]: I0130 13:52:17.679121 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6xs64" Jan 30 13:52:17 crc kubenswrapper[5039]: I0130 13:52:17.741632 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-6xs64"] Jan 30 13:52:19 crc kubenswrapper[5039]: I0130 13:52:19.638574 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6xs64" podUID="cf1cff45-a762-4c16-9679-0ae02a08149f" containerName="registry-server" containerID="cri-o://ab2b7855130543ad5dacbfb2846935ae34ad6528d1ac4b0731f522960e1d57f3" gracePeriod=2 Jan 30 13:52:20 crc kubenswrapper[5039]: I0130 13:52:20.047541 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6xs64" Jan 30 13:52:20 crc kubenswrapper[5039]: I0130 13:52:20.158945 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wz2qh\" (UniqueName: \"kubernetes.io/projected/cf1cff45-a762-4c16-9679-0ae02a08149f-kube-api-access-wz2qh\") pod \"cf1cff45-a762-4c16-9679-0ae02a08149f\" (UID: \"cf1cff45-a762-4c16-9679-0ae02a08149f\") " Jan 30 13:52:20 crc kubenswrapper[5039]: I0130 13:52:20.159158 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf1cff45-a762-4c16-9679-0ae02a08149f-utilities\") pod \"cf1cff45-a762-4c16-9679-0ae02a08149f\" (UID: \"cf1cff45-a762-4c16-9679-0ae02a08149f\") " Jan 30 13:52:20 crc kubenswrapper[5039]: I0130 13:52:20.159194 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf1cff45-a762-4c16-9679-0ae02a08149f-catalog-content\") pod \"cf1cff45-a762-4c16-9679-0ae02a08149f\" (UID: \"cf1cff45-a762-4c16-9679-0ae02a08149f\") " Jan 30 13:52:20 crc kubenswrapper[5039]: I0130 13:52:20.160862 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf1cff45-a762-4c16-9679-0ae02a08149f-utilities" (OuterVolumeSpecName: "utilities") pod "cf1cff45-a762-4c16-9679-0ae02a08149f" (UID: "cf1cff45-a762-4c16-9679-0ae02a08149f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:52:20 crc kubenswrapper[5039]: I0130 13:52:20.168250 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf1cff45-a762-4c16-9679-0ae02a08149f-kube-api-access-wz2qh" (OuterVolumeSpecName: "kube-api-access-wz2qh") pod "cf1cff45-a762-4c16-9679-0ae02a08149f" (UID: "cf1cff45-a762-4c16-9679-0ae02a08149f"). InnerVolumeSpecName "kube-api-access-wz2qh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:52:20 crc kubenswrapper[5039]: I0130 13:52:20.189572 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf1cff45-a762-4c16-9679-0ae02a08149f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cf1cff45-a762-4c16-9679-0ae02a08149f" (UID: "cf1cff45-a762-4c16-9679-0ae02a08149f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:52:20 crc kubenswrapper[5039]: I0130 13:52:20.262563 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wz2qh\" (UniqueName: \"kubernetes.io/projected/cf1cff45-a762-4c16-9679-0ae02a08149f-kube-api-access-wz2qh\") on node \"crc\" DevicePath \"\"" Jan 30 13:52:20 crc kubenswrapper[5039]: I0130 13:52:20.262653 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf1cff45-a762-4c16-9679-0ae02a08149f-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 13:52:20 crc kubenswrapper[5039]: I0130 13:52:20.262665 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf1cff45-a762-4c16-9679-0ae02a08149f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:52:20 crc kubenswrapper[5039]: I0130 13:52:20.647388 5039 generic.go:334] "Generic (PLEG): container finished" podID="cf1cff45-a762-4c16-9679-0ae02a08149f" containerID="ab2b7855130543ad5dacbfb2846935ae34ad6528d1ac4b0731f522960e1d57f3" exitCode=0 Jan 30 13:52:20 crc kubenswrapper[5039]: I0130 13:52:20.647434 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6xs64" event={"ID":"cf1cff45-a762-4c16-9679-0ae02a08149f","Type":"ContainerDied","Data":"ab2b7855130543ad5dacbfb2846935ae34ad6528d1ac4b0731f522960e1d57f3"} Jan 30 13:52:20 crc kubenswrapper[5039]: I0130 13:52:20.647463 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6xs64" event={"ID":"cf1cff45-a762-4c16-9679-0ae02a08149f","Type":"ContainerDied","Data":"42985f2dce9c84456d9ef812a295a7b21112fa133139cdf68da820cdf813cf0a"} Jan 30 13:52:20 crc kubenswrapper[5039]: I0130 13:52:20.647481 5039 scope.go:117] "RemoveContainer" containerID="ab2b7855130543ad5dacbfb2846935ae34ad6528d1ac4b0731f522960e1d57f3" Jan 30 13:52:20 crc kubenswrapper[5039]: I0130 13:52:20.647491 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6xs64" Jan 30 13:52:20 crc kubenswrapper[5039]: I0130 13:52:20.669920 5039 scope.go:117] "RemoveContainer" containerID="c3082977eda89dce0d26a761c99f1eab3949b1201ef03e2b3181eb0ab9dd4fb3" Jan 30 13:52:20 crc kubenswrapper[5039]: I0130 13:52:20.686702 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6xs64"] Jan 30 13:52:20 crc kubenswrapper[5039]: I0130 13:52:20.691781 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6xs64"] Jan 30 13:52:20 crc kubenswrapper[5039]: I0130 13:52:20.704130 5039 scope.go:117] "RemoveContainer" containerID="788ac685eb00efaa01a9b09a3052d21f90c82a26384967a95a50786e910a3fdf" Jan 30 13:52:20 crc kubenswrapper[5039]: I0130 13:52:20.721406 5039 scope.go:117] "RemoveContainer" containerID="ab2b7855130543ad5dacbfb2846935ae34ad6528d1ac4b0731f522960e1d57f3" Jan 30 13:52:20 crc kubenswrapper[5039]: E0130 13:52:20.721840 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab2b7855130543ad5dacbfb2846935ae34ad6528d1ac4b0731f522960e1d57f3\": container with ID starting with ab2b7855130543ad5dacbfb2846935ae34ad6528d1ac4b0731f522960e1d57f3 not found: ID does not exist" containerID="ab2b7855130543ad5dacbfb2846935ae34ad6528d1ac4b0731f522960e1d57f3" Jan 30 13:52:20 crc kubenswrapper[5039]: I0130 13:52:20.721879 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab2b7855130543ad5dacbfb2846935ae34ad6528d1ac4b0731f522960e1d57f3"} err="failed to get container status \"ab2b7855130543ad5dacbfb2846935ae34ad6528d1ac4b0731f522960e1d57f3\": rpc error: code = NotFound desc = could not find container \"ab2b7855130543ad5dacbfb2846935ae34ad6528d1ac4b0731f522960e1d57f3\": container with ID starting with ab2b7855130543ad5dacbfb2846935ae34ad6528d1ac4b0731f522960e1d57f3 not found: ID does not exist" Jan 30 13:52:20 crc kubenswrapper[5039]: I0130 13:52:20.721906 5039 scope.go:117] "RemoveContainer" containerID="c3082977eda89dce0d26a761c99f1eab3949b1201ef03e2b3181eb0ab9dd4fb3" Jan 30 13:52:20 crc kubenswrapper[5039]: E0130 13:52:20.722175 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3082977eda89dce0d26a761c99f1eab3949b1201ef03e2b3181eb0ab9dd4fb3\": container with ID starting with c3082977eda89dce0d26a761c99f1eab3949b1201ef03e2b3181eb0ab9dd4fb3 not found: ID does not exist" containerID="c3082977eda89dce0d26a761c99f1eab3949b1201ef03e2b3181eb0ab9dd4fb3" Jan 30 13:52:20 crc kubenswrapper[5039]: I0130 13:52:20.722207 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3082977eda89dce0d26a761c99f1eab3949b1201ef03e2b3181eb0ab9dd4fb3"} err="failed to get container status \"c3082977eda89dce0d26a761c99f1eab3949b1201ef03e2b3181eb0ab9dd4fb3\": rpc error: code = NotFound desc = could not find container \"c3082977eda89dce0d26a761c99f1eab3949b1201ef03e2b3181eb0ab9dd4fb3\": container with ID starting with c3082977eda89dce0d26a761c99f1eab3949b1201ef03e2b3181eb0ab9dd4fb3 not found: ID does not exist" Jan 30 13:52:20 crc kubenswrapper[5039]: I0130 13:52:20.722223 5039 scope.go:117] "RemoveContainer" containerID="788ac685eb00efaa01a9b09a3052d21f90c82a26384967a95a50786e910a3fdf" Jan 30 13:52:20 crc kubenswrapper[5039]: E0130 13:52:20.722449 5039 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"788ac685eb00efaa01a9b09a3052d21f90c82a26384967a95a50786e910a3fdf\": container with ID starting with 788ac685eb00efaa01a9b09a3052d21f90c82a26384967a95a50786e910a3fdf not found: ID does not exist" containerID="788ac685eb00efaa01a9b09a3052d21f90c82a26384967a95a50786e910a3fdf" Jan 30 13:52:20 crc kubenswrapper[5039]: I0130 13:52:20.722472 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"788ac685eb00efaa01a9b09a3052d21f90c82a26384967a95a50786e910a3fdf"} err="failed to get container status \"788ac685eb00efaa01a9b09a3052d21f90c82a26384967a95a50786e910a3fdf\": rpc error: code = NotFound desc = could not find container \"788ac685eb00efaa01a9b09a3052d21f90c82a26384967a95a50786e910a3fdf\": container with ID starting with 788ac685eb00efaa01a9b09a3052d21f90c82a26384967a95a50786e910a3fdf not found: ID does not exist" Jan 30 13:52:22 crc kubenswrapper[5039]: I0130 13:52:22.110570 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf1cff45-a762-4c16-9679-0ae02a08149f" path="/var/lib/kubelet/pods/cf1cff45-a762-4c16-9679-0ae02a08149f/volumes" Jan 30 13:54:37 crc kubenswrapper[5039]: I0130 13:54:37.742693 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:54:37 crc kubenswrapper[5039]: I0130 13:54:37.743282 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:55:07 crc kubenswrapper[5039]: I0130 13:55:07.742458 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:55:07 crc kubenswrapper[5039]: I0130 13:55:07.743151 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:55:37 crc kubenswrapper[5039]: I0130 13:55:37.742909 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:55:37 crc kubenswrapper[5039]: I0130 13:55:37.743851 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:55:37 crc kubenswrapper[5039]: I0130 13:55:37.743930 5039 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" Jan 30 13:55:37 crc kubenswrapper[5039]: I0130 13:55:37.744850 5039 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"87bbf19118f7061dac43073a1ad9a3bab48c45eba9c7608a532f004ca5be04c7"} pod="openshift-machine-config-operator/machine-config-daemon-t2btn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 13:55:37 crc kubenswrapper[5039]: I0130 13:55:37.744985 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" containerID="cri-o://87bbf19118f7061dac43073a1ad9a3bab48c45eba9c7608a532f004ca5be04c7" gracePeriod=600 Jan 30 13:55:37 crc kubenswrapper[5039]: E0130 13:55:37.880373 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:55:38 crc kubenswrapper[5039]: I0130 13:55:38.461964 5039 generic.go:334] "Generic (PLEG): container finished" podID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerID="87bbf19118f7061dac43073a1ad9a3bab48c45eba9c7608a532f004ca5be04c7" exitCode=0 Jan 30 13:55:38 crc kubenswrapper[5039]: I0130 13:55:38.462054 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerDied","Data":"87bbf19118f7061dac43073a1ad9a3bab48c45eba9c7608a532f004ca5be04c7"} Jan 30 13:55:38 crc kubenswrapper[5039]: I0130 13:55:38.462103 5039 scope.go:117] "RemoveContainer" containerID="39c49ad717a10d99f5a08af64e2027e2654c0b243e7de4e94639167a9b9df807" Jan 30 13:55:38 crc kubenswrapper[5039]: I0130 13:55:38.462750 5039 scope.go:117] "RemoveContainer" containerID="87bbf19118f7061dac43073a1ad9a3bab48c45eba9c7608a532f004ca5be04c7" Jan 30 13:55:38 crc kubenswrapper[5039]: E0130 13:55:38.463156 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:55:53 crc kubenswrapper[5039]: I0130 13:55:53.093862 5039 scope.go:117] "RemoveContainer" containerID="87bbf19118f7061dac43073a1ad9a3bab48c45eba9c7608a532f004ca5be04c7" Jan 30 13:55:53 crc kubenswrapper[5039]: E0130 13:55:53.094634 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:56:06 crc 
kubenswrapper[5039]: I0130 13:56:06.100090 5039 scope.go:117] "RemoveContainer" containerID="87bbf19118f7061dac43073a1ad9a3bab48c45eba9c7608a532f004ca5be04c7" Jan 30 13:56:06 crc kubenswrapper[5039]: E0130 13:56:06.101175 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:56:20 crc kubenswrapper[5039]: I0130 13:56:20.094483 5039 scope.go:117] "RemoveContainer" containerID="87bbf19118f7061dac43073a1ad9a3bab48c45eba9c7608a532f004ca5be04c7" Jan 30 13:56:20 crc kubenswrapper[5039]: E0130 13:56:20.095339 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:56:35 crc kubenswrapper[5039]: I0130 13:56:35.094235 5039 scope.go:117] "RemoveContainer" containerID="87bbf19118f7061dac43073a1ad9a3bab48c45eba9c7608a532f004ca5be04c7" Jan 30 13:56:35 crc kubenswrapper[5039]: E0130 13:56:35.096120 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:56:49 crc kubenswrapper[5039]: I0130 13:56:49.093095 5039 scope.go:117] "RemoveContainer" containerID="87bbf19118f7061dac43073a1ad9a3bab48c45eba9c7608a532f004ca5be04c7" Jan 30 13:56:49 crc kubenswrapper[5039]: E0130 13:56:49.093929 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:57:04 crc kubenswrapper[5039]: I0130 13:57:04.094592 5039 scope.go:117] "RemoveContainer" containerID="87bbf19118f7061dac43073a1ad9a3bab48c45eba9c7608a532f004ca5be04c7" Jan 30 13:57:04 crc kubenswrapper[5039]: E0130 13:57:04.095916 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:57:15 crc kubenswrapper[5039]: I0130 13:57:15.094547 5039 scope.go:117] "RemoveContainer" containerID="87bbf19118f7061dac43073a1ad9a3bab48c45eba9c7608a532f004ca5be04c7" Jan 30 13:57:15 crc 
kubenswrapper[5039]: E0130 13:57:15.095920 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:57:30 crc kubenswrapper[5039]: I0130 13:57:30.094893 5039 scope.go:117] "RemoveContainer" containerID="87bbf19118f7061dac43073a1ad9a3bab48c45eba9c7608a532f004ca5be04c7" Jan 30 13:57:30 crc kubenswrapper[5039]: E0130 13:57:30.096556 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:57:43 crc kubenswrapper[5039]: I0130 13:57:43.093669 5039 scope.go:117] "RemoveContainer" containerID="87bbf19118f7061dac43073a1ad9a3bab48c45eba9c7608a532f004ca5be04c7" Jan 30 13:57:43 crc kubenswrapper[5039]: E0130 13:57:43.094568 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:57:57 crc kubenswrapper[5039]: I0130 13:57:57.093510 5039 scope.go:117] "RemoveContainer" containerID="87bbf19118f7061dac43073a1ad9a3bab48c45eba9c7608a532f004ca5be04c7" Jan 30 13:57:57 crc kubenswrapper[5039]: E0130 13:57:57.094385 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:58:03 crc kubenswrapper[5039]: I0130 13:58:03.337720 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-p6g5b"] Jan 30 13:58:03 crc kubenswrapper[5039]: E0130 13:58:03.339164 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a14e8e98-f665-4850-806b-a5ad361662cf" containerName="registry-server" Jan 30 13:58:03 crc kubenswrapper[5039]: I0130 13:58:03.339183 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="a14e8e98-f665-4850-806b-a5ad361662cf" containerName="registry-server" Jan 30 13:58:03 crc kubenswrapper[5039]: E0130 13:58:03.339197 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a14e8e98-f665-4850-806b-a5ad361662cf" containerName="extract-content" Jan 30 13:58:03 crc kubenswrapper[5039]: I0130 13:58:03.339204 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="a14e8e98-f665-4850-806b-a5ad361662cf" containerName="extract-content" Jan 30 13:58:03 crc kubenswrapper[5039]: E0130 13:58:03.339230 5039 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="a14e8e98-f665-4850-806b-a5ad361662cf" containerName="extract-utilities" Jan 30 13:58:03 crc kubenswrapper[5039]: I0130 13:58:03.339241 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="a14e8e98-f665-4850-806b-a5ad361662cf" containerName="extract-utilities" Jan 30 13:58:03 crc kubenswrapper[5039]: E0130 13:58:03.339253 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf1cff45-a762-4c16-9679-0ae02a08149f" containerName="extract-utilities" Jan 30 13:58:03 crc kubenswrapper[5039]: I0130 13:58:03.339260 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf1cff45-a762-4c16-9679-0ae02a08149f" containerName="extract-utilities" Jan 30 13:58:03 crc kubenswrapper[5039]: E0130 13:58:03.339275 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf1cff45-a762-4c16-9679-0ae02a08149f" containerName="registry-server" Jan 30 13:58:03 crc kubenswrapper[5039]: I0130 13:58:03.339283 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf1cff45-a762-4c16-9679-0ae02a08149f" containerName="registry-server" Jan 30 13:58:03 crc kubenswrapper[5039]: E0130 13:58:03.339290 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf1cff45-a762-4c16-9679-0ae02a08149f" containerName="extract-content" Jan 30 13:58:03 crc kubenswrapper[5039]: I0130 13:58:03.339299 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf1cff45-a762-4c16-9679-0ae02a08149f" containerName="extract-content" Jan 30 13:58:03 crc kubenswrapper[5039]: I0130 13:58:03.339475 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf1cff45-a762-4c16-9679-0ae02a08149f" containerName="registry-server" Jan 30 13:58:03 crc kubenswrapper[5039]: I0130 13:58:03.339501 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="a14e8e98-f665-4850-806b-a5ad361662cf" containerName="registry-server" Jan 30 13:58:03 crc kubenswrapper[5039]: I0130 13:58:03.341135 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p6g5b" Jan 30 13:58:03 crc kubenswrapper[5039]: I0130 13:58:03.352266 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p6g5b"] Jan 30 13:58:03 crc kubenswrapper[5039]: I0130 13:58:03.417825 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24611f3d-a1dc-4f1d-8949-7cf74e30549b-utilities\") pod \"community-operators-p6g5b\" (UID: \"24611f3d-a1dc-4f1d-8949-7cf74e30549b\") " pod="openshift-marketplace/community-operators-p6g5b" Jan 30 13:58:03 crc kubenswrapper[5039]: I0130 13:58:03.417911 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24611f3d-a1dc-4f1d-8949-7cf74e30549b-catalog-content\") pod \"community-operators-p6g5b\" (UID: \"24611f3d-a1dc-4f1d-8949-7cf74e30549b\") " pod="openshift-marketplace/community-operators-p6g5b" Jan 30 13:58:03 crc kubenswrapper[5039]: I0130 13:58:03.418027 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dclhz\" (UniqueName: \"kubernetes.io/projected/24611f3d-a1dc-4f1d-8949-7cf74e30549b-kube-api-access-dclhz\") pod \"community-operators-p6g5b\" (UID: \"24611f3d-a1dc-4f1d-8949-7cf74e30549b\") " pod="openshift-marketplace/community-operators-p6g5b" Jan 30 13:58:03 crc kubenswrapper[5039]: I0130 13:58:03.519713 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24611f3d-a1dc-4f1d-8949-7cf74e30549b-utilities\") pod \"community-operators-p6g5b\" (UID: \"24611f3d-a1dc-4f1d-8949-7cf74e30549b\") " pod="openshift-marketplace/community-operators-p6g5b" Jan 30 13:58:03 crc kubenswrapper[5039]: I0130 13:58:03.519789 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24611f3d-a1dc-4f1d-8949-7cf74e30549b-catalog-content\") pod \"community-operators-p6g5b\" (UID: \"24611f3d-a1dc-4f1d-8949-7cf74e30549b\") " pod="openshift-marketplace/community-operators-p6g5b" Jan 30 13:58:03 crc kubenswrapper[5039]: I0130 13:58:03.519824 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dclhz\" (UniqueName: \"kubernetes.io/projected/24611f3d-a1dc-4f1d-8949-7cf74e30549b-kube-api-access-dclhz\") pod \"community-operators-p6g5b\" (UID: \"24611f3d-a1dc-4f1d-8949-7cf74e30549b\") " pod="openshift-marketplace/community-operators-p6g5b" Jan 30 13:58:03 crc kubenswrapper[5039]: I0130 13:58:03.520570 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24611f3d-a1dc-4f1d-8949-7cf74e30549b-utilities\") pod \"community-operators-p6g5b\" (UID: \"24611f3d-a1dc-4f1d-8949-7cf74e30549b\") " pod="openshift-marketplace/community-operators-p6g5b" Jan 30 13:58:03 crc kubenswrapper[5039]: I0130 13:58:03.520627 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24611f3d-a1dc-4f1d-8949-7cf74e30549b-catalog-content\") pod \"community-operators-p6g5b\" (UID: \"24611f3d-a1dc-4f1d-8949-7cf74e30549b\") " pod="openshift-marketplace/community-operators-p6g5b" Jan 30 13:58:03 crc kubenswrapper[5039]: I0130 13:58:03.547291 5039 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-dclhz\" (UniqueName: \"kubernetes.io/projected/24611f3d-a1dc-4f1d-8949-7cf74e30549b-kube-api-access-dclhz\") pod \"community-operators-p6g5b\" (UID: \"24611f3d-a1dc-4f1d-8949-7cf74e30549b\") " pod="openshift-marketplace/community-operators-p6g5b" Jan 30 13:58:03 crc kubenswrapper[5039]: I0130 13:58:03.676063 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p6g5b" Jan 30 13:58:04 crc kubenswrapper[5039]: I0130 13:58:04.239850 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p6g5b"] Jan 30 13:58:04 crc kubenswrapper[5039]: I0130 13:58:04.557211 5039 generic.go:334] "Generic (PLEG): container finished" podID="24611f3d-a1dc-4f1d-8949-7cf74e30549b" containerID="812077923cb7878f33f74b0bab2a2c9a0b1fcf4b62b56783372e8a10bd5cfd9a" exitCode=0 Jan 30 13:58:04 crc kubenswrapper[5039]: I0130 13:58:04.557320 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p6g5b" event={"ID":"24611f3d-a1dc-4f1d-8949-7cf74e30549b","Type":"ContainerDied","Data":"812077923cb7878f33f74b0bab2a2c9a0b1fcf4b62b56783372e8a10bd5cfd9a"} Jan 30 13:58:04 crc kubenswrapper[5039]: I0130 13:58:04.557538 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p6g5b" event={"ID":"24611f3d-a1dc-4f1d-8949-7cf74e30549b","Type":"ContainerStarted","Data":"2fbd9771d813d2233a001d0c399bd56efdc45b509858cf026ffd0328faca10c9"} Jan 30 13:58:04 crc kubenswrapper[5039]: I0130 13:58:04.558923 5039 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 13:58:06 crc kubenswrapper[5039]: I0130 13:58:06.576983 5039 generic.go:334] "Generic (PLEG): container finished" podID="24611f3d-a1dc-4f1d-8949-7cf74e30549b" containerID="1cfd15cab6653371509214fb972411382f2db68c4cb5cac1afa4475d9bbe96f4" exitCode=0 Jan 30 13:58:06 crc kubenswrapper[5039]: I0130 13:58:06.577179 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p6g5b" event={"ID":"24611f3d-a1dc-4f1d-8949-7cf74e30549b","Type":"ContainerDied","Data":"1cfd15cab6653371509214fb972411382f2db68c4cb5cac1afa4475d9bbe96f4"} Jan 30 13:58:07 crc kubenswrapper[5039]: I0130 13:58:07.592869 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p6g5b" event={"ID":"24611f3d-a1dc-4f1d-8949-7cf74e30549b","Type":"ContainerStarted","Data":"1a25b6e8de3ad8b8cc597339631d50e25b53eb77983655ca0cd32a0179a25b1f"} Jan 30 13:58:07 crc kubenswrapper[5039]: I0130 13:58:07.617182 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-p6g5b" podStartSLOduration=2.221442524 podStartE2EDuration="4.617160311s" podCreationTimestamp="2026-01-30 13:58:03 +0000 UTC" firstStartedPulling="2026-01-30 13:58:04.558580825 +0000 UTC m=+3249.219262072" lastFinishedPulling="2026-01-30 13:58:06.954298632 +0000 UTC m=+3251.614979859" observedRunningTime="2026-01-30 13:58:07.613483722 +0000 UTC m=+3252.274164959" watchObservedRunningTime="2026-01-30 13:58:07.617160311 +0000 UTC m=+3252.277841528" Jan 30 13:58:08 crc kubenswrapper[5039]: I0130 13:58:08.093757 5039 scope.go:117] "RemoveContainer" containerID="87bbf19118f7061dac43073a1ad9a3bab48c45eba9c7608a532f004ca5be04c7" Jan 30 13:58:08 crc kubenswrapper[5039]: E0130 13:58:08.093996 5039 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5"
Jan 30 13:58:13 crc kubenswrapper[5039]: I0130 13:58:13.676767 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-p6g5b"
Jan 30 13:58:13 crc kubenswrapper[5039]: I0130 13:58:13.677053 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-p6g5b"
Jan 30 13:58:13 crc kubenswrapper[5039]: I0130 13:58:13.721836 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-p6g5b"
Jan 30 13:58:14 crc kubenswrapper[5039]: I0130 13:58:14.681839 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-p6g5b"
Jan 30 13:58:17 crc kubenswrapper[5039]: I0130 13:58:17.311256 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p6g5b"]
Jan 30 13:58:17 crc kubenswrapper[5039]: I0130 13:58:17.311518 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-p6g5b" podUID="24611f3d-a1dc-4f1d-8949-7cf74e30549b" containerName="registry-server" containerID="cri-o://1a25b6e8de3ad8b8cc597339631d50e25b53eb77983655ca0cd32a0179a25b1f" gracePeriod=2
Jan 30 13:58:17 crc kubenswrapper[5039]: I0130 13:58:17.661723 5039 generic.go:334] "Generic (PLEG): container finished" podID="24611f3d-a1dc-4f1d-8949-7cf74e30549b" containerID="1a25b6e8de3ad8b8cc597339631d50e25b53eb77983655ca0cd32a0179a25b1f" exitCode=0
Jan 30 13:58:17 crc kubenswrapper[5039]: I0130 13:58:17.661797 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p6g5b" event={"ID":"24611f3d-a1dc-4f1d-8949-7cf74e30549b","Type":"ContainerDied","Data":"1a25b6e8de3ad8b8cc597339631d50e25b53eb77983655ca0cd32a0179a25b1f"}
Jan 30 13:58:17 crc kubenswrapper[5039]: I0130 13:58:17.731813 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p6g5b" Jan 30 13:58:17 crc kubenswrapper[5039]: I0130 13:58:17.836719 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dclhz\" (UniqueName: \"kubernetes.io/projected/24611f3d-a1dc-4f1d-8949-7cf74e30549b-kube-api-access-dclhz\") pod \"24611f3d-a1dc-4f1d-8949-7cf74e30549b\" (UID: \"24611f3d-a1dc-4f1d-8949-7cf74e30549b\") " Jan 30 13:58:17 crc kubenswrapper[5039]: I0130 13:58:17.836757 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24611f3d-a1dc-4f1d-8949-7cf74e30549b-catalog-content\") pod \"24611f3d-a1dc-4f1d-8949-7cf74e30549b\" (UID: \"24611f3d-a1dc-4f1d-8949-7cf74e30549b\") " Jan 30 13:58:17 crc kubenswrapper[5039]: I0130 13:58:17.836791 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24611f3d-a1dc-4f1d-8949-7cf74e30549b-utilities\") pod \"24611f3d-a1dc-4f1d-8949-7cf74e30549b\" (UID: \"24611f3d-a1dc-4f1d-8949-7cf74e30549b\") " Jan 30 13:58:17 crc kubenswrapper[5039]: I0130 13:58:17.837943 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24611f3d-a1dc-4f1d-8949-7cf74e30549b-utilities" (OuterVolumeSpecName: "utilities") pod "24611f3d-a1dc-4f1d-8949-7cf74e30549b" (UID: "24611f3d-a1dc-4f1d-8949-7cf74e30549b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:58:17 crc kubenswrapper[5039]: I0130 13:58:17.843780 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24611f3d-a1dc-4f1d-8949-7cf74e30549b-kube-api-access-dclhz" (OuterVolumeSpecName: "kube-api-access-dclhz") pod "24611f3d-a1dc-4f1d-8949-7cf74e30549b" (UID: "24611f3d-a1dc-4f1d-8949-7cf74e30549b"). InnerVolumeSpecName "kube-api-access-dclhz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:58:17 crc kubenswrapper[5039]: I0130 13:58:17.902883 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24611f3d-a1dc-4f1d-8949-7cf74e30549b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "24611f3d-a1dc-4f1d-8949-7cf74e30549b" (UID: "24611f3d-a1dc-4f1d-8949-7cf74e30549b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:58:17 crc kubenswrapper[5039]: I0130 13:58:17.938490 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dclhz\" (UniqueName: \"kubernetes.io/projected/24611f3d-a1dc-4f1d-8949-7cf74e30549b-kube-api-access-dclhz\") on node \"crc\" DevicePath \"\"" Jan 30 13:58:17 crc kubenswrapper[5039]: I0130 13:58:17.938523 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24611f3d-a1dc-4f1d-8949-7cf74e30549b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:58:17 crc kubenswrapper[5039]: I0130 13:58:17.938535 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24611f3d-a1dc-4f1d-8949-7cf74e30549b-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 13:58:18 crc kubenswrapper[5039]: I0130 13:58:18.673463 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p6g5b" event={"ID":"24611f3d-a1dc-4f1d-8949-7cf74e30549b","Type":"ContainerDied","Data":"2fbd9771d813d2233a001d0c399bd56efdc45b509858cf026ffd0328faca10c9"} Jan 30 13:58:18 crc kubenswrapper[5039]: I0130 13:58:18.673540 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p6g5b" Jan 30 13:58:18 crc kubenswrapper[5039]: I0130 13:58:18.673855 5039 scope.go:117] "RemoveContainer" containerID="1a25b6e8de3ad8b8cc597339631d50e25b53eb77983655ca0cd32a0179a25b1f" Jan 30 13:58:18 crc kubenswrapper[5039]: I0130 13:58:18.701341 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p6g5b"] Jan 30 13:58:18 crc kubenswrapper[5039]: I0130 13:58:18.701608 5039 scope.go:117] "RemoveContainer" containerID="1cfd15cab6653371509214fb972411382f2db68c4cb5cac1afa4475d9bbe96f4" Jan 30 13:58:18 crc kubenswrapper[5039]: I0130 13:58:18.712097 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-p6g5b"] Jan 30 13:58:18 crc kubenswrapper[5039]: I0130 13:58:18.724953 5039 scope.go:117] "RemoveContainer" containerID="812077923cb7878f33f74b0bab2a2c9a0b1fcf4b62b56783372e8a10bd5cfd9a" Jan 30 13:58:20 crc kubenswrapper[5039]: I0130 13:58:20.107200 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24611f3d-a1dc-4f1d-8949-7cf74e30549b" path="/var/lib/kubelet/pods/24611f3d-a1dc-4f1d-8949-7cf74e30549b/volumes" Jan 30 13:58:23 crc kubenswrapper[5039]: I0130 13:58:23.095160 5039 scope.go:117] "RemoveContainer" containerID="87bbf19118f7061dac43073a1ad9a3bab48c45eba9c7608a532f004ca5be04c7" Jan 30 13:58:23 crc kubenswrapper[5039]: E0130 13:58:23.096115 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:58:36 crc kubenswrapper[5039]: I0130 13:58:36.098123 5039 scope.go:117] "RemoveContainer" containerID="87bbf19118f7061dac43073a1ad9a3bab48c45eba9c7608a532f004ca5be04c7" Jan 30 13:58:36 crc kubenswrapper[5039]: E0130 13:58:36.098652 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:58:50 crc kubenswrapper[5039]: I0130 13:58:50.094231 5039 scope.go:117] "RemoveContainer" containerID="87bbf19118f7061dac43073a1ad9a3bab48c45eba9c7608a532f004ca5be04c7" Jan 30 13:58:50 crc kubenswrapper[5039]: E0130 13:58:50.095074 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:59:02 crc kubenswrapper[5039]: I0130 13:59:02.093781 5039 scope.go:117] "RemoveContainer" containerID="87bbf19118f7061dac43073a1ad9a3bab48c45eba9c7608a532f004ca5be04c7" Jan 30 13:59:02 crc kubenswrapper[5039]: E0130 13:59:02.094765 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:59:14 crc kubenswrapper[5039]: I0130 13:59:14.094217 5039 scope.go:117] "RemoveContainer" containerID="87bbf19118f7061dac43073a1ad9a3bab48c45eba9c7608a532f004ca5be04c7" Jan 30 13:59:14 crc kubenswrapper[5039]: E0130 13:59:14.096195 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:59:27 crc kubenswrapper[5039]: I0130 13:59:27.093705 5039 scope.go:117] "RemoveContainer" containerID="87bbf19118f7061dac43073a1ad9a3bab48c45eba9c7608a532f004ca5be04c7" Jan 30 13:59:27 crc kubenswrapper[5039]: E0130 13:59:27.096232 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:59:39 crc kubenswrapper[5039]: I0130 13:59:39.093213 5039 scope.go:117] "RemoveContainer" containerID="87bbf19118f7061dac43073a1ad9a3bab48c45eba9c7608a532f004ca5be04c7" Jan 30 13:59:39 crc kubenswrapper[5039]: E0130 13:59:39.094062 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:59:53 crc kubenswrapper[5039]: I0130 13:59:53.094277 5039 scope.go:117] "RemoveContainer" containerID="87bbf19118f7061dac43073a1ad9a3bab48c45eba9c7608a532f004ca5be04c7" Jan 30 13:59:53 crc kubenswrapper[5039]: E0130 13:59:53.094886 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:00:00 crc kubenswrapper[5039]: I0130 14:00:00.169426 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496360-jxlw8"] Jan 30 14:00:00 crc kubenswrapper[5039]: E0130 14:00:00.170905 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24611f3d-a1dc-4f1d-8949-7cf74e30549b" containerName="extract-utilities" Jan 30 14:00:00 crc kubenswrapper[5039]: I0130 14:00:00.170938 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="24611f3d-a1dc-4f1d-8949-7cf74e30549b" containerName="extract-utilities" Jan 30 14:00:00 crc kubenswrapper[5039]: E0130 14:00:00.170974 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24611f3d-a1dc-4f1d-8949-7cf74e30549b" containerName="extract-content" Jan 30 14:00:00 crc kubenswrapper[5039]: I0130 14:00:00.170989 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="24611f3d-a1dc-4f1d-8949-7cf74e30549b" containerName="extract-content" Jan 30 14:00:00 crc kubenswrapper[5039]: E0130 14:00:00.171108 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24611f3d-a1dc-4f1d-8949-7cf74e30549b" containerName="registry-server" Jan 30 14:00:00 crc kubenswrapper[5039]: I0130 14:00:00.171122 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="24611f3d-a1dc-4f1d-8949-7cf74e30549b" containerName="registry-server" Jan 30 14:00:00 crc kubenswrapper[5039]: I0130 14:00:00.171386 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="24611f3d-a1dc-4f1d-8949-7cf74e30549b" containerName="registry-server" Jan 30 14:00:00 crc kubenswrapper[5039]: I0130 14:00:00.172171 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496360-jxlw8" Jan 30 14:00:00 crc kubenswrapper[5039]: I0130 14:00:00.174655 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 14:00:00 crc kubenswrapper[5039]: I0130 14:00:00.177422 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 14:00:00 crc kubenswrapper[5039]: I0130 14:00:00.181542 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496360-jxlw8"] Jan 30 14:00:00 crc kubenswrapper[5039]: I0130 14:00:00.321719 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3b2639f2-7fe0-4d37-9604-9c0260ea09d5-secret-volume\") pod \"collect-profiles-29496360-jxlw8\" (UID: \"3b2639f2-7fe0-4d37-9604-9c0260ea09d5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496360-jxlw8" Jan 30 14:00:00 crc kubenswrapper[5039]: I0130 14:00:00.321800 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b2639f2-7fe0-4d37-9604-9c0260ea09d5-config-volume\") pod \"collect-profiles-29496360-jxlw8\" (UID: \"3b2639f2-7fe0-4d37-9604-9c0260ea09d5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496360-jxlw8" Jan 30 14:00:00 crc kubenswrapper[5039]: I0130 14:00:00.322130 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72lwl\" (UniqueName: \"kubernetes.io/projected/3b2639f2-7fe0-4d37-9604-9c0260ea09d5-kube-api-access-72lwl\") pod \"collect-profiles-29496360-jxlw8\" (UID: \"3b2639f2-7fe0-4d37-9604-9c0260ea09d5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496360-jxlw8" Jan 30 14:00:00 crc kubenswrapper[5039]: I0130 14:00:00.423797 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72lwl\" (UniqueName: \"kubernetes.io/projected/3b2639f2-7fe0-4d37-9604-9c0260ea09d5-kube-api-access-72lwl\") pod \"collect-profiles-29496360-jxlw8\" (UID: \"3b2639f2-7fe0-4d37-9604-9c0260ea09d5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496360-jxlw8" Jan 30 14:00:00 crc kubenswrapper[5039]: I0130 14:00:00.424509 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3b2639f2-7fe0-4d37-9604-9c0260ea09d5-secret-volume\") pod \"collect-profiles-29496360-jxlw8\" (UID: \"3b2639f2-7fe0-4d37-9604-9c0260ea09d5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496360-jxlw8" Jan 30 14:00:00 crc kubenswrapper[5039]: I0130 14:00:00.424844 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b2639f2-7fe0-4d37-9604-9c0260ea09d5-config-volume\") pod \"collect-profiles-29496360-jxlw8\" (UID: \"3b2639f2-7fe0-4d37-9604-9c0260ea09d5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496360-jxlw8" Jan 30 14:00:00 crc kubenswrapper[5039]: I0130 14:00:00.426755 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b2639f2-7fe0-4d37-9604-9c0260ea09d5-config-volume\") pod 
\"collect-profiles-29496360-jxlw8\" (UID: \"3b2639f2-7fe0-4d37-9604-9c0260ea09d5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496360-jxlw8" Jan 30 14:00:00 crc kubenswrapper[5039]: I0130 14:00:00.431859 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3b2639f2-7fe0-4d37-9604-9c0260ea09d5-secret-volume\") pod \"collect-profiles-29496360-jxlw8\" (UID: \"3b2639f2-7fe0-4d37-9604-9c0260ea09d5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496360-jxlw8" Jan 30 14:00:00 crc kubenswrapper[5039]: I0130 14:00:00.443073 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72lwl\" (UniqueName: \"kubernetes.io/projected/3b2639f2-7fe0-4d37-9604-9c0260ea09d5-kube-api-access-72lwl\") pod \"collect-profiles-29496360-jxlw8\" (UID: \"3b2639f2-7fe0-4d37-9604-9c0260ea09d5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496360-jxlw8" Jan 30 14:00:00 crc kubenswrapper[5039]: I0130 14:00:00.501643 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496360-jxlw8" Jan 30 14:00:00 crc kubenswrapper[5039]: I0130 14:00:00.927262 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496360-jxlw8"] Jan 30 14:00:01 crc kubenswrapper[5039]: I0130 14:00:01.408792 5039 generic.go:334] "Generic (PLEG): container finished" podID="3b2639f2-7fe0-4d37-9604-9c0260ea09d5" containerID="d1a497c3b511f76b25c88413e6d36d8eb9fbe8073ea778c8eb39f21b2d9bf8a4" exitCode=0 Jan 30 14:00:01 crc kubenswrapper[5039]: I0130 14:00:01.408897 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496360-jxlw8" event={"ID":"3b2639f2-7fe0-4d37-9604-9c0260ea09d5","Type":"ContainerDied","Data":"d1a497c3b511f76b25c88413e6d36d8eb9fbe8073ea778c8eb39f21b2d9bf8a4"} Jan 30 14:00:01 crc kubenswrapper[5039]: I0130 14:00:01.409117 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496360-jxlw8" event={"ID":"3b2639f2-7fe0-4d37-9604-9c0260ea09d5","Type":"ContainerStarted","Data":"9d9dcc827d40cf52f428d9ef246a66e09a765eb64cb4fb6fcf6f526368cfb0a6"} Jan 30 14:00:02 crc kubenswrapper[5039]: I0130 14:00:02.700188 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496360-jxlw8" Jan 30 14:00:02 crc kubenswrapper[5039]: I0130 14:00:02.755281 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3b2639f2-7fe0-4d37-9604-9c0260ea09d5-secret-volume\") pod \"3b2639f2-7fe0-4d37-9604-9c0260ea09d5\" (UID: \"3b2639f2-7fe0-4d37-9604-9c0260ea09d5\") " Jan 30 14:00:02 crc kubenswrapper[5039]: I0130 14:00:02.755363 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72lwl\" (UniqueName: \"kubernetes.io/projected/3b2639f2-7fe0-4d37-9604-9c0260ea09d5-kube-api-access-72lwl\") pod \"3b2639f2-7fe0-4d37-9604-9c0260ea09d5\" (UID: \"3b2639f2-7fe0-4d37-9604-9c0260ea09d5\") " Jan 30 14:00:02 crc kubenswrapper[5039]: I0130 14:00:02.755437 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b2639f2-7fe0-4d37-9604-9c0260ea09d5-config-volume\") pod \"3b2639f2-7fe0-4d37-9604-9c0260ea09d5\" (UID: \"3b2639f2-7fe0-4d37-9604-9c0260ea09d5\") " Jan 30 14:00:02 crc kubenswrapper[5039]: I0130 14:00:02.756161 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b2639f2-7fe0-4d37-9604-9c0260ea09d5-config-volume" (OuterVolumeSpecName: "config-volume") pod "3b2639f2-7fe0-4d37-9604-9c0260ea09d5" (UID: "3b2639f2-7fe0-4d37-9604-9c0260ea09d5"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:00:02 crc kubenswrapper[5039]: I0130 14:00:02.760223 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b2639f2-7fe0-4d37-9604-9c0260ea09d5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3b2639f2-7fe0-4d37-9604-9c0260ea09d5" (UID: "3b2639f2-7fe0-4d37-9604-9c0260ea09d5"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:00:02 crc kubenswrapper[5039]: I0130 14:00:02.760814 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b2639f2-7fe0-4d37-9604-9c0260ea09d5-kube-api-access-72lwl" (OuterVolumeSpecName: "kube-api-access-72lwl") pod "3b2639f2-7fe0-4d37-9604-9c0260ea09d5" (UID: "3b2639f2-7fe0-4d37-9604-9c0260ea09d5"). InnerVolumeSpecName "kube-api-access-72lwl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:00:02 crc kubenswrapper[5039]: I0130 14:00:02.858555 5039 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3b2639f2-7fe0-4d37-9604-9c0260ea09d5-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 14:00:02 crc kubenswrapper[5039]: I0130 14:00:02.858603 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72lwl\" (UniqueName: \"kubernetes.io/projected/3b2639f2-7fe0-4d37-9604-9c0260ea09d5-kube-api-access-72lwl\") on node \"crc\" DevicePath \"\"" Jan 30 14:00:02 crc kubenswrapper[5039]: I0130 14:00:02.858616 5039 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b2639f2-7fe0-4d37-9604-9c0260ea09d5-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 14:00:03 crc kubenswrapper[5039]: I0130 14:00:03.423287 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496360-jxlw8" event={"ID":"3b2639f2-7fe0-4d37-9604-9c0260ea09d5","Type":"ContainerDied","Data":"9d9dcc827d40cf52f428d9ef246a66e09a765eb64cb4fb6fcf6f526368cfb0a6"} Jan 30 14:00:03 crc kubenswrapper[5039]: I0130 14:00:03.423347 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d9dcc827d40cf52f428d9ef246a66e09a765eb64cb4fb6fcf6f526368cfb0a6" Jan 30 14:00:03 crc kubenswrapper[5039]: I0130 14:00:03.423348 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496360-jxlw8" Jan 30 14:00:03 crc kubenswrapper[5039]: I0130 14:00:03.789073 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496315-dxgkx"] Jan 30 14:00:03 crc kubenswrapper[5039]: I0130 14:00:03.795150 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496315-dxgkx"] Jan 30 14:00:04 crc kubenswrapper[5039]: I0130 14:00:04.102083 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f9e6068-8847-4733-a7c3-5c448d66b617" path="/var/lib/kubelet/pods/3f9e6068-8847-4733-a7c3-5c448d66b617/volumes" Jan 30 14:00:07 crc kubenswrapper[5039]: I0130 14:00:07.093746 5039 scope.go:117] "RemoveContainer" containerID="87bbf19118f7061dac43073a1ad9a3bab48c45eba9c7608a532f004ca5be04c7" Jan 30 14:00:07 crc kubenswrapper[5039]: E0130 14:00:07.094340 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:00:08 crc kubenswrapper[5039]: I0130 14:00:08.383358 5039 scope.go:117] "RemoveContainer" containerID="10d1ac2c646075e76b4174576c1433c77115b49e44dfe3193ecacbb1149b525d" Jan 30 14:00:22 crc kubenswrapper[5039]: I0130 14:00:22.094162 5039 scope.go:117] "RemoveContainer" containerID="87bbf19118f7061dac43073a1ad9a3bab48c45eba9c7608a532f004ca5be04c7" Jan 30 14:00:22 crc kubenswrapper[5039]: E0130 14:00:22.094894 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:00:22 crc kubenswrapper[5039]: I0130 14:00:22.356428 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-74thf"] Jan 30 14:00:22 crc kubenswrapper[5039]: E0130 14:00:22.356715 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b2639f2-7fe0-4d37-9604-9c0260ea09d5" containerName="collect-profiles" Jan 30 14:00:22 crc kubenswrapper[5039]: I0130 14:00:22.356728 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b2639f2-7fe0-4d37-9604-9c0260ea09d5" containerName="collect-profiles" Jan 30 14:00:22 crc kubenswrapper[5039]: I0130 14:00:22.356890 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b2639f2-7fe0-4d37-9604-9c0260ea09d5" containerName="collect-profiles" Jan 30 14:00:22 crc kubenswrapper[5039]: I0130 14:00:22.357885 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-74thf" Jan 30 14:00:22 crc kubenswrapper[5039]: I0130 14:00:22.379720 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-74thf"] Jan 30 14:00:22 crc kubenswrapper[5039]: I0130 14:00:22.428200 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68717116-ffb9-4c4c-821c-65a448014b68-catalog-content\") pod \"redhat-operators-74thf\" (UID: \"68717116-ffb9-4c4c-821c-65a448014b68\") " pod="openshift-marketplace/redhat-operators-74thf" Jan 30 14:00:22 crc kubenswrapper[5039]: I0130 14:00:22.428283 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wznqv\" (UniqueName: \"kubernetes.io/projected/68717116-ffb9-4c4c-821c-65a448014b68-kube-api-access-wznqv\") pod \"redhat-operators-74thf\" (UID: \"68717116-ffb9-4c4c-821c-65a448014b68\") " pod="openshift-marketplace/redhat-operators-74thf" Jan 30 14:00:22 crc kubenswrapper[5039]: I0130 14:00:22.428319 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68717116-ffb9-4c4c-821c-65a448014b68-utilities\") pod \"redhat-operators-74thf\" (UID: \"68717116-ffb9-4c4c-821c-65a448014b68\") " pod="openshift-marketplace/redhat-operators-74thf" Jan 30 14:00:22 crc kubenswrapper[5039]: I0130 14:00:22.529806 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68717116-ffb9-4c4c-821c-65a448014b68-catalog-content\") pod \"redhat-operators-74thf\" (UID: \"68717116-ffb9-4c4c-821c-65a448014b68\") " pod="openshift-marketplace/redhat-operators-74thf" Jan 30 14:00:22 crc kubenswrapper[5039]: I0130 14:00:22.529897 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wznqv\" (UniqueName: \"kubernetes.io/projected/68717116-ffb9-4c4c-821c-65a448014b68-kube-api-access-wznqv\") pod \"redhat-operators-74thf\" (UID: \"68717116-ffb9-4c4c-821c-65a448014b68\") " pod="openshift-marketplace/redhat-operators-74thf" Jan 30 14:00:22 crc kubenswrapper[5039]: I0130 14:00:22.529918 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68717116-ffb9-4c4c-821c-65a448014b68-utilities\") pod \"redhat-operators-74thf\" (UID: \"68717116-ffb9-4c4c-821c-65a448014b68\") " pod="openshift-marketplace/redhat-operators-74thf" Jan 30 14:00:22 crc kubenswrapper[5039]: I0130 14:00:22.530440 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68717116-ffb9-4c4c-821c-65a448014b68-utilities\") pod \"redhat-operators-74thf\" (UID: \"68717116-ffb9-4c4c-821c-65a448014b68\") " pod="openshift-marketplace/redhat-operators-74thf" Jan 30 14:00:22 crc kubenswrapper[5039]: I0130 14:00:22.530544 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68717116-ffb9-4c4c-821c-65a448014b68-catalog-content\") pod \"redhat-operators-74thf\" (UID: \"68717116-ffb9-4c4c-821c-65a448014b68\") " pod="openshift-marketplace/redhat-operators-74thf" Jan 30 14:00:22 crc kubenswrapper[5039]: I0130 14:00:22.554431 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wznqv\" (UniqueName: \"kubernetes.io/projected/68717116-ffb9-4c4c-821c-65a448014b68-kube-api-access-wznqv\") pod \"redhat-operators-74thf\" (UID: \"68717116-ffb9-4c4c-821c-65a448014b68\") " pod="openshift-marketplace/redhat-operators-74thf" Jan 30 14:00:22 crc kubenswrapper[5039]: I0130 14:00:22.675729 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-74thf" Jan 30 14:00:23 crc kubenswrapper[5039]: I0130 14:00:23.108913 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-74thf"] Jan 30 14:00:23 crc kubenswrapper[5039]: I0130 14:00:23.558792 5039 generic.go:334] "Generic (PLEG): container finished" podID="68717116-ffb9-4c4c-821c-65a448014b68" containerID="fe1fe9802a14103f254c3e099616f1b85bf7437745909738997b71f19abc21a2" exitCode=0 Jan 30 14:00:23 crc kubenswrapper[5039]: I0130 14:00:23.558909 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-74thf" event={"ID":"68717116-ffb9-4c4c-821c-65a448014b68","Type":"ContainerDied","Data":"fe1fe9802a14103f254c3e099616f1b85bf7437745909738997b71f19abc21a2"} Jan 30 14:00:23 crc kubenswrapper[5039]: I0130 14:00:23.559146 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-74thf" event={"ID":"68717116-ffb9-4c4c-821c-65a448014b68","Type":"ContainerStarted","Data":"e9cc77458319aecf2f5802b7a7780752acf60d790f349a1e838a494751269b45"} Jan 30 14:00:24 crc kubenswrapper[5039]: I0130 14:00:24.611807 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-74thf" event={"ID":"68717116-ffb9-4c4c-821c-65a448014b68","Type":"ContainerStarted","Data":"96d3f668b73e2b581f80525a2bf224a10f9a0bbbde2e035190053bb598f92041"} Jan 30 14:00:25 crc kubenswrapper[5039]: I0130 14:00:25.623618 5039 generic.go:334] "Generic (PLEG): container finished" podID="68717116-ffb9-4c4c-821c-65a448014b68" containerID="96d3f668b73e2b581f80525a2bf224a10f9a0bbbde2e035190053bb598f92041" exitCode=0 Jan 30 14:00:25 crc kubenswrapper[5039]: I0130 14:00:25.623665 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-74thf" event={"ID":"68717116-ffb9-4c4c-821c-65a448014b68","Type":"ContainerDied","Data":"96d3f668b73e2b581f80525a2bf224a10f9a0bbbde2e035190053bb598f92041"} Jan 30 14:00:26 crc 
kubenswrapper[5039]: I0130 14:00:26.634151 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-74thf" event={"ID":"68717116-ffb9-4c4c-821c-65a448014b68","Type":"ContainerStarted","Data":"2324b1c4ee38692cb9416b558f944cd79b82790d803fd069f0b842e78b9f07ac"}
Jan 30 14:00:26 crc kubenswrapper[5039]: I0130 14:00:26.663445 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-74thf" podStartSLOduration=2.196509186 podStartE2EDuration="4.66342157s" podCreationTimestamp="2026-01-30 14:00:22 +0000 UTC" firstStartedPulling="2026-01-30 14:00:23.560336828 +0000 UTC m=+3388.221018055" lastFinishedPulling="2026-01-30 14:00:26.027249212 +0000 UTC m=+3390.687930439" observedRunningTime="2026-01-30 14:00:26.658132608 +0000 UTC m=+3391.318813875" watchObservedRunningTime="2026-01-30 14:00:26.66342157 +0000 UTC m=+3391.324102817"
Jan 30 14:00:32 crc kubenswrapper[5039]: I0130 14:00:32.676201 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-74thf"
Jan 30 14:00:32 crc kubenswrapper[5039]: I0130 14:00:32.679184 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-74thf"
Jan 30 14:00:32 crc kubenswrapper[5039]: I0130 14:00:32.723631 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-74thf"
Jan 30 14:00:33 crc kubenswrapper[5039]: I0130 14:00:33.093323 5039 scope.go:117] "RemoveContainer" containerID="87bbf19118f7061dac43073a1ad9a3bab48c45eba9c7608a532f004ca5be04c7"
Jan 30 14:00:33 crc kubenswrapper[5039]: E0130 14:00:33.093547 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5"
Jan 30 14:00:33 crc kubenswrapper[5039]: I0130 14:00:33.719660 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-74thf"
Jan 30 14:00:33 crc kubenswrapper[5039]: I0130 14:00:33.777705 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-74thf"]
Jan 30 14:00:35 crc kubenswrapper[5039]: I0130 14:00:35.689342 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-74thf" podUID="68717116-ffb9-4c4c-821c-65a448014b68" containerName="registry-server" containerID="cri-o://2324b1c4ee38692cb9416b558f944cd79b82790d803fd069f0b842e78b9f07ac" gracePeriod=2
Jan 30 14:00:36 crc kubenswrapper[5039]: I0130 14:00:36.259870 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-74thf" Jan 30 14:00:36 crc kubenswrapper[5039]: I0130 14:00:36.370924 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68717116-ffb9-4c4c-821c-65a448014b68-utilities\") pod \"68717116-ffb9-4c4c-821c-65a448014b68\" (UID: \"68717116-ffb9-4c4c-821c-65a448014b68\") " Jan 30 14:00:36 crc kubenswrapper[5039]: I0130 14:00:36.371056 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68717116-ffb9-4c4c-821c-65a448014b68-catalog-content\") pod \"68717116-ffb9-4c4c-821c-65a448014b68\" (UID: \"68717116-ffb9-4c4c-821c-65a448014b68\") " Jan 30 14:00:36 crc kubenswrapper[5039]: I0130 14:00:36.371080 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wznqv\" (UniqueName: \"kubernetes.io/projected/68717116-ffb9-4c4c-821c-65a448014b68-kube-api-access-wznqv\") pod \"68717116-ffb9-4c4c-821c-65a448014b68\" (UID: \"68717116-ffb9-4c4c-821c-65a448014b68\") " Jan 30 14:00:36 crc kubenswrapper[5039]: I0130 14:00:36.372172 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68717116-ffb9-4c4c-821c-65a448014b68-utilities" (OuterVolumeSpecName: "utilities") pod "68717116-ffb9-4c4c-821c-65a448014b68" (UID: "68717116-ffb9-4c4c-821c-65a448014b68"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:00:36 crc kubenswrapper[5039]: I0130 14:00:36.379314 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68717116-ffb9-4c4c-821c-65a448014b68-kube-api-access-wznqv" (OuterVolumeSpecName: "kube-api-access-wznqv") pod "68717116-ffb9-4c4c-821c-65a448014b68" (UID: "68717116-ffb9-4c4c-821c-65a448014b68"). InnerVolumeSpecName "kube-api-access-wznqv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:00:36 crc kubenswrapper[5039]: I0130 14:00:36.472193 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wznqv\" (UniqueName: \"kubernetes.io/projected/68717116-ffb9-4c4c-821c-65a448014b68-kube-api-access-wznqv\") on node \"crc\" DevicePath \"\"" Jan 30 14:00:36 crc kubenswrapper[5039]: I0130 14:00:36.472382 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68717116-ffb9-4c4c-821c-65a448014b68-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 14:00:36 crc kubenswrapper[5039]: I0130 14:00:36.503241 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68717116-ffb9-4c4c-821c-65a448014b68-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "68717116-ffb9-4c4c-821c-65a448014b68" (UID: "68717116-ffb9-4c4c-821c-65a448014b68"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:00:36 crc kubenswrapper[5039]: I0130 14:00:36.573213 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68717116-ffb9-4c4c-821c-65a448014b68-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 14:00:36 crc kubenswrapper[5039]: I0130 14:00:36.696290 5039 generic.go:334] "Generic (PLEG): container finished" podID="68717116-ffb9-4c4c-821c-65a448014b68" containerID="2324b1c4ee38692cb9416b558f944cd79b82790d803fd069f0b842e78b9f07ac" exitCode=0 Jan 30 14:00:36 crc kubenswrapper[5039]: I0130 14:00:36.696334 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-74thf" event={"ID":"68717116-ffb9-4c4c-821c-65a448014b68","Type":"ContainerDied","Data":"2324b1c4ee38692cb9416b558f944cd79b82790d803fd069f0b842e78b9f07ac"} Jan 30 14:00:36 crc kubenswrapper[5039]: I0130 14:00:36.696358 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-74thf" event={"ID":"68717116-ffb9-4c4c-821c-65a448014b68","Type":"ContainerDied","Data":"e9cc77458319aecf2f5802b7a7780752acf60d790f349a1e838a494751269b45"} Jan 30 14:00:36 crc kubenswrapper[5039]: I0130 14:00:36.696374 5039 scope.go:117] "RemoveContainer" containerID="2324b1c4ee38692cb9416b558f944cd79b82790d803fd069f0b842e78b9f07ac" Jan 30 14:00:36 crc kubenswrapper[5039]: I0130 14:00:36.696398 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-74thf" Jan 30 14:00:36 crc kubenswrapper[5039]: I0130 14:00:36.716179 5039 scope.go:117] "RemoveContainer" containerID="96d3f668b73e2b581f80525a2bf224a10f9a0bbbde2e035190053bb598f92041" Jan 30 14:00:36 crc kubenswrapper[5039]: I0130 14:00:36.756081 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-74thf"] Jan 30 14:00:36 crc kubenswrapper[5039]: I0130 14:00:36.769607 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-74thf"] Jan 30 14:00:36 crc kubenswrapper[5039]: I0130 14:00:36.784868 5039 scope.go:117] "RemoveContainer" containerID="fe1fe9802a14103f254c3e099616f1b85bf7437745909738997b71f19abc21a2" Jan 30 14:00:36 crc kubenswrapper[5039]: I0130 14:00:36.817150 5039 scope.go:117] "RemoveContainer" containerID="2324b1c4ee38692cb9416b558f944cd79b82790d803fd069f0b842e78b9f07ac" Jan 30 14:00:36 crc kubenswrapper[5039]: E0130 14:00:36.817590 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2324b1c4ee38692cb9416b558f944cd79b82790d803fd069f0b842e78b9f07ac\": container with ID starting with 2324b1c4ee38692cb9416b558f944cd79b82790d803fd069f0b842e78b9f07ac not found: ID does not exist" containerID="2324b1c4ee38692cb9416b558f944cd79b82790d803fd069f0b842e78b9f07ac" Jan 30 14:00:36 crc kubenswrapper[5039]: I0130 14:00:36.817699 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2324b1c4ee38692cb9416b558f944cd79b82790d803fd069f0b842e78b9f07ac"} err="failed to get container status \"2324b1c4ee38692cb9416b558f944cd79b82790d803fd069f0b842e78b9f07ac\": rpc error: code = NotFound desc = could not find container \"2324b1c4ee38692cb9416b558f944cd79b82790d803fd069f0b842e78b9f07ac\": container with ID starting with 2324b1c4ee38692cb9416b558f944cd79b82790d803fd069f0b842e78b9f07ac not found: ID does not exist" Jan 30 14:00:36 crc 
kubenswrapper[5039]: I0130 14:00:36.817789 5039 scope.go:117] "RemoveContainer" containerID="96d3f668b73e2b581f80525a2bf224a10f9a0bbbde2e035190053bb598f92041"
Jan 30 14:00:36 crc kubenswrapper[5039]: E0130 14:00:36.818187 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96d3f668b73e2b581f80525a2bf224a10f9a0bbbde2e035190053bb598f92041\": container with ID starting with 96d3f668b73e2b581f80525a2bf224a10f9a0bbbde2e035190053bb598f92041 not found: ID does not exist" containerID="96d3f668b73e2b581f80525a2bf224a10f9a0bbbde2e035190053bb598f92041"
Jan 30 14:00:36 crc kubenswrapper[5039]: I0130 14:00:36.818242 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96d3f668b73e2b581f80525a2bf224a10f9a0bbbde2e035190053bb598f92041"} err="failed to get container status \"96d3f668b73e2b581f80525a2bf224a10f9a0bbbde2e035190053bb598f92041\": rpc error: code = NotFound desc = could not find container \"96d3f668b73e2b581f80525a2bf224a10f9a0bbbde2e035190053bb598f92041\": container with ID starting with 96d3f668b73e2b581f80525a2bf224a10f9a0bbbde2e035190053bb598f92041 not found: ID does not exist"
Jan 30 14:00:36 crc kubenswrapper[5039]: I0130 14:00:36.818270 5039 scope.go:117] "RemoveContainer" containerID="fe1fe9802a14103f254c3e099616f1b85bf7437745909738997b71f19abc21a2"
Jan 30 14:00:36 crc kubenswrapper[5039]: E0130 14:00:36.818502 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe1fe9802a14103f254c3e099616f1b85bf7437745909738997b71f19abc21a2\": container with ID starting with fe1fe9802a14103f254c3e099616f1b85bf7437745909738997b71f19abc21a2 not found: ID does not exist" containerID="fe1fe9802a14103f254c3e099616f1b85bf7437745909738997b71f19abc21a2"
Jan 30 14:00:36 crc kubenswrapper[5039]: I0130 14:00:36.818579 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe1fe9802a14103f254c3e099616f1b85bf7437745909738997b71f19abc21a2"} err="failed to get container status \"fe1fe9802a14103f254c3e099616f1b85bf7437745909738997b71f19abc21a2\": rpc error: code = NotFound desc = could not find container \"fe1fe9802a14103f254c3e099616f1b85bf7437745909738997b71f19abc21a2\": container with ID starting with fe1fe9802a14103f254c3e099616f1b85bf7437745909738997b71f19abc21a2 not found: ID does not exist"
Jan 30 14:00:38 crc kubenswrapper[5039]: I0130 14:00:38.103437 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68717116-ffb9-4c4c-821c-65a448014b68" path="/var/lib/kubelet/pods/68717116-ffb9-4c4c-821c-65a448014b68/volumes"
Jan 30 14:00:46 crc kubenswrapper[5039]: I0130 14:00:46.098172 5039 scope.go:117] "RemoveContainer" containerID="87bbf19118f7061dac43073a1ad9a3bab48c45eba9c7608a532f004ca5be04c7"
Jan 30 14:00:46 crc kubenswrapper[5039]: I0130 14:00:46.776847 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerStarted","Data":"7486cf8361eb3584237f53149880217a2f2d0e230223082806ffe1160cd89a39"}
Jan 30 14:02:54 crc kubenswrapper[5039]: I0130 14:02:54.527712 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nxj92"]
Jan 30 14:02:54 crc kubenswrapper[5039]: E0130 14:02:54.528790 5039 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="68717116-ffb9-4c4c-821c-65a448014b68" containerName="extract-content" Jan 30 14:02:54 crc kubenswrapper[5039]: I0130 14:02:54.528810 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="68717116-ffb9-4c4c-821c-65a448014b68" containerName="extract-content" Jan 30 14:02:54 crc kubenswrapper[5039]: E0130 14:02:54.528826 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68717116-ffb9-4c4c-821c-65a448014b68" containerName="extract-utilities" Jan 30 14:02:54 crc kubenswrapper[5039]: I0130 14:02:54.528833 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="68717116-ffb9-4c4c-821c-65a448014b68" containerName="extract-utilities" Jan 30 14:02:54 crc kubenswrapper[5039]: E0130 14:02:54.528846 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68717116-ffb9-4c4c-821c-65a448014b68" containerName="registry-server" Jan 30 14:02:54 crc kubenswrapper[5039]: I0130 14:02:54.528853 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="68717116-ffb9-4c4c-821c-65a448014b68" containerName="registry-server" Jan 30 14:02:54 crc kubenswrapper[5039]: I0130 14:02:54.529039 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="68717116-ffb9-4c4c-821c-65a448014b68" containerName="registry-server" Jan 30 14:02:54 crc kubenswrapper[5039]: I0130 14:02:54.530458 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nxj92" Jan 30 14:02:54 crc kubenswrapper[5039]: I0130 14:02:54.543050 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nxj92"] Jan 30 14:02:54 crc kubenswrapper[5039]: I0130 14:02:54.627426 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb69e035-bb19-4881-8e0d-6799360fa05f-utilities\") pod \"certified-operators-nxj92\" (UID: \"eb69e035-bb19-4881-8e0d-6799360fa05f\") " pod="openshift-marketplace/certified-operators-nxj92" Jan 30 14:02:54 crc kubenswrapper[5039]: I0130 14:02:54.627663 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb69e035-bb19-4881-8e0d-6799360fa05f-catalog-content\") pod \"certified-operators-nxj92\" (UID: \"eb69e035-bb19-4881-8e0d-6799360fa05f\") " pod="openshift-marketplace/certified-operators-nxj92" Jan 30 14:02:54 crc kubenswrapper[5039]: I0130 14:02:54.627788 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrhsh\" (UniqueName: \"kubernetes.io/projected/eb69e035-bb19-4881-8e0d-6799360fa05f-kube-api-access-wrhsh\") pod \"certified-operators-nxj92\" (UID: \"eb69e035-bb19-4881-8e0d-6799360fa05f\") " pod="openshift-marketplace/certified-operators-nxj92" Jan 30 14:02:54 crc kubenswrapper[5039]: I0130 14:02:54.729625 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb69e035-bb19-4881-8e0d-6799360fa05f-utilities\") pod \"certified-operators-nxj92\" (UID: \"eb69e035-bb19-4881-8e0d-6799360fa05f\") " pod="openshift-marketplace/certified-operators-nxj92" Jan 30 14:02:54 crc kubenswrapper[5039]: I0130 14:02:54.729667 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb69e035-bb19-4881-8e0d-6799360fa05f-catalog-content\") pod 
\"certified-operators-nxj92\" (UID: \"eb69e035-bb19-4881-8e0d-6799360fa05f\") " pod="openshift-marketplace/certified-operators-nxj92" Jan 30 14:02:54 crc kubenswrapper[5039]: I0130 14:02:54.729723 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrhsh\" (UniqueName: \"kubernetes.io/projected/eb69e035-bb19-4881-8e0d-6799360fa05f-kube-api-access-wrhsh\") pod \"certified-operators-nxj92\" (UID: \"eb69e035-bb19-4881-8e0d-6799360fa05f\") " pod="openshift-marketplace/certified-operators-nxj92" Jan 30 14:02:54 crc kubenswrapper[5039]: I0130 14:02:54.730125 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb69e035-bb19-4881-8e0d-6799360fa05f-utilities\") pod \"certified-operators-nxj92\" (UID: \"eb69e035-bb19-4881-8e0d-6799360fa05f\") " pod="openshift-marketplace/certified-operators-nxj92" Jan 30 14:02:54 crc kubenswrapper[5039]: I0130 14:02:54.730228 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb69e035-bb19-4881-8e0d-6799360fa05f-catalog-content\") pod \"certified-operators-nxj92\" (UID: \"eb69e035-bb19-4881-8e0d-6799360fa05f\") " pod="openshift-marketplace/certified-operators-nxj92" Jan 30 14:02:54 crc kubenswrapper[5039]: I0130 14:02:54.756181 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrhsh\" (UniqueName: \"kubernetes.io/projected/eb69e035-bb19-4881-8e0d-6799360fa05f-kube-api-access-wrhsh\") pod \"certified-operators-nxj92\" (UID: \"eb69e035-bb19-4881-8e0d-6799360fa05f\") " pod="openshift-marketplace/certified-operators-nxj92" Jan 30 14:02:54 crc kubenswrapper[5039]: I0130 14:02:54.850616 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nxj92" Jan 30 14:02:55 crc kubenswrapper[5039]: I0130 14:02:55.357690 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nxj92"] Jan 30 14:02:55 crc kubenswrapper[5039]: I0130 14:02:55.768695 5039 generic.go:334] "Generic (PLEG): container finished" podID="eb69e035-bb19-4881-8e0d-6799360fa05f" containerID="faf219f616a975f41f543b177461438d8ee746b0eb32a05d6655827edf88f6aa" exitCode=0 Jan 30 14:02:55 crc kubenswrapper[5039]: I0130 14:02:55.768762 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nxj92" event={"ID":"eb69e035-bb19-4881-8e0d-6799360fa05f","Type":"ContainerDied","Data":"faf219f616a975f41f543b177461438d8ee746b0eb32a05d6655827edf88f6aa"} Jan 30 14:02:55 crc kubenswrapper[5039]: I0130 14:02:55.769119 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nxj92" event={"ID":"eb69e035-bb19-4881-8e0d-6799360fa05f","Type":"ContainerStarted","Data":"6bab690e0d780095415b7e73cb6cea165a71081b7e31a7102541db6103c40016"} Jan 30 14:02:56 crc kubenswrapper[5039]: I0130 14:02:56.778798 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nxj92" event={"ID":"eb69e035-bb19-4881-8e0d-6799360fa05f","Type":"ContainerStarted","Data":"5315916be5eb1862281d49903b69a4a1275dd7875f6dd4d02654c47266cbe77d"} Jan 30 14:02:57 crc kubenswrapper[5039]: I0130 14:02:57.792737 5039 generic.go:334] "Generic (PLEG): container finished" podID="eb69e035-bb19-4881-8e0d-6799360fa05f" containerID="5315916be5eb1862281d49903b69a4a1275dd7875f6dd4d02654c47266cbe77d" exitCode=0 Jan 30 14:02:57 crc kubenswrapper[5039]: I0130 14:02:57.793119 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nxj92" event={"ID":"eb69e035-bb19-4881-8e0d-6799360fa05f","Type":"ContainerDied","Data":"5315916be5eb1862281d49903b69a4a1275dd7875f6dd4d02654c47266cbe77d"} Jan 30 14:02:58 crc kubenswrapper[5039]: I0130 14:02:58.800456 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nxj92" event={"ID":"eb69e035-bb19-4881-8e0d-6799360fa05f","Type":"ContainerStarted","Data":"f84e6222ca57274a15ec14925234c17daffe3498fb58988740c7c36458dd75bb"} Jan 30 14:02:58 crc kubenswrapper[5039]: I0130 14:02:58.822675 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nxj92" podStartSLOduration=2.4116717 podStartE2EDuration="4.822658281s" podCreationTimestamp="2026-01-30 14:02:54 +0000 UTC" firstStartedPulling="2026-01-30 14:02:55.770171579 +0000 UTC m=+3540.430852806" lastFinishedPulling="2026-01-30 14:02:58.18115815 +0000 UTC m=+3542.841839387" observedRunningTime="2026-01-30 14:02:58.821280724 +0000 UTC m=+3543.481961961" watchObservedRunningTime="2026-01-30 14:02:58.822658281 +0000 UTC m=+3543.483339508" Jan 30 14:03:04 crc kubenswrapper[5039]: I0130 14:03:04.851832 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nxj92" Jan 30 14:03:04 crc kubenswrapper[5039]: I0130 14:03:04.852383 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nxj92" Jan 30 14:03:04 crc kubenswrapper[5039]: I0130 14:03:04.889525 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-nxj92" Jan 30 14:03:05 crc kubenswrapper[5039]: I0130 14:03:05.898942 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nxj92" Jan 30 14:03:05 crc kubenswrapper[5039]: I0130 14:03:05.982874 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nxj92"] Jan 30 14:03:07 crc kubenswrapper[5039]: I0130 14:03:07.742261 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:03:07 crc kubenswrapper[5039]: I0130 14:03:07.742328 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:03:07 crc kubenswrapper[5039]: I0130 14:03:07.864913 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nxj92" podUID="eb69e035-bb19-4881-8e0d-6799360fa05f" containerName="registry-server" containerID="cri-o://f84e6222ca57274a15ec14925234c17daffe3498fb58988740c7c36458dd75bb" gracePeriod=2 Jan 30 14:03:08 crc kubenswrapper[5039]: I0130 14:03:08.244484 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nxj92" Jan 30 14:03:08 crc kubenswrapper[5039]: I0130 14:03:08.326373 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb69e035-bb19-4881-8e0d-6799360fa05f-catalog-content\") pod \"eb69e035-bb19-4881-8e0d-6799360fa05f\" (UID: \"eb69e035-bb19-4881-8e0d-6799360fa05f\") " Jan 30 14:03:08 crc kubenswrapper[5039]: I0130 14:03:08.326451 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb69e035-bb19-4881-8e0d-6799360fa05f-utilities\") pod \"eb69e035-bb19-4881-8e0d-6799360fa05f\" (UID: \"eb69e035-bb19-4881-8e0d-6799360fa05f\") " Jan 30 14:03:08 crc kubenswrapper[5039]: I0130 14:03:08.326509 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrhsh\" (UniqueName: \"kubernetes.io/projected/eb69e035-bb19-4881-8e0d-6799360fa05f-kube-api-access-wrhsh\") pod \"eb69e035-bb19-4881-8e0d-6799360fa05f\" (UID: \"eb69e035-bb19-4881-8e0d-6799360fa05f\") " Jan 30 14:03:08 crc kubenswrapper[5039]: I0130 14:03:08.327603 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb69e035-bb19-4881-8e0d-6799360fa05f-utilities" (OuterVolumeSpecName: "utilities") pod "eb69e035-bb19-4881-8e0d-6799360fa05f" (UID: "eb69e035-bb19-4881-8e0d-6799360fa05f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:03:08 crc kubenswrapper[5039]: I0130 14:03:08.332971 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb69e035-bb19-4881-8e0d-6799360fa05f-kube-api-access-wrhsh" (OuterVolumeSpecName: "kube-api-access-wrhsh") pod "eb69e035-bb19-4881-8e0d-6799360fa05f" (UID: "eb69e035-bb19-4881-8e0d-6799360fa05f"). InnerVolumeSpecName "kube-api-access-wrhsh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:03:08 crc kubenswrapper[5039]: I0130 14:03:08.376755 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb69e035-bb19-4881-8e0d-6799360fa05f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eb69e035-bb19-4881-8e0d-6799360fa05f" (UID: "eb69e035-bb19-4881-8e0d-6799360fa05f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:03:08 crc kubenswrapper[5039]: I0130 14:03:08.428326 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb69e035-bb19-4881-8e0d-6799360fa05f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 14:03:08 crc kubenswrapper[5039]: I0130 14:03:08.428369 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb69e035-bb19-4881-8e0d-6799360fa05f-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 14:03:08 crc kubenswrapper[5039]: I0130 14:03:08.428379 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrhsh\" (UniqueName: \"kubernetes.io/projected/eb69e035-bb19-4881-8e0d-6799360fa05f-kube-api-access-wrhsh\") on node \"crc\" DevicePath \"\"" Jan 30 14:03:08 crc kubenswrapper[5039]: I0130 14:03:08.874381 5039 generic.go:334] "Generic (PLEG): container finished" podID="eb69e035-bb19-4881-8e0d-6799360fa05f" containerID="f84e6222ca57274a15ec14925234c17daffe3498fb58988740c7c36458dd75bb" exitCode=0 Jan 30 14:03:08 crc kubenswrapper[5039]: I0130 14:03:08.874460 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nxj92" Jan 30 14:03:08 crc kubenswrapper[5039]: I0130 14:03:08.874463 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nxj92" event={"ID":"eb69e035-bb19-4881-8e0d-6799360fa05f","Type":"ContainerDied","Data":"f84e6222ca57274a15ec14925234c17daffe3498fb58988740c7c36458dd75bb"} Jan 30 14:03:08 crc kubenswrapper[5039]: I0130 14:03:08.874529 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nxj92" event={"ID":"eb69e035-bb19-4881-8e0d-6799360fa05f","Type":"ContainerDied","Data":"6bab690e0d780095415b7e73cb6cea165a71081b7e31a7102541db6103c40016"} Jan 30 14:03:08 crc kubenswrapper[5039]: I0130 14:03:08.874553 5039 scope.go:117] "RemoveContainer" containerID="f84e6222ca57274a15ec14925234c17daffe3498fb58988740c7c36458dd75bb" Jan 30 14:03:08 crc kubenswrapper[5039]: I0130 14:03:08.908547 5039 scope.go:117] "RemoveContainer" containerID="5315916be5eb1862281d49903b69a4a1275dd7875f6dd4d02654c47266cbe77d" Jan 30 14:03:08 crc kubenswrapper[5039]: I0130 14:03:08.915106 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nxj92"] Jan 30 14:03:08 crc kubenswrapper[5039]: I0130 14:03:08.921062 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nxj92"] Jan 30 14:03:08 crc kubenswrapper[5039]: I0130 14:03:08.954861 5039 scope.go:117] "RemoveContainer" containerID="faf219f616a975f41f543b177461438d8ee746b0eb32a05d6655827edf88f6aa" Jan 30 14:03:08 crc kubenswrapper[5039]: I0130 14:03:08.972553 5039 scope.go:117] "RemoveContainer" containerID="f84e6222ca57274a15ec14925234c17daffe3498fb58988740c7c36458dd75bb" Jan 30 14:03:08 crc kubenswrapper[5039]: E0130 14:03:08.974395 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f84e6222ca57274a15ec14925234c17daffe3498fb58988740c7c36458dd75bb\": container with ID starting with f84e6222ca57274a15ec14925234c17daffe3498fb58988740c7c36458dd75bb not found: ID does not exist" containerID="f84e6222ca57274a15ec14925234c17daffe3498fb58988740c7c36458dd75bb" Jan 30 14:03:08 crc kubenswrapper[5039]: I0130 14:03:08.974440 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f84e6222ca57274a15ec14925234c17daffe3498fb58988740c7c36458dd75bb"} err="failed to get container status \"f84e6222ca57274a15ec14925234c17daffe3498fb58988740c7c36458dd75bb\": rpc error: code = NotFound desc = could not find container \"f84e6222ca57274a15ec14925234c17daffe3498fb58988740c7c36458dd75bb\": container with ID starting with f84e6222ca57274a15ec14925234c17daffe3498fb58988740c7c36458dd75bb not found: ID does not exist" Jan 30 14:03:08 crc kubenswrapper[5039]: I0130 14:03:08.974467 5039 scope.go:117] "RemoveContainer" containerID="5315916be5eb1862281d49903b69a4a1275dd7875f6dd4d02654c47266cbe77d" Jan 30 14:03:08 crc kubenswrapper[5039]: E0130 14:03:08.975079 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5315916be5eb1862281d49903b69a4a1275dd7875f6dd4d02654c47266cbe77d\": container with ID starting with 5315916be5eb1862281d49903b69a4a1275dd7875f6dd4d02654c47266cbe77d not found: ID does not exist" containerID="5315916be5eb1862281d49903b69a4a1275dd7875f6dd4d02654c47266cbe77d" Jan 30 14:03:08 crc kubenswrapper[5039]: I0130 14:03:08.975143 5039 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5315916be5eb1862281d49903b69a4a1275dd7875f6dd4d02654c47266cbe77d"} err="failed to get container status \"5315916be5eb1862281d49903b69a4a1275dd7875f6dd4d02654c47266cbe77d\": rpc error: code = NotFound desc = could not find container \"5315916be5eb1862281d49903b69a4a1275dd7875f6dd4d02654c47266cbe77d\": container with ID starting with 5315916be5eb1862281d49903b69a4a1275dd7875f6dd4d02654c47266cbe77d not found: ID does not exist" Jan 30 14:03:08 crc kubenswrapper[5039]: I0130 14:03:08.975207 5039 scope.go:117] "RemoveContainer" containerID="faf219f616a975f41f543b177461438d8ee746b0eb32a05d6655827edf88f6aa" Jan 30 14:03:08 crc kubenswrapper[5039]: E0130 14:03:08.975704 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"faf219f616a975f41f543b177461438d8ee746b0eb32a05d6655827edf88f6aa\": container with ID starting with faf219f616a975f41f543b177461438d8ee746b0eb32a05d6655827edf88f6aa not found: ID does not exist" containerID="faf219f616a975f41f543b177461438d8ee746b0eb32a05d6655827edf88f6aa" Jan 30 14:03:08 crc kubenswrapper[5039]: I0130 14:03:08.975766 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"faf219f616a975f41f543b177461438d8ee746b0eb32a05d6655827edf88f6aa"} err="failed to get container status \"faf219f616a975f41f543b177461438d8ee746b0eb32a05d6655827edf88f6aa\": rpc error: code = NotFound desc = could not find container \"faf219f616a975f41f543b177461438d8ee746b0eb32a05d6655827edf88f6aa\": container with ID starting with faf219f616a975f41f543b177461438d8ee746b0eb32a05d6655827edf88f6aa not found: ID does not exist" Jan 30 14:03:10 crc kubenswrapper[5039]: I0130 14:03:10.104741 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb69e035-bb19-4881-8e0d-6799360fa05f" path="/var/lib/kubelet/pods/eb69e035-bb19-4881-8e0d-6799360fa05f/volumes" Jan 30 14:03:21 crc kubenswrapper[5039]: I0130 14:03:21.165397 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cmtr4"] Jan 30 14:03:21 crc kubenswrapper[5039]: E0130 14:03:21.166912 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb69e035-bb19-4881-8e0d-6799360fa05f" containerName="extract-utilities" Jan 30 14:03:21 crc kubenswrapper[5039]: I0130 14:03:21.166943 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb69e035-bb19-4881-8e0d-6799360fa05f" containerName="extract-utilities" Jan 30 14:03:21 crc kubenswrapper[5039]: E0130 14:03:21.166969 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb69e035-bb19-4881-8e0d-6799360fa05f" containerName="registry-server" Jan 30 14:03:21 crc kubenswrapper[5039]: I0130 14:03:21.166978 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb69e035-bb19-4881-8e0d-6799360fa05f" containerName="registry-server" Jan 30 14:03:21 crc kubenswrapper[5039]: E0130 14:03:21.167047 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb69e035-bb19-4881-8e0d-6799360fa05f" containerName="extract-content" Jan 30 14:03:21 crc kubenswrapper[5039]: I0130 14:03:21.167057 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb69e035-bb19-4881-8e0d-6799360fa05f" containerName="extract-content" Jan 30 14:03:21 crc kubenswrapper[5039]: I0130 14:03:21.167257 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb69e035-bb19-4881-8e0d-6799360fa05f" 
containerName="registry-server" Jan 30 14:03:21 crc kubenswrapper[5039]: I0130 14:03:21.169238 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cmtr4" Jan 30 14:03:21 crc kubenswrapper[5039]: I0130 14:03:21.178050 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cmtr4"] Jan 30 14:03:21 crc kubenswrapper[5039]: I0130 14:03:21.277004 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsgxt\" (UniqueName: \"kubernetes.io/projected/6ff48489-7d56-4b54-bffd-7ac291c03e1b-kube-api-access-bsgxt\") pod \"redhat-marketplace-cmtr4\" (UID: \"6ff48489-7d56-4b54-bffd-7ac291c03e1b\") " pod="openshift-marketplace/redhat-marketplace-cmtr4" Jan 30 14:03:21 crc kubenswrapper[5039]: I0130 14:03:21.277439 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ff48489-7d56-4b54-bffd-7ac291c03e1b-catalog-content\") pod \"redhat-marketplace-cmtr4\" (UID: \"6ff48489-7d56-4b54-bffd-7ac291c03e1b\") " pod="openshift-marketplace/redhat-marketplace-cmtr4" Jan 30 14:03:21 crc kubenswrapper[5039]: I0130 14:03:21.277579 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ff48489-7d56-4b54-bffd-7ac291c03e1b-utilities\") pod \"redhat-marketplace-cmtr4\" (UID: \"6ff48489-7d56-4b54-bffd-7ac291c03e1b\") " pod="openshift-marketplace/redhat-marketplace-cmtr4" Jan 30 14:03:21 crc kubenswrapper[5039]: I0130 14:03:21.379233 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsgxt\" (UniqueName: \"kubernetes.io/projected/6ff48489-7d56-4b54-bffd-7ac291c03e1b-kube-api-access-bsgxt\") pod \"redhat-marketplace-cmtr4\" (UID: \"6ff48489-7d56-4b54-bffd-7ac291c03e1b\") " pod="openshift-marketplace/redhat-marketplace-cmtr4" Jan 30 14:03:21 crc kubenswrapper[5039]: I0130 14:03:21.379303 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ff48489-7d56-4b54-bffd-7ac291c03e1b-catalog-content\") pod \"redhat-marketplace-cmtr4\" (UID: \"6ff48489-7d56-4b54-bffd-7ac291c03e1b\") " pod="openshift-marketplace/redhat-marketplace-cmtr4" Jan 30 14:03:21 crc kubenswrapper[5039]: I0130 14:03:21.379377 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ff48489-7d56-4b54-bffd-7ac291c03e1b-utilities\") pod \"redhat-marketplace-cmtr4\" (UID: \"6ff48489-7d56-4b54-bffd-7ac291c03e1b\") " pod="openshift-marketplace/redhat-marketplace-cmtr4" Jan 30 14:03:21 crc kubenswrapper[5039]: I0130 14:03:21.379867 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ff48489-7d56-4b54-bffd-7ac291c03e1b-catalog-content\") pod \"redhat-marketplace-cmtr4\" (UID: \"6ff48489-7d56-4b54-bffd-7ac291c03e1b\") " pod="openshift-marketplace/redhat-marketplace-cmtr4" Jan 30 14:03:21 crc kubenswrapper[5039]: I0130 14:03:21.379900 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ff48489-7d56-4b54-bffd-7ac291c03e1b-utilities\") pod \"redhat-marketplace-cmtr4\" (UID: \"6ff48489-7d56-4b54-bffd-7ac291c03e1b\") " 
pod="openshift-marketplace/redhat-marketplace-cmtr4" Jan 30 14:03:21 crc kubenswrapper[5039]: I0130 14:03:21.410291 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsgxt\" (UniqueName: \"kubernetes.io/projected/6ff48489-7d56-4b54-bffd-7ac291c03e1b-kube-api-access-bsgxt\") pod \"redhat-marketplace-cmtr4\" (UID: \"6ff48489-7d56-4b54-bffd-7ac291c03e1b\") " pod="openshift-marketplace/redhat-marketplace-cmtr4" Jan 30 14:03:21 crc kubenswrapper[5039]: I0130 14:03:21.507118 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cmtr4" Jan 30 14:03:21 crc kubenswrapper[5039]: I0130 14:03:21.959358 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cmtr4"] Jan 30 14:03:21 crc kubenswrapper[5039]: I0130 14:03:21.998522 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cmtr4" event={"ID":"6ff48489-7d56-4b54-bffd-7ac291c03e1b","Type":"ContainerStarted","Data":"ff67be788ec0ec990437c11d1243ef4a96d6aad0d42af7424f9593662a6fd679"} Jan 30 14:03:23 crc kubenswrapper[5039]: I0130 14:03:23.006472 5039 generic.go:334] "Generic (PLEG): container finished" podID="6ff48489-7d56-4b54-bffd-7ac291c03e1b" containerID="157a3355fea3245b3991bfb6190f9982346bd570c2f39d321286620da1aa882f" exitCode=0 Jan 30 14:03:23 crc kubenswrapper[5039]: I0130 14:03:23.006602 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cmtr4" event={"ID":"6ff48489-7d56-4b54-bffd-7ac291c03e1b","Type":"ContainerDied","Data":"157a3355fea3245b3991bfb6190f9982346bd570c2f39d321286620da1aa882f"} Jan 30 14:03:23 crc kubenswrapper[5039]: I0130 14:03:23.008355 5039 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 14:03:24 crc kubenswrapper[5039]: I0130 14:03:24.018847 5039 generic.go:334] "Generic (PLEG): container finished" podID="6ff48489-7d56-4b54-bffd-7ac291c03e1b" containerID="2fc08e9357c30401e7b6a2ef86325720aabb5ca646fafc93aa5400878f905a52" exitCode=0 Jan 30 14:03:24 crc kubenswrapper[5039]: I0130 14:03:24.018960 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cmtr4" event={"ID":"6ff48489-7d56-4b54-bffd-7ac291c03e1b","Type":"ContainerDied","Data":"2fc08e9357c30401e7b6a2ef86325720aabb5ca646fafc93aa5400878f905a52"} Jan 30 14:03:25 crc kubenswrapper[5039]: I0130 14:03:25.030510 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cmtr4" event={"ID":"6ff48489-7d56-4b54-bffd-7ac291c03e1b","Type":"ContainerStarted","Data":"573295b07e66fba17ab9045407649c258047077046df99f594e57c3c15cf0e5d"} Jan 30 14:03:25 crc kubenswrapper[5039]: I0130 14:03:25.056461 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cmtr4" podStartSLOduration=2.660398301 podStartE2EDuration="4.056441659s" podCreationTimestamp="2026-01-30 14:03:21 +0000 UTC" firstStartedPulling="2026-01-30 14:03:23.00812465 +0000 UTC m=+3567.668805877" lastFinishedPulling="2026-01-30 14:03:24.404167988 +0000 UTC m=+3569.064849235" observedRunningTime="2026-01-30 14:03:25.048982059 +0000 UTC m=+3569.709663296" watchObservedRunningTime="2026-01-30 14:03:25.056441659 +0000 UTC m=+3569.717122876" Jan 30 14:03:31 crc kubenswrapper[5039]: I0130 14:03:31.507909 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-cmtr4" Jan 30 14:03:31 crc kubenswrapper[5039]: I0130 14:03:31.508603 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cmtr4" Jan 30 14:03:31 crc kubenswrapper[5039]: I0130 14:03:31.555397 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cmtr4" Jan 30 14:03:32 crc kubenswrapper[5039]: I0130 14:03:32.139900 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cmtr4" Jan 30 14:03:32 crc kubenswrapper[5039]: I0130 14:03:32.190453 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cmtr4"] Jan 30 14:03:34 crc kubenswrapper[5039]: I0130 14:03:34.106618 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cmtr4" podUID="6ff48489-7d56-4b54-bffd-7ac291c03e1b" containerName="registry-server" containerID="cri-o://573295b07e66fba17ab9045407649c258047077046df99f594e57c3c15cf0e5d" gracePeriod=2 Jan 30 14:03:35 crc kubenswrapper[5039]: I0130 14:03:35.078723 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cmtr4" Jan 30 14:03:35 crc kubenswrapper[5039]: I0130 14:03:35.125103 5039 generic.go:334] "Generic (PLEG): container finished" podID="6ff48489-7d56-4b54-bffd-7ac291c03e1b" containerID="573295b07e66fba17ab9045407649c258047077046df99f594e57c3c15cf0e5d" exitCode=0 Jan 30 14:03:35 crc kubenswrapper[5039]: I0130 14:03:35.125153 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cmtr4" event={"ID":"6ff48489-7d56-4b54-bffd-7ac291c03e1b","Type":"ContainerDied","Data":"573295b07e66fba17ab9045407649c258047077046df99f594e57c3c15cf0e5d"} Jan 30 14:03:35 crc kubenswrapper[5039]: I0130 14:03:35.125178 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cmtr4" event={"ID":"6ff48489-7d56-4b54-bffd-7ac291c03e1b","Type":"ContainerDied","Data":"ff67be788ec0ec990437c11d1243ef4a96d6aad0d42af7424f9593662a6fd679"} Jan 30 14:03:35 crc kubenswrapper[5039]: I0130 14:03:35.125194 5039 scope.go:117] "RemoveContainer" containerID="573295b07e66fba17ab9045407649c258047077046df99f594e57c3c15cf0e5d" Jan 30 14:03:35 crc kubenswrapper[5039]: I0130 14:03:35.125312 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cmtr4" Jan 30 14:03:35 crc kubenswrapper[5039]: I0130 14:03:35.144653 5039 scope.go:117] "RemoveContainer" containerID="2fc08e9357c30401e7b6a2ef86325720aabb5ca646fafc93aa5400878f905a52" Jan 30 14:03:35 crc kubenswrapper[5039]: I0130 14:03:35.162905 5039 scope.go:117] "RemoveContainer" containerID="157a3355fea3245b3991bfb6190f9982346bd570c2f39d321286620da1aa882f" Jan 30 14:03:35 crc kubenswrapper[5039]: I0130 14:03:35.188303 5039 scope.go:117] "RemoveContainer" containerID="573295b07e66fba17ab9045407649c258047077046df99f594e57c3c15cf0e5d" Jan 30 14:03:35 crc kubenswrapper[5039]: E0130 14:03:35.188889 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"573295b07e66fba17ab9045407649c258047077046df99f594e57c3c15cf0e5d\": container with ID starting with 573295b07e66fba17ab9045407649c258047077046df99f594e57c3c15cf0e5d not found: ID does not exist" containerID="573295b07e66fba17ab9045407649c258047077046df99f594e57c3c15cf0e5d" Jan 30 14:03:35 crc kubenswrapper[5039]: I0130 14:03:35.188939 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"573295b07e66fba17ab9045407649c258047077046df99f594e57c3c15cf0e5d"} err="failed to get container status \"573295b07e66fba17ab9045407649c258047077046df99f594e57c3c15cf0e5d\": rpc error: code = NotFound desc = could not find container \"573295b07e66fba17ab9045407649c258047077046df99f594e57c3c15cf0e5d\": container with ID starting with 573295b07e66fba17ab9045407649c258047077046df99f594e57c3c15cf0e5d not found: ID does not exist" Jan 30 14:03:35 crc kubenswrapper[5039]: I0130 14:03:35.188963 5039 scope.go:117] "RemoveContainer" containerID="2fc08e9357c30401e7b6a2ef86325720aabb5ca646fafc93aa5400878f905a52" Jan 30 14:03:35 crc kubenswrapper[5039]: E0130 14:03:35.189324 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fc08e9357c30401e7b6a2ef86325720aabb5ca646fafc93aa5400878f905a52\": container with ID starting with 2fc08e9357c30401e7b6a2ef86325720aabb5ca646fafc93aa5400878f905a52 not found: ID does not exist" containerID="2fc08e9357c30401e7b6a2ef86325720aabb5ca646fafc93aa5400878f905a52" Jan 30 14:03:35 crc kubenswrapper[5039]: I0130 14:03:35.189353 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fc08e9357c30401e7b6a2ef86325720aabb5ca646fafc93aa5400878f905a52"} err="failed to get container status \"2fc08e9357c30401e7b6a2ef86325720aabb5ca646fafc93aa5400878f905a52\": rpc error: code = NotFound desc = could not find container \"2fc08e9357c30401e7b6a2ef86325720aabb5ca646fafc93aa5400878f905a52\": container with ID starting with 2fc08e9357c30401e7b6a2ef86325720aabb5ca646fafc93aa5400878f905a52 not found: ID does not exist" Jan 30 14:03:35 crc kubenswrapper[5039]: I0130 14:03:35.189372 5039 scope.go:117] "RemoveContainer" containerID="157a3355fea3245b3991bfb6190f9982346bd570c2f39d321286620da1aa882f" Jan 30 14:03:35 crc kubenswrapper[5039]: E0130 14:03:35.189658 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"157a3355fea3245b3991bfb6190f9982346bd570c2f39d321286620da1aa882f\": container with ID starting with 157a3355fea3245b3991bfb6190f9982346bd570c2f39d321286620da1aa882f not found: ID does not exist" containerID="157a3355fea3245b3991bfb6190f9982346bd570c2f39d321286620da1aa882f" 
Jan 30 14:03:35 crc kubenswrapper[5039]: I0130 14:03:35.189713 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"157a3355fea3245b3991bfb6190f9982346bd570c2f39d321286620da1aa882f"} err="failed to get container status \"157a3355fea3245b3991bfb6190f9982346bd570c2f39d321286620da1aa882f\": rpc error: code = NotFound desc = could not find container \"157a3355fea3245b3991bfb6190f9982346bd570c2f39d321286620da1aa882f\": container with ID starting with 157a3355fea3245b3991bfb6190f9982346bd570c2f39d321286620da1aa882f not found: ID does not exist" Jan 30 14:03:35 crc kubenswrapper[5039]: I0130 14:03:35.209429 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ff48489-7d56-4b54-bffd-7ac291c03e1b-catalog-content\") pod \"6ff48489-7d56-4b54-bffd-7ac291c03e1b\" (UID: \"6ff48489-7d56-4b54-bffd-7ac291c03e1b\") " Jan 30 14:03:35 crc kubenswrapper[5039]: I0130 14:03:35.209820 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bsgxt\" (UniqueName: \"kubernetes.io/projected/6ff48489-7d56-4b54-bffd-7ac291c03e1b-kube-api-access-bsgxt\") pod \"6ff48489-7d56-4b54-bffd-7ac291c03e1b\" (UID: \"6ff48489-7d56-4b54-bffd-7ac291c03e1b\") " Jan 30 14:03:35 crc kubenswrapper[5039]: I0130 14:03:35.209903 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ff48489-7d56-4b54-bffd-7ac291c03e1b-utilities\") pod \"6ff48489-7d56-4b54-bffd-7ac291c03e1b\" (UID: \"6ff48489-7d56-4b54-bffd-7ac291c03e1b\") " Jan 30 14:03:35 crc kubenswrapper[5039]: I0130 14:03:35.210780 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ff48489-7d56-4b54-bffd-7ac291c03e1b-utilities" (OuterVolumeSpecName: "utilities") pod "6ff48489-7d56-4b54-bffd-7ac291c03e1b" (UID: "6ff48489-7d56-4b54-bffd-7ac291c03e1b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:03:35 crc kubenswrapper[5039]: I0130 14:03:35.216599 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ff48489-7d56-4b54-bffd-7ac291c03e1b-kube-api-access-bsgxt" (OuterVolumeSpecName: "kube-api-access-bsgxt") pod "6ff48489-7d56-4b54-bffd-7ac291c03e1b" (UID: "6ff48489-7d56-4b54-bffd-7ac291c03e1b"). InnerVolumeSpecName "kube-api-access-bsgxt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:03:35 crc kubenswrapper[5039]: I0130 14:03:35.231978 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ff48489-7d56-4b54-bffd-7ac291c03e1b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6ff48489-7d56-4b54-bffd-7ac291c03e1b" (UID: "6ff48489-7d56-4b54-bffd-7ac291c03e1b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:03:35 crc kubenswrapper[5039]: I0130 14:03:35.312193 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bsgxt\" (UniqueName: \"kubernetes.io/projected/6ff48489-7d56-4b54-bffd-7ac291c03e1b-kube-api-access-bsgxt\") on node \"crc\" DevicePath \"\"" Jan 30 14:03:35 crc kubenswrapper[5039]: I0130 14:03:35.312242 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ff48489-7d56-4b54-bffd-7ac291c03e1b-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 14:03:35 crc kubenswrapper[5039]: I0130 14:03:35.312255 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ff48489-7d56-4b54-bffd-7ac291c03e1b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 14:03:35 crc kubenswrapper[5039]: I0130 14:03:35.459186 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cmtr4"] Jan 30 14:03:35 crc kubenswrapper[5039]: I0130 14:03:35.464446 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cmtr4"] Jan 30 14:03:36 crc kubenswrapper[5039]: I0130 14:03:36.102573 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ff48489-7d56-4b54-bffd-7ac291c03e1b" path="/var/lib/kubelet/pods/6ff48489-7d56-4b54-bffd-7ac291c03e1b/volumes" Jan 30 14:03:37 crc kubenswrapper[5039]: I0130 14:03:37.742637 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:03:37 crc kubenswrapper[5039]: I0130 14:03:37.743122 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:04:07 crc kubenswrapper[5039]: I0130 14:04:07.741950 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:04:07 crc kubenswrapper[5039]: I0130 14:04:07.742530 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:04:07 crc kubenswrapper[5039]: I0130 14:04:07.742570 5039 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" Jan 30 14:04:07 crc kubenswrapper[5039]: I0130 14:04:07.743003 5039 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7486cf8361eb3584237f53149880217a2f2d0e230223082806ffe1160cd89a39"} pod="openshift-machine-config-operator/machine-config-daemon-t2btn" containerMessage="Container machine-config-daemon 
failed liveness probe, will be restarted" Jan 30 14:04:07 crc kubenswrapper[5039]: I0130 14:04:07.743089 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" containerID="cri-o://7486cf8361eb3584237f53149880217a2f2d0e230223082806ffe1160cd89a39" gracePeriod=600 Jan 30 14:04:08 crc kubenswrapper[5039]: I0130 14:04:08.429391 5039 generic.go:334] "Generic (PLEG): container finished" podID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerID="7486cf8361eb3584237f53149880217a2f2d0e230223082806ffe1160cd89a39" exitCode=0 Jan 30 14:04:08 crc kubenswrapper[5039]: I0130 14:04:08.429472 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerDied","Data":"7486cf8361eb3584237f53149880217a2f2d0e230223082806ffe1160cd89a39"} Jan 30 14:04:08 crc kubenswrapper[5039]: I0130 14:04:08.429773 5039 scope.go:117] "RemoveContainer" containerID="87bbf19118f7061dac43073a1ad9a3bab48c45eba9c7608a532f004ca5be04c7" Jan 30 14:04:09 crc kubenswrapper[5039]: I0130 14:04:09.439196 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerStarted","Data":"bf7983be0b75bee401cbc263ace4f19bafb888e5b437e6a6c39bbb288eb42c44"} Jan 30 14:06:37 crc kubenswrapper[5039]: I0130 14:06:37.742134 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:06:37 crc kubenswrapper[5039]: I0130 14:06:37.742857 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:07:07 crc kubenswrapper[5039]: I0130 14:07:07.742160 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:07:07 crc kubenswrapper[5039]: I0130 14:07:07.742843 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:07:37 crc kubenswrapper[5039]: I0130 14:07:37.742174 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:07:37 crc kubenswrapper[5039]: I0130 14:07:37.742796 5039 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:07:37 crc kubenswrapper[5039]: I0130 14:07:37.742863 5039 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" Jan 30 14:07:37 crc kubenswrapper[5039]: I0130 14:07:37.743581 5039 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bf7983be0b75bee401cbc263ace4f19bafb888e5b437e6a6c39bbb288eb42c44"} pod="openshift-machine-config-operator/machine-config-daemon-t2btn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 14:07:37 crc kubenswrapper[5039]: I0130 14:07:37.743647 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" containerID="cri-o://bf7983be0b75bee401cbc263ace4f19bafb888e5b437e6a6c39bbb288eb42c44" gracePeriod=600 Jan 30 14:07:37 crc kubenswrapper[5039]: E0130 14:07:37.864670 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:07:37 crc kubenswrapper[5039]: I0130 14:07:37.870507 5039 generic.go:334] "Generic (PLEG): container finished" podID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerID="bf7983be0b75bee401cbc263ace4f19bafb888e5b437e6a6c39bbb288eb42c44" exitCode=0 Jan 30 14:07:37 crc kubenswrapper[5039]: I0130 14:07:37.870551 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerDied","Data":"bf7983be0b75bee401cbc263ace4f19bafb888e5b437e6a6c39bbb288eb42c44"} Jan 30 14:07:37 crc kubenswrapper[5039]: I0130 14:07:37.870587 5039 scope.go:117] "RemoveContainer" containerID="7486cf8361eb3584237f53149880217a2f2d0e230223082806ffe1160cd89a39" Jan 30 14:07:37 crc kubenswrapper[5039]: I0130 14:07:37.871249 5039 scope.go:117] "RemoveContainer" containerID="bf7983be0b75bee401cbc263ace4f19bafb888e5b437e6a6c39bbb288eb42c44" Jan 30 14:07:37 crc kubenswrapper[5039]: E0130 14:07:37.871487 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:07:52 crc kubenswrapper[5039]: I0130 14:07:52.093987 5039 scope.go:117] "RemoveContainer" containerID="bf7983be0b75bee401cbc263ace4f19bafb888e5b437e6a6c39bbb288eb42c44" Jan 30 14:07:52 crc kubenswrapper[5039]: E0130 14:07:52.094834 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:08:06 crc kubenswrapper[5039]: I0130 14:08:06.111663 5039 scope.go:117] "RemoveContainer" containerID="bf7983be0b75bee401cbc263ace4f19bafb888e5b437e6a6c39bbb288eb42c44" Jan 30 14:08:06 crc kubenswrapper[5039]: E0130 14:08:06.112521 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:08:18 crc kubenswrapper[5039]: I0130 14:08:18.094344 5039 scope.go:117] "RemoveContainer" containerID="bf7983be0b75bee401cbc263ace4f19bafb888e5b437e6a6c39bbb288eb42c44" Jan 30 14:08:18 crc kubenswrapper[5039]: E0130 14:08:18.095128 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:08:31 crc kubenswrapper[5039]: I0130 14:08:31.094186 5039 scope.go:117] "RemoveContainer" containerID="bf7983be0b75bee401cbc263ace4f19bafb888e5b437e6a6c39bbb288eb42c44" Jan 30 14:08:31 crc kubenswrapper[5039]: E0130 14:08:31.094923 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:08:45 crc kubenswrapper[5039]: I0130 14:08:45.092979 5039 scope.go:117] "RemoveContainer" containerID="bf7983be0b75bee401cbc263ace4f19bafb888e5b437e6a6c39bbb288eb42c44" Jan 30 14:08:45 crc kubenswrapper[5039]: E0130 14:08:45.094407 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:08:59 crc kubenswrapper[5039]: I0130 14:08:59.093956 5039 scope.go:117] "RemoveContainer" containerID="bf7983be0b75bee401cbc263ace4f19bafb888e5b437e6a6c39bbb288eb42c44" Jan 30 14:08:59 crc kubenswrapper[5039]: E0130 14:08:59.095311 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:09:11 crc kubenswrapper[5039]: I0130 14:09:11.094645 5039 scope.go:117] "RemoveContainer" containerID="bf7983be0b75bee401cbc263ace4f19bafb888e5b437e6a6c39bbb288eb42c44" Jan 30 14:09:11 crc kubenswrapper[5039]: E0130 14:09:11.095504 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:09:26 crc kubenswrapper[5039]: I0130 14:09:26.102892 5039 scope.go:117] "RemoveContainer" containerID="bf7983be0b75bee401cbc263ace4f19bafb888e5b437e6a6c39bbb288eb42c44" Jan 30 14:09:26 crc kubenswrapper[5039]: E0130 14:09:26.126714 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:09:39 crc kubenswrapper[5039]: I0130 14:09:39.093887 5039 scope.go:117] "RemoveContainer" containerID="bf7983be0b75bee401cbc263ace4f19bafb888e5b437e6a6c39bbb288eb42c44" Jan 30 14:09:39 crc kubenswrapper[5039]: E0130 14:09:39.094696 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:09:54 crc kubenswrapper[5039]: I0130 14:09:54.095537 5039 scope.go:117] "RemoveContainer" containerID="bf7983be0b75bee401cbc263ace4f19bafb888e5b437e6a6c39bbb288eb42c44" Jan 30 14:09:54 crc kubenswrapper[5039]: E0130 14:09:54.096365 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:10:06 crc kubenswrapper[5039]: I0130 14:10:06.099429 5039 scope.go:117] "RemoveContainer" containerID="bf7983be0b75bee401cbc263ace4f19bafb888e5b437e6a6c39bbb288eb42c44" Jan 30 14:10:06 crc kubenswrapper[5039]: E0130 14:10:06.100942 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" 
podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:10:21 crc kubenswrapper[5039]: I0130 14:10:21.093485 5039 scope.go:117] "RemoveContainer" containerID="bf7983be0b75bee401cbc263ace4f19bafb888e5b437e6a6c39bbb288eb42c44" Jan 30 14:10:21 crc kubenswrapper[5039]: E0130 14:10:21.094340 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:10:32 crc kubenswrapper[5039]: I0130 14:10:32.093597 5039 scope.go:117] "RemoveContainer" containerID="bf7983be0b75bee401cbc263ace4f19bafb888e5b437e6a6c39bbb288eb42c44" Jan 30 14:10:32 crc kubenswrapper[5039]: E0130 14:10:32.094557 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:10:44 crc kubenswrapper[5039]: I0130 14:10:44.093458 5039 scope.go:117] "RemoveContainer" containerID="bf7983be0b75bee401cbc263ace4f19bafb888e5b437e6a6c39bbb288eb42c44" Jan 30 14:10:44 crc kubenswrapper[5039]: E0130 14:10:44.094099 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:10:59 crc kubenswrapper[5039]: I0130 14:10:59.093085 5039 scope.go:117] "RemoveContainer" containerID="bf7983be0b75bee401cbc263ace4f19bafb888e5b437e6a6c39bbb288eb42c44" Jan 30 14:10:59 crc kubenswrapper[5039]: E0130 14:10:59.093952 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:11:00 crc kubenswrapper[5039]: I0130 14:11:00.325750 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jxlns"] Jan 30 14:11:00 crc kubenswrapper[5039]: E0130 14:11:00.326444 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ff48489-7d56-4b54-bffd-7ac291c03e1b" containerName="extract-content" Jan 30 14:11:00 crc kubenswrapper[5039]: I0130 14:11:00.326460 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ff48489-7d56-4b54-bffd-7ac291c03e1b" containerName="extract-content" Jan 30 14:11:00 crc kubenswrapper[5039]: E0130 14:11:00.326492 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ff48489-7d56-4b54-bffd-7ac291c03e1b" containerName="registry-server" Jan 30 14:11:00 crc kubenswrapper[5039]: I0130 
14:11:00.326501 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ff48489-7d56-4b54-bffd-7ac291c03e1b" containerName="registry-server" Jan 30 14:11:00 crc kubenswrapper[5039]: E0130 14:11:00.326515 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ff48489-7d56-4b54-bffd-7ac291c03e1b" containerName="extract-utilities" Jan 30 14:11:00 crc kubenswrapper[5039]: I0130 14:11:00.326526 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ff48489-7d56-4b54-bffd-7ac291c03e1b" containerName="extract-utilities" Jan 30 14:11:00 crc kubenswrapper[5039]: I0130 14:11:00.326679 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ff48489-7d56-4b54-bffd-7ac291c03e1b" containerName="registry-server" Jan 30 14:11:00 crc kubenswrapper[5039]: I0130 14:11:00.327909 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jxlns" Jan 30 14:11:00 crc kubenswrapper[5039]: I0130 14:11:00.339820 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jxlns"] Jan 30 14:11:00 crc kubenswrapper[5039]: I0130 14:11:00.365254 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69c78615-cbf8-45c1-a9eb-06c248a1e4d4-catalog-content\") pod \"redhat-operators-jxlns\" (UID: \"69c78615-cbf8-45c1-a9eb-06c248a1e4d4\") " pod="openshift-marketplace/redhat-operators-jxlns" Jan 30 14:11:00 crc kubenswrapper[5039]: I0130 14:11:00.365432 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69c78615-cbf8-45c1-a9eb-06c248a1e4d4-utilities\") pod \"redhat-operators-jxlns\" (UID: \"69c78615-cbf8-45c1-a9eb-06c248a1e4d4\") " pod="openshift-marketplace/redhat-operators-jxlns" Jan 30 14:11:00 crc kubenswrapper[5039]: I0130 14:11:00.365483 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hgnp\" (UniqueName: \"kubernetes.io/projected/69c78615-cbf8-45c1-a9eb-06c248a1e4d4-kube-api-access-9hgnp\") pod \"redhat-operators-jxlns\" (UID: \"69c78615-cbf8-45c1-a9eb-06c248a1e4d4\") " pod="openshift-marketplace/redhat-operators-jxlns" Jan 30 14:11:00 crc kubenswrapper[5039]: I0130 14:11:00.466766 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69c78615-cbf8-45c1-a9eb-06c248a1e4d4-catalog-content\") pod \"redhat-operators-jxlns\" (UID: \"69c78615-cbf8-45c1-a9eb-06c248a1e4d4\") " pod="openshift-marketplace/redhat-operators-jxlns" Jan 30 14:11:00 crc kubenswrapper[5039]: I0130 14:11:00.466834 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69c78615-cbf8-45c1-a9eb-06c248a1e4d4-utilities\") pod \"redhat-operators-jxlns\" (UID: \"69c78615-cbf8-45c1-a9eb-06c248a1e4d4\") " pod="openshift-marketplace/redhat-operators-jxlns" Jan 30 14:11:00 crc kubenswrapper[5039]: I0130 14:11:00.466879 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hgnp\" (UniqueName: \"kubernetes.io/projected/69c78615-cbf8-45c1-a9eb-06c248a1e4d4-kube-api-access-9hgnp\") pod \"redhat-operators-jxlns\" (UID: \"69c78615-cbf8-45c1-a9eb-06c248a1e4d4\") " pod="openshift-marketplace/redhat-operators-jxlns" Jan 30 14:11:00 crc kubenswrapper[5039]: 
I0130 14:11:00.467472 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69c78615-cbf8-45c1-a9eb-06c248a1e4d4-catalog-content\") pod \"redhat-operators-jxlns\" (UID: \"69c78615-cbf8-45c1-a9eb-06c248a1e4d4\") " pod="openshift-marketplace/redhat-operators-jxlns" Jan 30 14:11:00 crc kubenswrapper[5039]: I0130 14:11:00.467502 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69c78615-cbf8-45c1-a9eb-06c248a1e4d4-utilities\") pod \"redhat-operators-jxlns\" (UID: \"69c78615-cbf8-45c1-a9eb-06c248a1e4d4\") " pod="openshift-marketplace/redhat-operators-jxlns" Jan 30 14:11:00 crc kubenswrapper[5039]: I0130 14:11:00.793629 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hgnp\" (UniqueName: \"kubernetes.io/projected/69c78615-cbf8-45c1-a9eb-06c248a1e4d4-kube-api-access-9hgnp\") pod \"redhat-operators-jxlns\" (UID: \"69c78615-cbf8-45c1-a9eb-06c248a1e4d4\") " pod="openshift-marketplace/redhat-operators-jxlns" Jan 30 14:11:00 crc kubenswrapper[5039]: I0130 14:11:00.947870 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jxlns" Jan 30 14:11:01 crc kubenswrapper[5039]: I0130 14:11:01.477064 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jxlns"] Jan 30 14:11:02 crc kubenswrapper[5039]: I0130 14:11:02.244796 5039 generic.go:334] "Generic (PLEG): container finished" podID="69c78615-cbf8-45c1-a9eb-06c248a1e4d4" containerID="3c892e5eb1c4a40373738de6e6ffc6114a508d10815b9e0dc18799f7ae0ee7d3" exitCode=0 Jan 30 14:11:02 crc kubenswrapper[5039]: I0130 14:11:02.244871 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jxlns" event={"ID":"69c78615-cbf8-45c1-a9eb-06c248a1e4d4","Type":"ContainerDied","Data":"3c892e5eb1c4a40373738de6e6ffc6114a508d10815b9e0dc18799f7ae0ee7d3"} Jan 30 14:11:02 crc kubenswrapper[5039]: I0130 14:11:02.245139 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jxlns" event={"ID":"69c78615-cbf8-45c1-a9eb-06c248a1e4d4","Type":"ContainerStarted","Data":"9eecaa171402aa701714aefd5baa938f4dc33b8c0296b583fac5e25547afa6ab"} Jan 30 14:11:02 crc kubenswrapper[5039]: I0130 14:11:02.246834 5039 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 14:11:03 crc kubenswrapper[5039]: I0130 14:11:03.256438 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jxlns" event={"ID":"69c78615-cbf8-45c1-a9eb-06c248a1e4d4","Type":"ContainerStarted","Data":"8ec247eed09a6976a4efbfb1356664c792b1ee3763a9fb5bcbe463b5ed906daa"} Jan 30 14:11:04 crc kubenswrapper[5039]: I0130 14:11:04.265380 5039 generic.go:334] "Generic (PLEG): container finished" podID="69c78615-cbf8-45c1-a9eb-06c248a1e4d4" containerID="8ec247eed09a6976a4efbfb1356664c792b1ee3763a9fb5bcbe463b5ed906daa" exitCode=0 Jan 30 14:11:04 crc kubenswrapper[5039]: I0130 14:11:04.265432 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jxlns" event={"ID":"69c78615-cbf8-45c1-a9eb-06c248a1e4d4","Type":"ContainerDied","Data":"8ec247eed09a6976a4efbfb1356664c792b1ee3763a9fb5bcbe463b5ed906daa"} Jan 30 14:11:05 crc kubenswrapper[5039]: I0130 14:11:05.274355 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-jxlns" event={"ID":"69c78615-cbf8-45c1-a9eb-06c248a1e4d4","Type":"ContainerStarted","Data":"42bd88b0b80ae4393e1f55d71f1461d8c418369c09dac0566163edd0d3fccc21"} Jan 30 14:11:05 crc kubenswrapper[5039]: I0130 14:11:05.295921 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jxlns" podStartSLOduration=2.868579919 podStartE2EDuration="5.295905896s" podCreationTimestamp="2026-01-30 14:11:00 +0000 UTC" firstStartedPulling="2026-01-30 14:11:02.246552584 +0000 UTC m=+4026.907233811" lastFinishedPulling="2026-01-30 14:11:04.673878561 +0000 UTC m=+4029.334559788" observedRunningTime="2026-01-30 14:11:05.294293252 +0000 UTC m=+4029.954974479" watchObservedRunningTime="2026-01-30 14:11:05.295905896 +0000 UTC m=+4029.956587123" Jan 30 14:11:10 crc kubenswrapper[5039]: I0130 14:11:10.948102 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jxlns" Jan 30 14:11:10 crc kubenswrapper[5039]: I0130 14:11:10.948739 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jxlns" Jan 30 14:11:10 crc kubenswrapper[5039]: I0130 14:11:10.988779 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jxlns" Jan 30 14:11:11 crc kubenswrapper[5039]: I0130 14:11:11.622835 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jxlns" Jan 30 14:11:11 crc kubenswrapper[5039]: I0130 14:11:11.666288 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jxlns"] Jan 30 14:11:13 crc kubenswrapper[5039]: I0130 14:11:13.094074 5039 scope.go:117] "RemoveContainer" containerID="bf7983be0b75bee401cbc263ace4f19bafb888e5b437e6a6c39bbb288eb42c44" Jan 30 14:11:13 crc kubenswrapper[5039]: E0130 14:11:13.094572 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:11:13 crc kubenswrapper[5039]: I0130 14:11:13.322718 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jxlns" podUID="69c78615-cbf8-45c1-a9eb-06c248a1e4d4" containerName="registry-server" containerID="cri-o://42bd88b0b80ae4393e1f55d71f1461d8c418369c09dac0566163edd0d3fccc21" gracePeriod=2 Jan 30 14:11:13 crc kubenswrapper[5039]: I0130 14:11:13.705930 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jxlns" Jan 30 14:11:13 crc kubenswrapper[5039]: I0130 14:11:13.866926 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hgnp\" (UniqueName: \"kubernetes.io/projected/69c78615-cbf8-45c1-a9eb-06c248a1e4d4-kube-api-access-9hgnp\") pod \"69c78615-cbf8-45c1-a9eb-06c248a1e4d4\" (UID: \"69c78615-cbf8-45c1-a9eb-06c248a1e4d4\") " Jan 30 14:11:13 crc kubenswrapper[5039]: I0130 14:11:13.867029 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69c78615-cbf8-45c1-a9eb-06c248a1e4d4-catalog-content\") pod \"69c78615-cbf8-45c1-a9eb-06c248a1e4d4\" (UID: \"69c78615-cbf8-45c1-a9eb-06c248a1e4d4\") " Jan 30 14:11:13 crc kubenswrapper[5039]: I0130 14:11:13.868191 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69c78615-cbf8-45c1-a9eb-06c248a1e4d4-utilities" (OuterVolumeSpecName: "utilities") pod "69c78615-cbf8-45c1-a9eb-06c248a1e4d4" (UID: "69c78615-cbf8-45c1-a9eb-06c248a1e4d4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:11:13 crc kubenswrapper[5039]: I0130 14:11:13.867135 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69c78615-cbf8-45c1-a9eb-06c248a1e4d4-utilities\") pod \"69c78615-cbf8-45c1-a9eb-06c248a1e4d4\" (UID: \"69c78615-cbf8-45c1-a9eb-06c248a1e4d4\") " Jan 30 14:11:13 crc kubenswrapper[5039]: I0130 14:11:13.868978 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69c78615-cbf8-45c1-a9eb-06c248a1e4d4-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:13 crc kubenswrapper[5039]: I0130 14:11:13.872678 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69c78615-cbf8-45c1-a9eb-06c248a1e4d4-kube-api-access-9hgnp" (OuterVolumeSpecName: "kube-api-access-9hgnp") pod "69c78615-cbf8-45c1-a9eb-06c248a1e4d4" (UID: "69c78615-cbf8-45c1-a9eb-06c248a1e4d4"). InnerVolumeSpecName "kube-api-access-9hgnp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:11:13 crc kubenswrapper[5039]: I0130 14:11:13.969728 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hgnp\" (UniqueName: \"kubernetes.io/projected/69c78615-cbf8-45c1-a9eb-06c248a1e4d4-kube-api-access-9hgnp\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:14 crc kubenswrapper[5039]: I0130 14:11:14.332897 5039 generic.go:334] "Generic (PLEG): container finished" podID="69c78615-cbf8-45c1-a9eb-06c248a1e4d4" containerID="42bd88b0b80ae4393e1f55d71f1461d8c418369c09dac0566163edd0d3fccc21" exitCode=0 Jan 30 14:11:14 crc kubenswrapper[5039]: I0130 14:11:14.332958 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jxlns" event={"ID":"69c78615-cbf8-45c1-a9eb-06c248a1e4d4","Type":"ContainerDied","Data":"42bd88b0b80ae4393e1f55d71f1461d8c418369c09dac0566163edd0d3fccc21"} Jan 30 14:11:14 crc kubenswrapper[5039]: I0130 14:11:14.333040 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jxlns" event={"ID":"69c78615-cbf8-45c1-a9eb-06c248a1e4d4","Type":"ContainerDied","Data":"9eecaa171402aa701714aefd5baa938f4dc33b8c0296b583fac5e25547afa6ab"} Jan 30 14:11:14 crc kubenswrapper[5039]: I0130 14:11:14.333081 5039 scope.go:117] "RemoveContainer" containerID="42bd88b0b80ae4393e1f55d71f1461d8c418369c09dac0566163edd0d3fccc21" Jan 30 14:11:14 crc kubenswrapper[5039]: I0130 14:11:14.334548 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jxlns" Jan 30 14:11:14 crc kubenswrapper[5039]: I0130 14:11:14.350803 5039 scope.go:117] "RemoveContainer" containerID="8ec247eed09a6976a4efbfb1356664c792b1ee3763a9fb5bcbe463b5ed906daa" Jan 30 14:11:14 crc kubenswrapper[5039]: I0130 14:11:14.369101 5039 scope.go:117] "RemoveContainer" containerID="3c892e5eb1c4a40373738de6e6ffc6114a508d10815b9e0dc18799f7ae0ee7d3" Jan 30 14:11:14 crc kubenswrapper[5039]: I0130 14:11:14.396533 5039 scope.go:117] "RemoveContainer" containerID="42bd88b0b80ae4393e1f55d71f1461d8c418369c09dac0566163edd0d3fccc21" Jan 30 14:11:14 crc kubenswrapper[5039]: E0130 14:11:14.397004 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42bd88b0b80ae4393e1f55d71f1461d8c418369c09dac0566163edd0d3fccc21\": container with ID starting with 42bd88b0b80ae4393e1f55d71f1461d8c418369c09dac0566163edd0d3fccc21 not found: ID does not exist" containerID="42bd88b0b80ae4393e1f55d71f1461d8c418369c09dac0566163edd0d3fccc21" Jan 30 14:11:14 crc kubenswrapper[5039]: I0130 14:11:14.397130 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42bd88b0b80ae4393e1f55d71f1461d8c418369c09dac0566163edd0d3fccc21"} err="failed to get container status \"42bd88b0b80ae4393e1f55d71f1461d8c418369c09dac0566163edd0d3fccc21\": rpc error: code = NotFound desc = could not find container \"42bd88b0b80ae4393e1f55d71f1461d8c418369c09dac0566163edd0d3fccc21\": container with ID starting with 42bd88b0b80ae4393e1f55d71f1461d8c418369c09dac0566163edd0d3fccc21 not found: ID does not exist" Jan 30 14:11:14 crc kubenswrapper[5039]: I0130 14:11:14.397226 5039 scope.go:117] "RemoveContainer" containerID="8ec247eed09a6976a4efbfb1356664c792b1ee3763a9fb5bcbe463b5ed906daa" Jan 30 14:11:14 crc kubenswrapper[5039]: E0130 14:11:14.397770 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not 
find container \"8ec247eed09a6976a4efbfb1356664c792b1ee3763a9fb5bcbe463b5ed906daa\": container with ID starting with 8ec247eed09a6976a4efbfb1356664c792b1ee3763a9fb5bcbe463b5ed906daa not found: ID does not exist" containerID="8ec247eed09a6976a4efbfb1356664c792b1ee3763a9fb5bcbe463b5ed906daa" Jan 30 14:11:14 crc kubenswrapper[5039]: I0130 14:11:14.397813 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ec247eed09a6976a4efbfb1356664c792b1ee3763a9fb5bcbe463b5ed906daa"} err="failed to get container status \"8ec247eed09a6976a4efbfb1356664c792b1ee3763a9fb5bcbe463b5ed906daa\": rpc error: code = NotFound desc = could not find container \"8ec247eed09a6976a4efbfb1356664c792b1ee3763a9fb5bcbe463b5ed906daa\": container with ID starting with 8ec247eed09a6976a4efbfb1356664c792b1ee3763a9fb5bcbe463b5ed906daa not found: ID does not exist" Jan 30 14:11:14 crc kubenswrapper[5039]: I0130 14:11:14.397834 5039 scope.go:117] "RemoveContainer" containerID="3c892e5eb1c4a40373738de6e6ffc6114a508d10815b9e0dc18799f7ae0ee7d3" Jan 30 14:11:14 crc kubenswrapper[5039]: E0130 14:11:14.398258 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c892e5eb1c4a40373738de6e6ffc6114a508d10815b9e0dc18799f7ae0ee7d3\": container with ID starting with 3c892e5eb1c4a40373738de6e6ffc6114a508d10815b9e0dc18799f7ae0ee7d3 not found: ID does not exist" containerID="3c892e5eb1c4a40373738de6e6ffc6114a508d10815b9e0dc18799f7ae0ee7d3" Jan 30 14:11:14 crc kubenswrapper[5039]: I0130 14:11:14.398283 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c892e5eb1c4a40373738de6e6ffc6114a508d10815b9e0dc18799f7ae0ee7d3"} err="failed to get container status \"3c892e5eb1c4a40373738de6e6ffc6114a508d10815b9e0dc18799f7ae0ee7d3\": rpc error: code = NotFound desc = could not find container \"3c892e5eb1c4a40373738de6e6ffc6114a508d10815b9e0dc18799f7ae0ee7d3\": container with ID starting with 3c892e5eb1c4a40373738de6e6ffc6114a508d10815b9e0dc18799f7ae0ee7d3 not found: ID does not exist" Jan 30 14:11:14 crc kubenswrapper[5039]: I0130 14:11:14.626977 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69c78615-cbf8-45c1-a9eb-06c248a1e4d4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "69c78615-cbf8-45c1-a9eb-06c248a1e4d4" (UID: "69c78615-cbf8-45c1-a9eb-06c248a1e4d4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:11:14 crc kubenswrapper[5039]: I0130 14:11:14.680645 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69c78615-cbf8-45c1-a9eb-06c248a1e4d4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:14 crc kubenswrapper[5039]: I0130 14:11:14.685865 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jxlns"] Jan 30 14:11:14 crc kubenswrapper[5039]: I0130 14:11:14.691869 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jxlns"] Jan 30 14:11:16 crc kubenswrapper[5039]: I0130 14:11:16.117740 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69c78615-cbf8-45c1-a9eb-06c248a1e4d4" path="/var/lib/kubelet/pods/69c78615-cbf8-45c1-a9eb-06c248a1e4d4/volumes" Jan 30 14:11:24 crc kubenswrapper[5039]: I0130 14:11:24.094189 5039 scope.go:117] "RemoveContainer" containerID="bf7983be0b75bee401cbc263ace4f19bafb888e5b437e6a6c39bbb288eb42c44" Jan 30 14:11:24 crc kubenswrapper[5039]: E0130 14:11:24.094902 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:11:36 crc kubenswrapper[5039]: I0130 14:11:36.097625 5039 scope.go:117] "RemoveContainer" containerID="bf7983be0b75bee401cbc263ace4f19bafb888e5b437e6a6c39bbb288eb42c44" Jan 30 14:11:36 crc kubenswrapper[5039]: E0130 14:11:36.098410 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:11:50 crc kubenswrapper[5039]: I0130 14:11:50.093071 5039 scope.go:117] "RemoveContainer" containerID="bf7983be0b75bee401cbc263ace4f19bafb888e5b437e6a6c39bbb288eb42c44" Jan 30 14:11:50 crc kubenswrapper[5039]: E0130 14:11:50.093872 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:12:05 crc kubenswrapper[5039]: I0130 14:12:05.094197 5039 scope.go:117] "RemoveContainer" containerID="bf7983be0b75bee401cbc263ace4f19bafb888e5b437e6a6c39bbb288eb42c44" Jan 30 14:12:05 crc kubenswrapper[5039]: E0130 14:12:05.094975 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:12:17 crc kubenswrapper[5039]: I0130 14:12:17.093421 5039 scope.go:117] "RemoveContainer" containerID="bf7983be0b75bee401cbc263ace4f19bafb888e5b437e6a6c39bbb288eb42c44" Jan 30 14:12:17 crc kubenswrapper[5039]: E0130 14:12:17.094365 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:12:30 crc kubenswrapper[5039]: I0130 14:12:30.094309 5039 scope.go:117] "RemoveContainer" containerID="bf7983be0b75bee401cbc263ace4f19bafb888e5b437e6a6c39bbb288eb42c44" Jan 30 14:12:30 crc kubenswrapper[5039]: E0130 14:12:30.095005 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:12:42 crc kubenswrapper[5039]: I0130 14:12:42.094940 5039 scope.go:117] "RemoveContainer" containerID="bf7983be0b75bee401cbc263ace4f19bafb888e5b437e6a6c39bbb288eb42c44" Jan 30 14:12:42 crc kubenswrapper[5039]: I0130 14:12:42.945393 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerStarted","Data":"3f4940c6978de4551eaa5af0b2957f9bb283f7cf21ef503f398eabfbd3dad469"} Jan 30 14:13:44 crc kubenswrapper[5039]: I0130 14:13:44.785703 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wl5lr"] Jan 30 14:13:44 crc kubenswrapper[5039]: E0130 14:13:44.786765 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69c78615-cbf8-45c1-a9eb-06c248a1e4d4" containerName="extract-content" Jan 30 14:13:44 crc kubenswrapper[5039]: I0130 14:13:44.786777 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="69c78615-cbf8-45c1-a9eb-06c248a1e4d4" containerName="extract-content" Jan 30 14:13:44 crc kubenswrapper[5039]: E0130 14:13:44.786795 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69c78615-cbf8-45c1-a9eb-06c248a1e4d4" containerName="registry-server" Jan 30 14:13:44 crc kubenswrapper[5039]: I0130 14:13:44.786800 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="69c78615-cbf8-45c1-a9eb-06c248a1e4d4" containerName="registry-server" Jan 30 14:13:44 crc kubenswrapper[5039]: E0130 14:13:44.786815 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69c78615-cbf8-45c1-a9eb-06c248a1e4d4" containerName="extract-utilities" Jan 30 14:13:44 crc kubenswrapper[5039]: I0130 14:13:44.786822 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="69c78615-cbf8-45c1-a9eb-06c248a1e4d4" containerName="extract-utilities" Jan 30 14:13:44 crc kubenswrapper[5039]: I0130 14:13:44.786943 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="69c78615-cbf8-45c1-a9eb-06c248a1e4d4" containerName="registry-server" Jan 30 
14:13:44 crc kubenswrapper[5039]: I0130 14:13:44.789490 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wl5lr" Jan 30 14:13:44 crc kubenswrapper[5039]: I0130 14:13:44.798982 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wl5lr"] Jan 30 14:13:44 crc kubenswrapper[5039]: I0130 14:13:44.821555 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d546a6e-abe3-4799-a9d9-6b362490f31f-catalog-content\") pod \"certified-operators-wl5lr\" (UID: \"4d546a6e-abe3-4799-a9d9-6b362490f31f\") " pod="openshift-marketplace/certified-operators-wl5lr" Jan 30 14:13:44 crc kubenswrapper[5039]: I0130 14:13:44.821685 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb5jl\" (UniqueName: \"kubernetes.io/projected/4d546a6e-abe3-4799-a9d9-6b362490f31f-kube-api-access-hb5jl\") pod \"certified-operators-wl5lr\" (UID: \"4d546a6e-abe3-4799-a9d9-6b362490f31f\") " pod="openshift-marketplace/certified-operators-wl5lr" Jan 30 14:13:44 crc kubenswrapper[5039]: I0130 14:13:44.821723 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d546a6e-abe3-4799-a9d9-6b362490f31f-utilities\") pod \"certified-operators-wl5lr\" (UID: \"4d546a6e-abe3-4799-a9d9-6b362490f31f\") " pod="openshift-marketplace/certified-operators-wl5lr" Jan 30 14:13:44 crc kubenswrapper[5039]: I0130 14:13:44.922593 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d546a6e-abe3-4799-a9d9-6b362490f31f-catalog-content\") pod \"certified-operators-wl5lr\" (UID: \"4d546a6e-abe3-4799-a9d9-6b362490f31f\") " pod="openshift-marketplace/certified-operators-wl5lr" Jan 30 14:13:44 crc kubenswrapper[5039]: I0130 14:13:44.922670 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hb5jl\" (UniqueName: \"kubernetes.io/projected/4d546a6e-abe3-4799-a9d9-6b362490f31f-kube-api-access-hb5jl\") pod \"certified-operators-wl5lr\" (UID: \"4d546a6e-abe3-4799-a9d9-6b362490f31f\") " pod="openshift-marketplace/certified-operators-wl5lr" Jan 30 14:13:44 crc kubenswrapper[5039]: I0130 14:13:44.922699 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d546a6e-abe3-4799-a9d9-6b362490f31f-utilities\") pod \"certified-operators-wl5lr\" (UID: \"4d546a6e-abe3-4799-a9d9-6b362490f31f\") " pod="openshift-marketplace/certified-operators-wl5lr" Jan 30 14:13:44 crc kubenswrapper[5039]: I0130 14:13:44.923187 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d546a6e-abe3-4799-a9d9-6b362490f31f-catalog-content\") pod \"certified-operators-wl5lr\" (UID: \"4d546a6e-abe3-4799-a9d9-6b362490f31f\") " pod="openshift-marketplace/certified-operators-wl5lr" Jan 30 14:13:44 crc kubenswrapper[5039]: I0130 14:13:44.923208 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d546a6e-abe3-4799-a9d9-6b362490f31f-utilities\") pod \"certified-operators-wl5lr\" (UID: \"4d546a6e-abe3-4799-a9d9-6b362490f31f\") " 
pod="openshift-marketplace/certified-operators-wl5lr" Jan 30 14:13:44 crc kubenswrapper[5039]: I0130 14:13:44.944593 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hb5jl\" (UniqueName: \"kubernetes.io/projected/4d546a6e-abe3-4799-a9d9-6b362490f31f-kube-api-access-hb5jl\") pod \"certified-operators-wl5lr\" (UID: \"4d546a6e-abe3-4799-a9d9-6b362490f31f\") " pod="openshift-marketplace/certified-operators-wl5lr" Jan 30 14:13:45 crc kubenswrapper[5039]: I0130 14:13:45.114963 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wl5lr" Jan 30 14:13:45 crc kubenswrapper[5039]: I0130 14:13:45.560857 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wl5lr"] Jan 30 14:13:46 crc kubenswrapper[5039]: I0130 14:13:46.394871 5039 generic.go:334] "Generic (PLEG): container finished" podID="4d546a6e-abe3-4799-a9d9-6b362490f31f" containerID="bf2126da8f1e5600821e4a793b0a2b6f1e176bd57cd45d2d10db2b0246f7a860" exitCode=0 Jan 30 14:13:46 crc kubenswrapper[5039]: I0130 14:13:46.394929 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wl5lr" event={"ID":"4d546a6e-abe3-4799-a9d9-6b362490f31f","Type":"ContainerDied","Data":"bf2126da8f1e5600821e4a793b0a2b6f1e176bd57cd45d2d10db2b0246f7a860"} Jan 30 14:13:46 crc kubenswrapper[5039]: I0130 14:13:46.395168 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wl5lr" event={"ID":"4d546a6e-abe3-4799-a9d9-6b362490f31f","Type":"ContainerStarted","Data":"6e3cde589ccda3e780c7f09fd5de8eac3d8a0280172441088e9ebc1a9b384744"} Jan 30 14:13:48 crc kubenswrapper[5039]: I0130 14:13:48.411762 5039 generic.go:334] "Generic (PLEG): container finished" podID="4d546a6e-abe3-4799-a9d9-6b362490f31f" containerID="16ee8efa561de74112d6a8dae7ef986096fdaee752db8d836d5ea2d2cecaf92c" exitCode=0 Jan 30 14:13:48 crc kubenswrapper[5039]: I0130 14:13:48.411873 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wl5lr" event={"ID":"4d546a6e-abe3-4799-a9d9-6b362490f31f","Type":"ContainerDied","Data":"16ee8efa561de74112d6a8dae7ef986096fdaee752db8d836d5ea2d2cecaf92c"} Jan 30 14:13:49 crc kubenswrapper[5039]: I0130 14:13:49.422372 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wl5lr" event={"ID":"4d546a6e-abe3-4799-a9d9-6b362490f31f","Type":"ContainerStarted","Data":"1b62f0f986f42d58962d6c0308857a484cb557430ff6ae56c339156c09cd24e4"} Jan 30 14:13:49 crc kubenswrapper[5039]: I0130 14:13:49.444246 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wl5lr" podStartSLOduration=3.045641296 podStartE2EDuration="5.444228416s" podCreationTimestamp="2026-01-30 14:13:44 +0000 UTC" firstStartedPulling="2026-01-30 14:13:46.397286304 +0000 UTC m=+4191.057967531" lastFinishedPulling="2026-01-30 14:13:48.795873424 +0000 UTC m=+4193.456554651" observedRunningTime="2026-01-30 14:13:49.438680876 +0000 UTC m=+4194.099362103" watchObservedRunningTime="2026-01-30 14:13:49.444228416 +0000 UTC m=+4194.104909643" Jan 30 14:13:55 crc kubenswrapper[5039]: I0130 14:13:55.116270 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wl5lr" Jan 30 14:13:55 crc kubenswrapper[5039]: I0130 14:13:55.116905 5039 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/certified-operators-wl5lr" Jan 30 14:13:55 crc kubenswrapper[5039]: I0130 14:13:55.168910 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wl5lr" Jan 30 14:13:55 crc kubenswrapper[5039]: I0130 14:13:55.509954 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wl5lr" Jan 30 14:13:55 crc kubenswrapper[5039]: I0130 14:13:55.561077 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wl5lr"] Jan 30 14:13:57 crc kubenswrapper[5039]: I0130 14:13:57.484586 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wl5lr" podUID="4d546a6e-abe3-4799-a9d9-6b362490f31f" containerName="registry-server" containerID="cri-o://1b62f0f986f42d58962d6c0308857a484cb557430ff6ae56c339156c09cd24e4" gracePeriod=2 Jan 30 14:13:57 crc kubenswrapper[5039]: I0130 14:13:57.855821 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wl5lr" Jan 30 14:13:58 crc kubenswrapper[5039]: I0130 14:13:58.007797 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hb5jl\" (UniqueName: \"kubernetes.io/projected/4d546a6e-abe3-4799-a9d9-6b362490f31f-kube-api-access-hb5jl\") pod \"4d546a6e-abe3-4799-a9d9-6b362490f31f\" (UID: \"4d546a6e-abe3-4799-a9d9-6b362490f31f\") " Jan 30 14:13:58 crc kubenswrapper[5039]: I0130 14:13:58.007922 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d546a6e-abe3-4799-a9d9-6b362490f31f-utilities\") pod \"4d546a6e-abe3-4799-a9d9-6b362490f31f\" (UID: \"4d546a6e-abe3-4799-a9d9-6b362490f31f\") " Jan 30 14:13:58 crc kubenswrapper[5039]: I0130 14:13:58.008035 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d546a6e-abe3-4799-a9d9-6b362490f31f-catalog-content\") pod \"4d546a6e-abe3-4799-a9d9-6b362490f31f\" (UID: \"4d546a6e-abe3-4799-a9d9-6b362490f31f\") " Jan 30 14:13:58 crc kubenswrapper[5039]: I0130 14:13:58.008736 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d546a6e-abe3-4799-a9d9-6b362490f31f-utilities" (OuterVolumeSpecName: "utilities") pod "4d546a6e-abe3-4799-a9d9-6b362490f31f" (UID: "4d546a6e-abe3-4799-a9d9-6b362490f31f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:13:58 crc kubenswrapper[5039]: I0130 14:13:58.013356 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d546a6e-abe3-4799-a9d9-6b362490f31f-kube-api-access-hb5jl" (OuterVolumeSpecName: "kube-api-access-hb5jl") pod "4d546a6e-abe3-4799-a9d9-6b362490f31f" (UID: "4d546a6e-abe3-4799-a9d9-6b362490f31f"). InnerVolumeSpecName "kube-api-access-hb5jl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:13:58 crc kubenswrapper[5039]: I0130 14:13:58.068447 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d546a6e-abe3-4799-a9d9-6b362490f31f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4d546a6e-abe3-4799-a9d9-6b362490f31f" (UID: "4d546a6e-abe3-4799-a9d9-6b362490f31f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:13:58 crc kubenswrapper[5039]: I0130 14:13:58.110258 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hb5jl\" (UniqueName: \"kubernetes.io/projected/4d546a6e-abe3-4799-a9d9-6b362490f31f-kube-api-access-hb5jl\") on node \"crc\" DevicePath \"\"" Jan 30 14:13:58 crc kubenswrapper[5039]: I0130 14:13:58.110300 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d546a6e-abe3-4799-a9d9-6b362490f31f-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 14:13:58 crc kubenswrapper[5039]: I0130 14:13:58.110313 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d546a6e-abe3-4799-a9d9-6b362490f31f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 14:13:58 crc kubenswrapper[5039]: I0130 14:13:58.494606 5039 generic.go:334] "Generic (PLEG): container finished" podID="4d546a6e-abe3-4799-a9d9-6b362490f31f" containerID="1b62f0f986f42d58962d6c0308857a484cb557430ff6ae56c339156c09cd24e4" exitCode=0 Jan 30 14:13:58 crc kubenswrapper[5039]: I0130 14:13:58.494644 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wl5lr" event={"ID":"4d546a6e-abe3-4799-a9d9-6b362490f31f","Type":"ContainerDied","Data":"1b62f0f986f42d58962d6c0308857a484cb557430ff6ae56c339156c09cd24e4"} Jan 30 14:13:58 crc kubenswrapper[5039]: I0130 14:13:58.495629 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wl5lr" event={"ID":"4d546a6e-abe3-4799-a9d9-6b362490f31f","Type":"ContainerDied","Data":"6e3cde589ccda3e780c7f09fd5de8eac3d8a0280172441088e9ebc1a9b384744"} Jan 30 14:13:58 crc kubenswrapper[5039]: I0130 14:13:58.494684 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wl5lr" Jan 30 14:13:58 crc kubenswrapper[5039]: I0130 14:13:58.495710 5039 scope.go:117] "RemoveContainer" containerID="1b62f0f986f42d58962d6c0308857a484cb557430ff6ae56c339156c09cd24e4" Jan 30 14:13:58 crc kubenswrapper[5039]: I0130 14:13:58.522165 5039 scope.go:117] "RemoveContainer" containerID="16ee8efa561de74112d6a8dae7ef986096fdaee752db8d836d5ea2d2cecaf92c" Jan 30 14:13:58 crc kubenswrapper[5039]: I0130 14:13:58.525475 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wl5lr"] Jan 30 14:13:58 crc kubenswrapper[5039]: I0130 14:13:58.535698 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wl5lr"] Jan 30 14:13:58 crc kubenswrapper[5039]: I0130 14:13:58.900185 5039 scope.go:117] "RemoveContainer" containerID="bf2126da8f1e5600821e4a793b0a2b6f1e176bd57cd45d2d10db2b0246f7a860" Jan 30 14:13:58 crc kubenswrapper[5039]: I0130 14:13:58.915965 5039 scope.go:117] "RemoveContainer" containerID="1b62f0f986f42d58962d6c0308857a484cb557430ff6ae56c339156c09cd24e4" Jan 30 14:13:58 crc kubenswrapper[5039]: E0130 14:13:58.916562 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b62f0f986f42d58962d6c0308857a484cb557430ff6ae56c339156c09cd24e4\": container with ID starting with 1b62f0f986f42d58962d6c0308857a484cb557430ff6ae56c339156c09cd24e4 not found: ID does not exist" containerID="1b62f0f986f42d58962d6c0308857a484cb557430ff6ae56c339156c09cd24e4" Jan 30 14:13:58 crc kubenswrapper[5039]: I0130 14:13:58.916636 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b62f0f986f42d58962d6c0308857a484cb557430ff6ae56c339156c09cd24e4"} err="failed to get container status \"1b62f0f986f42d58962d6c0308857a484cb557430ff6ae56c339156c09cd24e4\": rpc error: code = NotFound desc = could not find container \"1b62f0f986f42d58962d6c0308857a484cb557430ff6ae56c339156c09cd24e4\": container with ID starting with 1b62f0f986f42d58962d6c0308857a484cb557430ff6ae56c339156c09cd24e4 not found: ID does not exist" Jan 30 14:13:58 crc kubenswrapper[5039]: I0130 14:13:58.916688 5039 scope.go:117] "RemoveContainer" containerID="16ee8efa561de74112d6a8dae7ef986096fdaee752db8d836d5ea2d2cecaf92c" Jan 30 14:13:58 crc kubenswrapper[5039]: E0130 14:13:58.917172 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16ee8efa561de74112d6a8dae7ef986096fdaee752db8d836d5ea2d2cecaf92c\": container with ID starting with 16ee8efa561de74112d6a8dae7ef986096fdaee752db8d836d5ea2d2cecaf92c not found: ID does not exist" containerID="16ee8efa561de74112d6a8dae7ef986096fdaee752db8d836d5ea2d2cecaf92c" Jan 30 14:13:58 crc kubenswrapper[5039]: I0130 14:13:58.917237 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16ee8efa561de74112d6a8dae7ef986096fdaee752db8d836d5ea2d2cecaf92c"} err="failed to get container status \"16ee8efa561de74112d6a8dae7ef986096fdaee752db8d836d5ea2d2cecaf92c\": rpc error: code = NotFound desc = could not find container \"16ee8efa561de74112d6a8dae7ef986096fdaee752db8d836d5ea2d2cecaf92c\": container with ID starting with 16ee8efa561de74112d6a8dae7ef986096fdaee752db8d836d5ea2d2cecaf92c not found: ID does not exist" Jan 30 14:13:58 crc kubenswrapper[5039]: I0130 14:13:58.917257 5039 scope.go:117] "RemoveContainer" 
containerID="bf2126da8f1e5600821e4a793b0a2b6f1e176bd57cd45d2d10db2b0246f7a860" Jan 30 14:13:58 crc kubenswrapper[5039]: E0130 14:13:58.917546 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf2126da8f1e5600821e4a793b0a2b6f1e176bd57cd45d2d10db2b0246f7a860\": container with ID starting with bf2126da8f1e5600821e4a793b0a2b6f1e176bd57cd45d2d10db2b0246f7a860 not found: ID does not exist" containerID="bf2126da8f1e5600821e4a793b0a2b6f1e176bd57cd45d2d10db2b0246f7a860" Jan 30 14:13:58 crc kubenswrapper[5039]: I0130 14:13:58.917600 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf2126da8f1e5600821e4a793b0a2b6f1e176bd57cd45d2d10db2b0246f7a860"} err="failed to get container status \"bf2126da8f1e5600821e4a793b0a2b6f1e176bd57cd45d2d10db2b0246f7a860\": rpc error: code = NotFound desc = could not find container \"bf2126da8f1e5600821e4a793b0a2b6f1e176bd57cd45d2d10db2b0246f7a860\": container with ID starting with bf2126da8f1e5600821e4a793b0a2b6f1e176bd57cd45d2d10db2b0246f7a860 not found: ID does not exist" Jan 30 14:14:00 crc kubenswrapper[5039]: I0130 14:14:00.107884 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d546a6e-abe3-4799-a9d9-6b362490f31f" path="/var/lib/kubelet/pods/4d546a6e-abe3-4799-a9d9-6b362490f31f/volumes" Jan 30 14:14:01 crc kubenswrapper[5039]: I0130 14:14:01.818280 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-j2qvl"] Jan 30 14:14:01 crc kubenswrapper[5039]: E0130 14:14:01.819268 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d546a6e-abe3-4799-a9d9-6b362490f31f" containerName="registry-server" Jan 30 14:14:01 crc kubenswrapper[5039]: I0130 14:14:01.819318 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d546a6e-abe3-4799-a9d9-6b362490f31f" containerName="registry-server" Jan 30 14:14:01 crc kubenswrapper[5039]: E0130 14:14:01.819336 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d546a6e-abe3-4799-a9d9-6b362490f31f" containerName="extract-content" Jan 30 14:14:01 crc kubenswrapper[5039]: I0130 14:14:01.819345 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d546a6e-abe3-4799-a9d9-6b362490f31f" containerName="extract-content" Jan 30 14:14:01 crc kubenswrapper[5039]: E0130 14:14:01.819402 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d546a6e-abe3-4799-a9d9-6b362490f31f" containerName="extract-utilities" Jan 30 14:14:01 crc kubenswrapper[5039]: I0130 14:14:01.819414 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d546a6e-abe3-4799-a9d9-6b362490f31f" containerName="extract-utilities" Jan 30 14:14:01 crc kubenswrapper[5039]: I0130 14:14:01.819608 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d546a6e-abe3-4799-a9d9-6b362490f31f" containerName="registry-server" Jan 30 14:14:01 crc kubenswrapper[5039]: I0130 14:14:01.820826 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j2qvl" Jan 30 14:14:01 crc kubenswrapper[5039]: I0130 14:14:01.837878 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j2qvl"] Jan 30 14:14:01 crc kubenswrapper[5039]: I0130 14:14:01.962533 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303-catalog-content\") pod \"redhat-marketplace-j2qvl\" (UID: \"ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303\") " pod="openshift-marketplace/redhat-marketplace-j2qvl" Jan 30 14:14:01 crc kubenswrapper[5039]: I0130 14:14:01.963174 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303-utilities\") pod \"redhat-marketplace-j2qvl\" (UID: \"ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303\") " pod="openshift-marketplace/redhat-marketplace-j2qvl" Jan 30 14:14:01 crc kubenswrapper[5039]: I0130 14:14:01.963332 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf6ks\" (UniqueName: \"kubernetes.io/projected/ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303-kube-api-access-cf6ks\") pod \"redhat-marketplace-j2qvl\" (UID: \"ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303\") " pod="openshift-marketplace/redhat-marketplace-j2qvl" Jan 30 14:14:02 crc kubenswrapper[5039]: I0130 14:14:02.065494 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303-catalog-content\") pod \"redhat-marketplace-j2qvl\" (UID: \"ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303\") " pod="openshift-marketplace/redhat-marketplace-j2qvl" Jan 30 14:14:02 crc kubenswrapper[5039]: I0130 14:14:02.065567 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303-utilities\") pod \"redhat-marketplace-j2qvl\" (UID: \"ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303\") " pod="openshift-marketplace/redhat-marketplace-j2qvl" Jan 30 14:14:02 crc kubenswrapper[5039]: I0130 14:14:02.065627 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cf6ks\" (UniqueName: \"kubernetes.io/projected/ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303-kube-api-access-cf6ks\") pod \"redhat-marketplace-j2qvl\" (UID: \"ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303\") " pod="openshift-marketplace/redhat-marketplace-j2qvl" Jan 30 14:14:02 crc kubenswrapper[5039]: I0130 14:14:02.066240 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303-catalog-content\") pod \"redhat-marketplace-j2qvl\" (UID: \"ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303\") " pod="openshift-marketplace/redhat-marketplace-j2qvl" Jan 30 14:14:02 crc kubenswrapper[5039]: I0130 14:14:02.066290 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303-utilities\") pod \"redhat-marketplace-j2qvl\" (UID: \"ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303\") " pod="openshift-marketplace/redhat-marketplace-j2qvl" Jan 30 14:14:02 crc kubenswrapper[5039]: I0130 14:14:02.093255 5039 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-cf6ks\" (UniqueName: \"kubernetes.io/projected/ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303-kube-api-access-cf6ks\") pod \"redhat-marketplace-j2qvl\" (UID: \"ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303\") " pod="openshift-marketplace/redhat-marketplace-j2qvl" Jan 30 14:14:02 crc kubenswrapper[5039]: I0130 14:14:02.147629 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j2qvl" Jan 30 14:14:02 crc kubenswrapper[5039]: I0130 14:14:02.628340 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j2qvl"] Jan 30 14:14:03 crc kubenswrapper[5039]: I0130 14:14:03.533914 5039 generic.go:334] "Generic (PLEG): container finished" podID="ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303" containerID="d0defdeab182e4fbd7790b725d89ca2c2426a25ec7ff81f45785abbe7bf5d561" exitCode=0 Jan 30 14:14:03 crc kubenswrapper[5039]: I0130 14:14:03.533960 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j2qvl" event={"ID":"ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303","Type":"ContainerDied","Data":"d0defdeab182e4fbd7790b725d89ca2c2426a25ec7ff81f45785abbe7bf5d561"} Jan 30 14:14:03 crc kubenswrapper[5039]: I0130 14:14:03.534004 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j2qvl" event={"ID":"ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303","Type":"ContainerStarted","Data":"96de905b47c06d2deae0f64d3a660ed1187032fda799126b46c4e56073c25310"} Jan 30 14:14:04 crc kubenswrapper[5039]: I0130 14:14:04.542542 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j2qvl" event={"ID":"ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303","Type":"ContainerStarted","Data":"787fc1add3f60ec31cf87aa858ebd98c10da5b5ef233aa37c61b0d878d7c8b0d"} Jan 30 14:14:05 crc kubenswrapper[5039]: I0130 14:14:05.549877 5039 generic.go:334] "Generic (PLEG): container finished" podID="ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303" containerID="787fc1add3f60ec31cf87aa858ebd98c10da5b5ef233aa37c61b0d878d7c8b0d" exitCode=0 Jan 30 14:14:05 crc kubenswrapper[5039]: I0130 14:14:05.549942 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j2qvl" event={"ID":"ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303","Type":"ContainerDied","Data":"787fc1add3f60ec31cf87aa858ebd98c10da5b5ef233aa37c61b0d878d7c8b0d"} Jan 30 14:14:05 crc kubenswrapper[5039]: I0130 14:14:05.813952 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jrr26"] Jan 30 14:14:05 crc kubenswrapper[5039]: I0130 14:14:05.815882 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jrr26" Jan 30 14:14:05 crc kubenswrapper[5039]: I0130 14:14:05.825763 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jrr26"] Jan 30 14:14:05 crc kubenswrapper[5039]: I0130 14:14:05.925307 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/715431d9-996c-4db9-9bc0-f7c5ecc04d89-catalog-content\") pod \"community-operators-jrr26\" (UID: \"715431d9-996c-4db9-9bc0-f7c5ecc04d89\") " pod="openshift-marketplace/community-operators-jrr26" Jan 30 14:14:05 crc kubenswrapper[5039]: I0130 14:14:05.925383 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/715431d9-996c-4db9-9bc0-f7c5ecc04d89-utilities\") pod \"community-operators-jrr26\" (UID: \"715431d9-996c-4db9-9bc0-f7c5ecc04d89\") " pod="openshift-marketplace/community-operators-jrr26" Jan 30 14:14:05 crc kubenswrapper[5039]: I0130 14:14:05.925405 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2hsp\" (UniqueName: \"kubernetes.io/projected/715431d9-996c-4db9-9bc0-f7c5ecc04d89-kube-api-access-m2hsp\") pod \"community-operators-jrr26\" (UID: \"715431d9-996c-4db9-9bc0-f7c5ecc04d89\") " pod="openshift-marketplace/community-operators-jrr26" Jan 30 14:14:06 crc kubenswrapper[5039]: I0130 14:14:06.026722 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/715431d9-996c-4db9-9bc0-f7c5ecc04d89-catalog-content\") pod \"community-operators-jrr26\" (UID: \"715431d9-996c-4db9-9bc0-f7c5ecc04d89\") " pod="openshift-marketplace/community-operators-jrr26" Jan 30 14:14:06 crc kubenswrapper[5039]: I0130 14:14:06.027220 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/715431d9-996c-4db9-9bc0-f7c5ecc04d89-utilities\") pod \"community-operators-jrr26\" (UID: \"715431d9-996c-4db9-9bc0-f7c5ecc04d89\") " pod="openshift-marketplace/community-operators-jrr26" Jan 30 14:14:06 crc kubenswrapper[5039]: I0130 14:14:06.027372 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2hsp\" (UniqueName: \"kubernetes.io/projected/715431d9-996c-4db9-9bc0-f7c5ecc04d89-kube-api-access-m2hsp\") pod \"community-operators-jrr26\" (UID: \"715431d9-996c-4db9-9bc0-f7c5ecc04d89\") " pod="openshift-marketplace/community-operators-jrr26" Jan 30 14:14:06 crc kubenswrapper[5039]: I0130 14:14:06.027408 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/715431d9-996c-4db9-9bc0-f7c5ecc04d89-catalog-content\") pod \"community-operators-jrr26\" (UID: \"715431d9-996c-4db9-9bc0-f7c5ecc04d89\") " pod="openshift-marketplace/community-operators-jrr26" Jan 30 14:14:06 crc kubenswrapper[5039]: I0130 14:14:06.027748 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/715431d9-996c-4db9-9bc0-f7c5ecc04d89-utilities\") pod \"community-operators-jrr26\" (UID: \"715431d9-996c-4db9-9bc0-f7c5ecc04d89\") " pod="openshift-marketplace/community-operators-jrr26" Jan 30 14:14:06 crc kubenswrapper[5039]: I0130 14:14:06.047965 5039 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-m2hsp\" (UniqueName: \"kubernetes.io/projected/715431d9-996c-4db9-9bc0-f7c5ecc04d89-kube-api-access-m2hsp\") pod \"community-operators-jrr26\" (UID: \"715431d9-996c-4db9-9bc0-f7c5ecc04d89\") " pod="openshift-marketplace/community-operators-jrr26" Jan 30 14:14:06 crc kubenswrapper[5039]: I0130 14:14:06.136096 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jrr26" Jan 30 14:14:06 crc kubenswrapper[5039]: I0130 14:14:06.558624 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j2qvl" event={"ID":"ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303","Type":"ContainerStarted","Data":"dba500a0f8d96b9e5663a83d76c48ba25ac1f3298b4f661d1e8650adb25113bd"} Jan 30 14:14:06 crc kubenswrapper[5039]: I0130 14:14:06.587605 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-j2qvl" podStartSLOduration=2.879652386 podStartE2EDuration="5.587575801s" podCreationTimestamp="2026-01-30 14:14:01 +0000 UTC" firstStartedPulling="2026-01-30 14:14:03.536059407 +0000 UTC m=+4208.196740634" lastFinishedPulling="2026-01-30 14:14:06.243982822 +0000 UTC m=+4210.904664049" observedRunningTime="2026-01-30 14:14:06.578891267 +0000 UTC m=+4211.239572504" watchObservedRunningTime="2026-01-30 14:14:06.587575801 +0000 UTC m=+4211.248257028" Jan 30 14:14:06 crc kubenswrapper[5039]: I0130 14:14:06.721572 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jrr26"] Jan 30 14:14:06 crc kubenswrapper[5039]: W0130 14:14:06.725237 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod715431d9_996c_4db9_9bc0_f7c5ecc04d89.slice/crio-87b2b7eefd62f73b29b5d081deef7abeb14d8404cdf7cbac8e0fe3a22f6a10ef WatchSource:0}: Error finding container 87b2b7eefd62f73b29b5d081deef7abeb14d8404cdf7cbac8e0fe3a22f6a10ef: Status 404 returned error can't find the container with id 87b2b7eefd62f73b29b5d081deef7abeb14d8404cdf7cbac8e0fe3a22f6a10ef Jan 30 14:14:07 crc kubenswrapper[5039]: I0130 14:14:07.566593 5039 generic.go:334] "Generic (PLEG): container finished" podID="715431d9-996c-4db9-9bc0-f7c5ecc04d89" containerID="acf09426ed3a47ebad89414b386ab808f94a9d067b7330119b9bd7c9ea36403e" exitCode=0 Jan 30 14:14:07 crc kubenswrapper[5039]: I0130 14:14:07.566671 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jrr26" event={"ID":"715431d9-996c-4db9-9bc0-f7c5ecc04d89","Type":"ContainerDied","Data":"acf09426ed3a47ebad89414b386ab808f94a9d067b7330119b9bd7c9ea36403e"} Jan 30 14:14:07 crc kubenswrapper[5039]: I0130 14:14:07.567098 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jrr26" event={"ID":"715431d9-996c-4db9-9bc0-f7c5ecc04d89","Type":"ContainerStarted","Data":"87b2b7eefd62f73b29b5d081deef7abeb14d8404cdf7cbac8e0fe3a22f6a10ef"} Jan 30 14:14:09 crc kubenswrapper[5039]: I0130 14:14:09.583437 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jrr26" event={"ID":"715431d9-996c-4db9-9bc0-f7c5ecc04d89","Type":"ContainerStarted","Data":"965fddce97fd8aed101056885b6e523113a5953cf1b3a41156abf19209b78c0f"} Jan 30 14:14:10 crc kubenswrapper[5039]: I0130 14:14:10.593076 5039 generic.go:334] "Generic (PLEG): container finished" 
podID="715431d9-996c-4db9-9bc0-f7c5ecc04d89" containerID="965fddce97fd8aed101056885b6e523113a5953cf1b3a41156abf19209b78c0f" exitCode=0 Jan 30 14:14:10 crc kubenswrapper[5039]: I0130 14:14:10.593125 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jrr26" event={"ID":"715431d9-996c-4db9-9bc0-f7c5ecc04d89","Type":"ContainerDied","Data":"965fddce97fd8aed101056885b6e523113a5953cf1b3a41156abf19209b78c0f"} Jan 30 14:14:11 crc kubenswrapper[5039]: I0130 14:14:11.601724 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jrr26" event={"ID":"715431d9-996c-4db9-9bc0-f7c5ecc04d89","Type":"ContainerStarted","Data":"ef2d2d94f701cec38f78d23a77173503db010302b3c11b65b6589b4bc92db130"} Jan 30 14:14:11 crc kubenswrapper[5039]: I0130 14:14:11.636397 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jrr26" podStartSLOduration=2.884354085 podStartE2EDuration="6.636369817s" podCreationTimestamp="2026-01-30 14:14:05 +0000 UTC" firstStartedPulling="2026-01-30 14:14:07.568034351 +0000 UTC m=+4212.228715578" lastFinishedPulling="2026-01-30 14:14:11.320050073 +0000 UTC m=+4215.980731310" observedRunningTime="2026-01-30 14:14:11.62015428 +0000 UTC m=+4216.280835517" watchObservedRunningTime="2026-01-30 14:14:11.636369817 +0000 UTC m=+4216.297051044" Jan 30 14:14:12 crc kubenswrapper[5039]: I0130 14:14:12.148557 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-j2qvl" Jan 30 14:14:12 crc kubenswrapper[5039]: I0130 14:14:12.148610 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-j2qvl" Jan 30 14:14:12 crc kubenswrapper[5039]: I0130 14:14:12.197672 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-j2qvl" Jan 30 14:14:12 crc kubenswrapper[5039]: I0130 14:14:12.650364 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-j2qvl" Jan 30 14:14:16 crc kubenswrapper[5039]: I0130 14:14:16.136046 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jrr26" Jan 30 14:14:16 crc kubenswrapper[5039]: I0130 14:14:16.136255 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jrr26" Jan 30 14:14:16 crc kubenswrapper[5039]: I0130 14:14:16.181750 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jrr26" Jan 30 14:14:16 crc kubenswrapper[5039]: I0130 14:14:16.676711 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jrr26" Jan 30 14:14:17 crc kubenswrapper[5039]: I0130 14:14:17.005635 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j2qvl"] Jan 30 14:14:17 crc kubenswrapper[5039]: I0130 14:14:17.005971 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-j2qvl" podUID="ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303" containerName="registry-server" containerID="cri-o://dba500a0f8d96b9e5663a83d76c48ba25ac1f3298b4f661d1e8650adb25113bd" gracePeriod=2 Jan 30 14:14:17 crc kubenswrapper[5039]: I0130 14:14:17.647045 5039 
generic.go:334] "Generic (PLEG): container finished" podID="ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303" containerID="dba500a0f8d96b9e5663a83d76c48ba25ac1f3298b4f661d1e8650adb25113bd" exitCode=0 Jan 30 14:14:17 crc kubenswrapper[5039]: I0130 14:14:17.647069 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j2qvl" event={"ID":"ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303","Type":"ContainerDied","Data":"dba500a0f8d96b9e5663a83d76c48ba25ac1f3298b4f661d1e8650adb25113bd"} Jan 30 14:14:17 crc kubenswrapper[5039]: I0130 14:14:17.973331 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j2qvl" Jan 30 14:14:18 crc kubenswrapper[5039]: I0130 14:14:18.097191 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303-utilities\") pod \"ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303\" (UID: \"ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303\") " Jan 30 14:14:18 crc kubenswrapper[5039]: I0130 14:14:18.097754 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303-catalog-content\") pod \"ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303\" (UID: \"ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303\") " Jan 30 14:14:18 crc kubenswrapper[5039]: I0130 14:14:18.097806 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cf6ks\" (UniqueName: \"kubernetes.io/projected/ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303-kube-api-access-cf6ks\") pod \"ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303\" (UID: \"ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303\") " Jan 30 14:14:18 crc kubenswrapper[5039]: I0130 14:14:18.099827 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303-utilities" (OuterVolumeSpecName: "utilities") pod "ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303" (UID: "ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:14:18 crc kubenswrapper[5039]: I0130 14:14:18.104558 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303-kube-api-access-cf6ks" (OuterVolumeSpecName: "kube-api-access-cf6ks") pod "ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303" (UID: "ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303"). InnerVolumeSpecName "kube-api-access-cf6ks". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:14:18 crc kubenswrapper[5039]: I0130 14:14:18.121714 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303" (UID: "ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:14:18 crc kubenswrapper[5039]: I0130 14:14:18.199183 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cf6ks\" (UniqueName: \"kubernetes.io/projected/ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303-kube-api-access-cf6ks\") on node \"crc\" DevicePath \"\"" Jan 30 14:14:18 crc kubenswrapper[5039]: I0130 14:14:18.199249 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 14:14:18 crc kubenswrapper[5039]: I0130 14:14:18.199262 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 14:14:18 crc kubenswrapper[5039]: I0130 14:14:18.657593 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j2qvl" Jan 30 14:14:18 crc kubenswrapper[5039]: I0130 14:14:18.657577 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j2qvl" event={"ID":"ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303","Type":"ContainerDied","Data":"96de905b47c06d2deae0f64d3a660ed1187032fda799126b46c4e56073c25310"} Jan 30 14:14:18 crc kubenswrapper[5039]: I0130 14:14:18.657822 5039 scope.go:117] "RemoveContainer" containerID="dba500a0f8d96b9e5663a83d76c48ba25ac1f3298b4f661d1e8650adb25113bd" Jan 30 14:14:18 crc kubenswrapper[5039]: I0130 14:14:18.691291 5039 scope.go:117] "RemoveContainer" containerID="787fc1add3f60ec31cf87aa858ebd98c10da5b5ef233aa37c61b0d878d7c8b0d" Jan 30 14:14:18 crc kubenswrapper[5039]: I0130 14:14:18.699839 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j2qvl"] Jan 30 14:14:18 crc kubenswrapper[5039]: I0130 14:14:18.706254 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-j2qvl"] Jan 30 14:14:18 crc kubenswrapper[5039]: I0130 14:14:18.729738 5039 scope.go:117] "RemoveContainer" containerID="d0defdeab182e4fbd7790b725d89ca2c2426a25ec7ff81f45785abbe7bf5d561" Jan 30 14:14:20 crc kubenswrapper[5039]: I0130 14:14:20.102316 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303" path="/var/lib/kubelet/pods/ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303/volumes" Jan 30 14:14:21 crc kubenswrapper[5039]: I0130 14:14:21.806666 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jrr26"] Jan 30 14:14:21 crc kubenswrapper[5039]: I0130 14:14:21.807289 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jrr26" podUID="715431d9-996c-4db9-9bc0-f7c5ecc04d89" containerName="registry-server" containerID="cri-o://ef2d2d94f701cec38f78d23a77173503db010302b3c11b65b6589b4bc92db130" gracePeriod=2 Jan 30 14:14:22 crc kubenswrapper[5039]: I0130 14:14:22.255768 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jrr26" Jan 30 14:14:22 crc kubenswrapper[5039]: I0130 14:14:22.457970 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/715431d9-996c-4db9-9bc0-f7c5ecc04d89-catalog-content\") pod \"715431d9-996c-4db9-9bc0-f7c5ecc04d89\" (UID: \"715431d9-996c-4db9-9bc0-f7c5ecc04d89\") " Jan 30 14:14:22 crc kubenswrapper[5039]: I0130 14:14:22.458218 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2hsp\" (UniqueName: \"kubernetes.io/projected/715431d9-996c-4db9-9bc0-f7c5ecc04d89-kube-api-access-m2hsp\") pod \"715431d9-996c-4db9-9bc0-f7c5ecc04d89\" (UID: \"715431d9-996c-4db9-9bc0-f7c5ecc04d89\") " Jan 30 14:14:22 crc kubenswrapper[5039]: I0130 14:14:22.458435 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/715431d9-996c-4db9-9bc0-f7c5ecc04d89-utilities\") pod \"715431d9-996c-4db9-9bc0-f7c5ecc04d89\" (UID: \"715431d9-996c-4db9-9bc0-f7c5ecc04d89\") " Jan 30 14:14:22 crc kubenswrapper[5039]: I0130 14:14:22.459662 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/715431d9-996c-4db9-9bc0-f7c5ecc04d89-utilities" (OuterVolumeSpecName: "utilities") pod "715431d9-996c-4db9-9bc0-f7c5ecc04d89" (UID: "715431d9-996c-4db9-9bc0-f7c5ecc04d89"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:14:22 crc kubenswrapper[5039]: I0130 14:14:22.465373 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/715431d9-996c-4db9-9bc0-f7c5ecc04d89-kube-api-access-m2hsp" (OuterVolumeSpecName: "kube-api-access-m2hsp") pod "715431d9-996c-4db9-9bc0-f7c5ecc04d89" (UID: "715431d9-996c-4db9-9bc0-f7c5ecc04d89"). InnerVolumeSpecName "kube-api-access-m2hsp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:14:22 crc kubenswrapper[5039]: I0130 14:14:22.513897 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/715431d9-996c-4db9-9bc0-f7c5ecc04d89-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "715431d9-996c-4db9-9bc0-f7c5ecc04d89" (UID: "715431d9-996c-4db9-9bc0-f7c5ecc04d89"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:14:22 crc kubenswrapper[5039]: I0130 14:14:22.559527 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2hsp\" (UniqueName: \"kubernetes.io/projected/715431d9-996c-4db9-9bc0-f7c5ecc04d89-kube-api-access-m2hsp\") on node \"crc\" DevicePath \"\"" Jan 30 14:14:22 crc kubenswrapper[5039]: I0130 14:14:22.559567 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/715431d9-996c-4db9-9bc0-f7c5ecc04d89-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 14:14:22 crc kubenswrapper[5039]: I0130 14:14:22.559577 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/715431d9-996c-4db9-9bc0-f7c5ecc04d89-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 14:14:22 crc kubenswrapper[5039]: I0130 14:14:22.703765 5039 generic.go:334] "Generic (PLEG): container finished" podID="715431d9-996c-4db9-9bc0-f7c5ecc04d89" containerID="ef2d2d94f701cec38f78d23a77173503db010302b3c11b65b6589b4bc92db130" exitCode=0 Jan 30 14:14:22 crc kubenswrapper[5039]: I0130 14:14:22.703839 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jrr26" Jan 30 14:14:22 crc kubenswrapper[5039]: I0130 14:14:22.703837 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jrr26" event={"ID":"715431d9-996c-4db9-9bc0-f7c5ecc04d89","Type":"ContainerDied","Data":"ef2d2d94f701cec38f78d23a77173503db010302b3c11b65b6589b4bc92db130"} Jan 30 14:14:22 crc kubenswrapper[5039]: I0130 14:14:22.703965 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jrr26" event={"ID":"715431d9-996c-4db9-9bc0-f7c5ecc04d89","Type":"ContainerDied","Data":"87b2b7eefd62f73b29b5d081deef7abeb14d8404cdf7cbac8e0fe3a22f6a10ef"} Jan 30 14:14:22 crc kubenswrapper[5039]: I0130 14:14:22.703999 5039 scope.go:117] "RemoveContainer" containerID="ef2d2d94f701cec38f78d23a77173503db010302b3c11b65b6589b4bc92db130" Jan 30 14:14:22 crc kubenswrapper[5039]: I0130 14:14:22.732969 5039 scope.go:117] "RemoveContainer" containerID="965fddce97fd8aed101056885b6e523113a5953cf1b3a41156abf19209b78c0f" Jan 30 14:14:22 crc kubenswrapper[5039]: I0130 14:14:22.748132 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jrr26"] Jan 30 14:14:22 crc kubenswrapper[5039]: I0130 14:14:22.756143 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jrr26"] Jan 30 14:14:22 crc kubenswrapper[5039]: I0130 14:14:22.764184 5039 scope.go:117] "RemoveContainer" containerID="acf09426ed3a47ebad89414b386ab808f94a9d067b7330119b9bd7c9ea36403e" Jan 30 14:14:22 crc kubenswrapper[5039]: I0130 14:14:22.782376 5039 scope.go:117] "RemoveContainer" containerID="ef2d2d94f701cec38f78d23a77173503db010302b3c11b65b6589b4bc92db130" Jan 30 14:14:22 crc kubenswrapper[5039]: E0130 14:14:22.782805 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef2d2d94f701cec38f78d23a77173503db010302b3c11b65b6589b4bc92db130\": container with ID starting with ef2d2d94f701cec38f78d23a77173503db010302b3c11b65b6589b4bc92db130 not found: ID does not exist" containerID="ef2d2d94f701cec38f78d23a77173503db010302b3c11b65b6589b4bc92db130" Jan 30 14:14:22 crc kubenswrapper[5039]: I0130 14:14:22.782848 
5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef2d2d94f701cec38f78d23a77173503db010302b3c11b65b6589b4bc92db130"} err="failed to get container status \"ef2d2d94f701cec38f78d23a77173503db010302b3c11b65b6589b4bc92db130\": rpc error: code = NotFound desc = could not find container \"ef2d2d94f701cec38f78d23a77173503db010302b3c11b65b6589b4bc92db130\": container with ID starting with ef2d2d94f701cec38f78d23a77173503db010302b3c11b65b6589b4bc92db130 not found: ID does not exist" Jan 30 14:14:22 crc kubenswrapper[5039]: I0130 14:14:22.782883 5039 scope.go:117] "RemoveContainer" containerID="965fddce97fd8aed101056885b6e523113a5953cf1b3a41156abf19209b78c0f" Jan 30 14:14:22 crc kubenswrapper[5039]: E0130 14:14:22.783522 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"965fddce97fd8aed101056885b6e523113a5953cf1b3a41156abf19209b78c0f\": container with ID starting with 965fddce97fd8aed101056885b6e523113a5953cf1b3a41156abf19209b78c0f not found: ID does not exist" containerID="965fddce97fd8aed101056885b6e523113a5953cf1b3a41156abf19209b78c0f" Jan 30 14:14:22 crc kubenswrapper[5039]: I0130 14:14:22.783548 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"965fddce97fd8aed101056885b6e523113a5953cf1b3a41156abf19209b78c0f"} err="failed to get container status \"965fddce97fd8aed101056885b6e523113a5953cf1b3a41156abf19209b78c0f\": rpc error: code = NotFound desc = could not find container \"965fddce97fd8aed101056885b6e523113a5953cf1b3a41156abf19209b78c0f\": container with ID starting with 965fddce97fd8aed101056885b6e523113a5953cf1b3a41156abf19209b78c0f not found: ID does not exist" Jan 30 14:14:22 crc kubenswrapper[5039]: I0130 14:14:22.783562 5039 scope.go:117] "RemoveContainer" containerID="acf09426ed3a47ebad89414b386ab808f94a9d067b7330119b9bd7c9ea36403e" Jan 30 14:14:22 crc kubenswrapper[5039]: E0130 14:14:22.783881 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acf09426ed3a47ebad89414b386ab808f94a9d067b7330119b9bd7c9ea36403e\": container with ID starting with acf09426ed3a47ebad89414b386ab808f94a9d067b7330119b9bd7c9ea36403e not found: ID does not exist" containerID="acf09426ed3a47ebad89414b386ab808f94a9d067b7330119b9bd7c9ea36403e" Jan 30 14:14:22 crc kubenswrapper[5039]: I0130 14:14:22.783901 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acf09426ed3a47ebad89414b386ab808f94a9d067b7330119b9bd7c9ea36403e"} err="failed to get container status \"acf09426ed3a47ebad89414b386ab808f94a9d067b7330119b9bd7c9ea36403e\": rpc error: code = NotFound desc = could not find container \"acf09426ed3a47ebad89414b386ab808f94a9d067b7330119b9bd7c9ea36403e\": container with ID starting with acf09426ed3a47ebad89414b386ab808f94a9d067b7330119b9bd7c9ea36403e not found: ID does not exist" Jan 30 14:14:24 crc kubenswrapper[5039]: I0130 14:14:24.101979 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="715431d9-996c-4db9-9bc0-f7c5ecc04d89" path="/var/lib/kubelet/pods/715431d9-996c-4db9-9bc0-f7c5ecc04d89/volumes" Jan 30 14:15:00 crc kubenswrapper[5039]: I0130 14:15:00.175613 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496375-r7fxn"] Jan 30 14:15:00 crc kubenswrapper[5039]: E0130 14:15:00.176545 5039 cpu_manager.go:410] "RemoveStaleState: 
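The E/I pairs above ("ContainerStatus from runtime service failed" followed by "DeleteContainer returned error") show the kubelet asking the CRI runtime about containers it has already removed and tolerating the gRPC NotFound answer. A minimal sketch of that tolerance check using the standard grpc status/codes packages; the alreadyGone helper is hypothetical:

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// alreadyGone reports whether a CRI call failed only because the container
// no longer exists, as in the NotFound errors logged above.
func alreadyGone(err error) bool {
	st, ok := status.FromError(err)
	return ok && st.Code() == codes.NotFound
}

func main() {
	// Simulated runtime answer, shaped like the log's error text.
	err := status.Error(codes.NotFound, "could not find container with the given ID")
	fmt.Println(alreadyGone(err)) // true: safe to treat the container as already deleted
}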
removing container" podUID="ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303" containerName="extract-utilities" Jan 30 14:15:00 crc kubenswrapper[5039]: I0130 14:15:00.176558 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303" containerName="extract-utilities" Jan 30 14:15:00 crc kubenswrapper[5039]: E0130 14:15:00.176577 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="715431d9-996c-4db9-9bc0-f7c5ecc04d89" containerName="extract-utilities" Jan 30 14:15:00 crc kubenswrapper[5039]: I0130 14:15:00.176583 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="715431d9-996c-4db9-9bc0-f7c5ecc04d89" containerName="extract-utilities" Jan 30 14:15:00 crc kubenswrapper[5039]: E0130 14:15:00.176604 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303" containerName="registry-server" Jan 30 14:15:00 crc kubenswrapper[5039]: I0130 14:15:00.176610 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303" containerName="registry-server" Jan 30 14:15:00 crc kubenswrapper[5039]: E0130 14:15:00.176621 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="715431d9-996c-4db9-9bc0-f7c5ecc04d89" containerName="registry-server" Jan 30 14:15:00 crc kubenswrapper[5039]: I0130 14:15:00.176629 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="715431d9-996c-4db9-9bc0-f7c5ecc04d89" containerName="registry-server" Jan 30 14:15:00 crc kubenswrapper[5039]: E0130 14:15:00.176639 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="715431d9-996c-4db9-9bc0-f7c5ecc04d89" containerName="extract-content" Jan 30 14:15:00 crc kubenswrapper[5039]: I0130 14:15:00.176645 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="715431d9-996c-4db9-9bc0-f7c5ecc04d89" containerName="extract-content" Jan 30 14:15:00 crc kubenswrapper[5039]: E0130 14:15:00.176654 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303" containerName="extract-content" Jan 30 14:15:00 crc kubenswrapper[5039]: I0130 14:15:00.176660 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303" containerName="extract-content" Jan 30 14:15:00 crc kubenswrapper[5039]: I0130 14:15:00.176809 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="715431d9-996c-4db9-9bc0-f7c5ecc04d89" containerName="registry-server" Jan 30 14:15:00 crc kubenswrapper[5039]: I0130 14:15:00.176826 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303" containerName="registry-server" Jan 30 14:15:00 crc kubenswrapper[5039]: I0130 14:15:00.177420 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-r7fxn" Jan 30 14:15:00 crc kubenswrapper[5039]: I0130 14:15:00.179941 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 14:15:00 crc kubenswrapper[5039]: I0130 14:15:00.179950 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 14:15:00 crc kubenswrapper[5039]: I0130 14:15:00.185467 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496375-r7fxn"] Jan 30 14:15:00 crc kubenswrapper[5039]: I0130 14:15:00.225732 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e5223134-2341-42dd-adf1-79a2f6eb4d24-secret-volume\") pod \"collect-profiles-29496375-r7fxn\" (UID: \"e5223134-2341-42dd-adf1-79a2f6eb4d24\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-r7fxn" Jan 30 14:15:00 crc kubenswrapper[5039]: I0130 14:15:00.225816 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5223134-2341-42dd-adf1-79a2f6eb4d24-config-volume\") pod \"collect-profiles-29496375-r7fxn\" (UID: \"e5223134-2341-42dd-adf1-79a2f6eb4d24\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-r7fxn" Jan 30 14:15:00 crc kubenswrapper[5039]: I0130 14:15:00.225838 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzp5k\" (UniqueName: \"kubernetes.io/projected/e5223134-2341-42dd-adf1-79a2f6eb4d24-kube-api-access-nzp5k\") pod \"collect-profiles-29496375-r7fxn\" (UID: \"e5223134-2341-42dd-adf1-79a2f6eb4d24\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-r7fxn" Jan 30 14:15:00 crc kubenswrapper[5039]: I0130 14:15:00.327639 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e5223134-2341-42dd-adf1-79a2f6eb4d24-secret-volume\") pod \"collect-profiles-29496375-r7fxn\" (UID: \"e5223134-2341-42dd-adf1-79a2f6eb4d24\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-r7fxn" Jan 30 14:15:00 crc kubenswrapper[5039]: I0130 14:15:00.328078 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5223134-2341-42dd-adf1-79a2f6eb4d24-config-volume\") pod \"collect-profiles-29496375-r7fxn\" (UID: \"e5223134-2341-42dd-adf1-79a2f6eb4d24\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-r7fxn" Jan 30 14:15:00 crc kubenswrapper[5039]: I0130 14:15:00.328113 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzp5k\" (UniqueName: \"kubernetes.io/projected/e5223134-2341-42dd-adf1-79a2f6eb4d24-kube-api-access-nzp5k\") pod \"collect-profiles-29496375-r7fxn\" (UID: \"e5223134-2341-42dd-adf1-79a2f6eb4d24\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-r7fxn" Jan 30 14:15:00 crc kubenswrapper[5039]: I0130 14:15:00.328908 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5223134-2341-42dd-adf1-79a2f6eb4d24-config-volume\") pod 
\"collect-profiles-29496375-r7fxn\" (UID: \"e5223134-2341-42dd-adf1-79a2f6eb4d24\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-r7fxn" Jan 30 14:15:00 crc kubenswrapper[5039]: I0130 14:15:00.335805 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e5223134-2341-42dd-adf1-79a2f6eb4d24-secret-volume\") pod \"collect-profiles-29496375-r7fxn\" (UID: \"e5223134-2341-42dd-adf1-79a2f6eb4d24\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-r7fxn" Jan 30 14:15:00 crc kubenswrapper[5039]: I0130 14:15:00.348243 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzp5k\" (UniqueName: \"kubernetes.io/projected/e5223134-2341-42dd-adf1-79a2f6eb4d24-kube-api-access-nzp5k\") pod \"collect-profiles-29496375-r7fxn\" (UID: \"e5223134-2341-42dd-adf1-79a2f6eb4d24\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-r7fxn" Jan 30 14:15:00 crc kubenswrapper[5039]: I0130 14:15:00.494291 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-r7fxn" Jan 30 14:15:00 crc kubenswrapper[5039]: I0130 14:15:00.762757 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496375-r7fxn"] Jan 30 14:15:00 crc kubenswrapper[5039]: W0130 14:15:00.767882 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode5223134_2341_42dd_adf1_79a2f6eb4d24.slice/crio-64dbec24ac62c6fb7efaa9a8663d36e5b3ff97383b8578d68e61b4b782906218 WatchSource:0}: Error finding container 64dbec24ac62c6fb7efaa9a8663d36e5b3ff97383b8578d68e61b4b782906218: Status 404 returned error can't find the container with id 64dbec24ac62c6fb7efaa9a8663d36e5b3ff97383b8578d68e61b4b782906218 Jan 30 14:15:01 crc kubenswrapper[5039]: I0130 14:15:01.006600 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-r7fxn" event={"ID":"e5223134-2341-42dd-adf1-79a2f6eb4d24","Type":"ContainerStarted","Data":"8b30373fb99c9179f42856f14cb0549023e1466fa7e3d80a4139fc76ae4a9c8c"} Jan 30 14:15:01 crc kubenswrapper[5039]: I0130 14:15:01.006660 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-r7fxn" event={"ID":"e5223134-2341-42dd-adf1-79a2f6eb4d24","Type":"ContainerStarted","Data":"64dbec24ac62c6fb7efaa9a8663d36e5b3ff97383b8578d68e61b4b782906218"} Jan 30 14:15:01 crc kubenswrapper[5039]: I0130 14:15:01.028238 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-r7fxn" podStartSLOduration=1.028211689 podStartE2EDuration="1.028211689s" podCreationTimestamp="2026-01-30 14:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:15:01.023000218 +0000 UTC m=+4265.683681475" watchObservedRunningTime="2026-01-30 14:15:01.028211689 +0000 UTC m=+4265.688892916" Jan 30 14:15:02 crc kubenswrapper[5039]: I0130 14:15:02.017049 5039 generic.go:334] "Generic (PLEG): container finished" podID="e5223134-2341-42dd-adf1-79a2f6eb4d24" containerID="8b30373fb99c9179f42856f14cb0549023e1466fa7e3d80a4139fc76ae4a9c8c" exitCode=0 Jan 30 14:15:02 crc kubenswrapper[5039]: I0130 14:15:02.017377 
Jan 30 14:15:03 crc kubenswrapper[5039]: I0130 14:15:03.282053 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-r7fxn"
Jan 30 14:15:03 crc kubenswrapper[5039]: I0130 14:15:03.371942 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5223134-2341-42dd-adf1-79a2f6eb4d24-config-volume\") pod \"e5223134-2341-42dd-adf1-79a2f6eb4d24\" (UID: \"e5223134-2341-42dd-adf1-79a2f6eb4d24\") "
Jan 30 14:15:03 crc kubenswrapper[5039]: I0130 14:15:03.372061 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e5223134-2341-42dd-adf1-79a2f6eb4d24-secret-volume\") pod \"e5223134-2341-42dd-adf1-79a2f6eb4d24\" (UID: \"e5223134-2341-42dd-adf1-79a2f6eb4d24\") "
Jan 30 14:15:03 crc kubenswrapper[5039]: I0130 14:15:03.372117 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzp5k\" (UniqueName: \"kubernetes.io/projected/e5223134-2341-42dd-adf1-79a2f6eb4d24-kube-api-access-nzp5k\") pod \"e5223134-2341-42dd-adf1-79a2f6eb4d24\" (UID: \"e5223134-2341-42dd-adf1-79a2f6eb4d24\") "
Jan 30 14:15:03 crc kubenswrapper[5039]: I0130 14:15:03.372676 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5223134-2341-42dd-adf1-79a2f6eb4d24-config-volume" (OuterVolumeSpecName: "config-volume") pod "e5223134-2341-42dd-adf1-79a2f6eb4d24" (UID: "e5223134-2341-42dd-adf1-79a2f6eb4d24"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 14:15:03 crc kubenswrapper[5039]: I0130 14:15:03.377040 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5223134-2341-42dd-adf1-79a2f6eb4d24-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e5223134-2341-42dd-adf1-79a2f6eb4d24" (UID: "e5223134-2341-42dd-adf1-79a2f6eb4d24"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:15:03 crc kubenswrapper[5039]: I0130 14:15:03.377080 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5223134-2341-42dd-adf1-79a2f6eb4d24-kube-api-access-nzp5k" (OuterVolumeSpecName: "kube-api-access-nzp5k") pod "e5223134-2341-42dd-adf1-79a2f6eb4d24" (UID: "e5223134-2341-42dd-adf1-79a2f6eb4d24"). InnerVolumeSpecName "kube-api-access-nzp5k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 14:15:03 crc kubenswrapper[5039]: I0130 14:15:03.474449 5039 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5223134-2341-42dd-adf1-79a2f6eb4d24-config-volume\") on node \"crc\" DevicePath \"\""
Jan 30 14:15:03 crc kubenswrapper[5039]: I0130 14:15:03.474509 5039 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e5223134-2341-42dd-adf1-79a2f6eb4d24-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 30 14:15:03 crc kubenswrapper[5039]: I0130 14:15:03.474528 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzp5k\" (UniqueName: \"kubernetes.io/projected/e5223134-2341-42dd-adf1-79a2f6eb4d24-kube-api-access-nzp5k\") on node \"crc\" DevicePath \"\""
Jan 30 14:15:04 crc kubenswrapper[5039]: I0130 14:15:04.034586 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-r7fxn" event={"ID":"e5223134-2341-42dd-adf1-79a2f6eb4d24","Type":"ContainerDied","Data":"64dbec24ac62c6fb7efaa9a8663d36e5b3ff97383b8578d68e61b4b782906218"}
Jan 30 14:15:04 crc kubenswrapper[5039]: I0130 14:15:04.034634 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64dbec24ac62c6fb7efaa9a8663d36e5b3ff97383b8578d68e61b4b782906218"
Jan 30 14:15:04 crc kubenswrapper[5039]: I0130 14:15:04.034664 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-r7fxn"
Jan 30 14:15:04 crc kubenswrapper[5039]: I0130 14:15:04.353382 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496330-vqfqj"]
Jan 30 14:15:04 crc kubenswrapper[5039]: I0130 14:15:04.358230 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496330-vqfqj"]
Jan 30 14:15:06 crc kubenswrapper[5039]: I0130 14:15:06.103706 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c73af4d7-581b-4f6b-890c-74d614dc93fb" path="/var/lib/kubelet/pods/c73af4d7-581b-4f6b-890c-74d614dc93fb/volumes"
Jan 30 14:15:07 crc kubenswrapper[5039]: I0130 14:15:07.742395 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 14:15:07 crc kubenswrapper[5039]: I0130 14:15:07.742454 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 14:15:08 crc kubenswrapper[5039]: I0130 14:15:08.718165 5039 scope.go:117] "RemoveContainer" containerID="f241cb8d1dd996c9e57bccdcdce89c87ca1996b8b47563e8da1c4d69e452b466"
Jan 30 14:15:37 crc kubenswrapper[5039]: I0130 14:15:37.741835 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 14:15:37 crc kubenswrapper[5039]: I0130 14:15:37.742288 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 14:16:07 crc kubenswrapper[5039]: I0130 14:16:07.742809 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 14:16:07 crc kubenswrapper[5039]: I0130 14:16:07.743416 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 14:16:07 crc kubenswrapper[5039]: I0130 14:16:07.743469 5039 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t2btn"
Jan 30 14:16:07 crc kubenswrapper[5039]: I0130 14:16:07.744134 5039 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3f4940c6978de4551eaa5af0b2957f9bb283f7cf21ef503f398eabfbd3dad469"} pod="openshift-machine-config-operator/machine-config-daemon-t2btn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 14:16:07 crc kubenswrapper[5039]: I0130 14:16:07.744183 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" containerID="cri-o://3f4940c6978de4551eaa5af0b2957f9bb283f7cf21ef503f398eabfbd3dad469" gracePeriod=600
Jan 30 14:16:08 crc kubenswrapper[5039]: I0130 14:16:08.457149 5039 generic.go:334] "Generic (PLEG): container finished" podID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerID="3f4940c6978de4551eaa5af0b2957f9bb283f7cf21ef503f398eabfbd3dad469" exitCode=0
Jan 30 14:16:08 crc kubenswrapper[5039]: I0130 14:16:08.457165 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerDied","Data":"3f4940c6978de4551eaa5af0b2957f9bb283f7cf21ef503f398eabfbd3dad469"}
Jan 30 14:16:08 crc kubenswrapper[5039]: I0130 14:16:08.457521 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerStarted","Data":"aa77e5b6320d0bb2b1371d31dd99833cc631f1ca3770ff63e41851c68aa88acc"}
Jan 30 14:16:08 crc kubenswrapper[5039]: I0130 14:16:08.457544 5039 scope.go:117] "RemoveContainer" containerID="bf7983be0b75bee401cbc263ace4f19bafb888e5b437e6a6c39bbb288eb42c44"
Jan 30 14:18:37 crc kubenswrapper[5039]: I0130 14:18:37.742609 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 14:18:37 crc kubenswrapper[5039]: I0130 14:18:37.743193 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 14:19:07 crc kubenswrapper[5039]: I0130 14:19:07.741966 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 14:19:07 crc kubenswrapper[5039]: I0130 14:19:07.742652 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 14:19:37 crc kubenswrapper[5039]: I0130 14:19:37.742434 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 14:19:37 crc kubenswrapper[5039]: I0130 14:19:37.742990 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 14:19:37 crc kubenswrapper[5039]: I0130 14:19:37.743059 5039 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t2btn"
Jan 30 14:19:37 crc kubenswrapper[5039]: I0130 14:19:37.743636 5039 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"aa77e5b6320d0bb2b1371d31dd99833cc631f1ca3770ff63e41851c68aa88acc"} pod="openshift-machine-config-operator/machine-config-daemon-t2btn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 14:19:37 crc kubenswrapper[5039]: I0130 14:19:37.743690 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" containerID="cri-o://aa77e5b6320d0bb2b1371d31dd99833cc631f1ca3770ff63e41851c68aa88acc" gracePeriod=600
Jan 30 14:19:37 crc kubenswrapper[5039]: E0130 14:19:37.867232 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5"
Jan 30 14:19:37 crc kubenswrapper[5039]: I0130 14:19:37.949569 5039 generic.go:334] "Generic (PLEG): container finished" podID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerID="aa77e5b6320d0bb2b1371d31dd99833cc631f1ca3770ff63e41851c68aa88acc" exitCode=0
Jan 30 14:19:37 crc kubenswrapper[5039]: I0130 14:19:37.949613 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerDied","Data":"aa77e5b6320d0bb2b1371d31dd99833cc631f1ca3770ff63e41851c68aa88acc"}
Jan 30 14:19:37 crc kubenswrapper[5039]: I0130 14:19:37.949642 5039 scope.go:117] "RemoveContainer" containerID="3f4940c6978de4551eaa5af0b2957f9bb283f7cf21ef503f398eabfbd3dad469"
Jan 30 14:19:37 crc kubenswrapper[5039]: I0130 14:19:37.950096 5039 scope.go:117] "RemoveContainer" containerID="aa77e5b6320d0bb2b1371d31dd99833cc631f1ca3770ff63e41851c68aa88acc"
Jan 30 14:19:37 crc kubenswrapper[5039]: E0130 14:19:37.950285 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5"
Jan 30 14:19:46 crc kubenswrapper[5039]: I0130 14:19:46.379431 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-8p9ft"]
Jan 30 14:19:46 crc kubenswrapper[5039]: I0130 14:19:46.385126 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-8p9ft"]
Jan 30 14:19:46 crc kubenswrapper[5039]: I0130 14:19:46.488622 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-h4j9q"]
Jan 30 14:19:46 crc kubenswrapper[5039]: E0130 14:19:46.488897 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5223134-2341-42dd-adf1-79a2f6eb4d24" containerName="collect-profiles"
Jan 30 14:19:46 crc kubenswrapper[5039]: I0130 14:19:46.488909 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5223134-2341-42dd-adf1-79a2f6eb4d24" containerName="collect-profiles"
Jan 30 14:19:46 crc kubenswrapper[5039]: I0130 14:19:46.489065 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5223134-2341-42dd-adf1-79a2f6eb4d24" containerName="collect-profiles"
Jan 30 14:19:46 crc kubenswrapper[5039]: I0130 14:19:46.489539 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-h4j9q"
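The "back-off 5m0s restarting failed container" errors above come from the kubelet's per-container restart backoff, which doubles after each failed restart until it reaches a cap; by this point machine-config-daemon has failed often enough to sit at the maximum. A sketch of that schedule; the 10s base and 5m cap match the kubelet's commonly cited defaults but should be treated as assumptions here:

package main

import (
	"fmt"
	"time"
)

// backoff returns the wait before restart attempt n, doubling from base and
// clamped at maxDelay (assumed defaults; see above).
func backoff(n int) time.Duration {
	const (
		base     = 10 * time.Second
		maxDelay = 5 * time.Minute
	)
	d := base
	for i := 0; i < n; i++ {
		d *= 2
		if d >= maxDelay {
			return maxDelay
		}
	}
	return d
}

func main() {
	for n := 0; n <= 6; n++ {
		fmt.Printf("restart %d -> wait %s\n", n, backoff(n))
	}
	// From restart 5 on this prints 5m0s, the figure quoted in the log.
}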
Need to start a new one" pod="crc-storage/crc-storage-crc-h4j9q" Jan 30 14:19:46 crc kubenswrapper[5039]: I0130 14:19:46.493301 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Jan 30 14:19:46 crc kubenswrapper[5039]: I0130 14:19:46.493301 5039 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-2tf92" Jan 30 14:19:46 crc kubenswrapper[5039]: I0130 14:19:46.493586 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Jan 30 14:19:46 crc kubenswrapper[5039]: I0130 14:19:46.496179 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Jan 30 14:19:46 crc kubenswrapper[5039]: I0130 14:19:46.496180 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-h4j9q"] Jan 30 14:19:46 crc kubenswrapper[5039]: I0130 14:19:46.650143 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/fe93b51e-cec9-4e00-afbd-bd258c3264e0-node-mnt\") pod \"crc-storage-crc-h4j9q\" (UID: \"fe93b51e-cec9-4e00-afbd-bd258c3264e0\") " pod="crc-storage/crc-storage-crc-h4j9q" Jan 30 14:19:46 crc kubenswrapper[5039]: I0130 14:19:46.650202 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/fe93b51e-cec9-4e00-afbd-bd258c3264e0-crc-storage\") pod \"crc-storage-crc-h4j9q\" (UID: \"fe93b51e-cec9-4e00-afbd-bd258c3264e0\") " pod="crc-storage/crc-storage-crc-h4j9q" Jan 30 14:19:46 crc kubenswrapper[5039]: I0130 14:19:46.650336 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr8kj\" (UniqueName: \"kubernetes.io/projected/fe93b51e-cec9-4e00-afbd-bd258c3264e0-kube-api-access-fr8kj\") pod \"crc-storage-crc-h4j9q\" (UID: \"fe93b51e-cec9-4e00-afbd-bd258c3264e0\") " pod="crc-storage/crc-storage-crc-h4j9q" Jan 30 14:19:46 crc kubenswrapper[5039]: I0130 14:19:46.751750 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/fe93b51e-cec9-4e00-afbd-bd258c3264e0-node-mnt\") pod \"crc-storage-crc-h4j9q\" (UID: \"fe93b51e-cec9-4e00-afbd-bd258c3264e0\") " pod="crc-storage/crc-storage-crc-h4j9q" Jan 30 14:19:46 crc kubenswrapper[5039]: I0130 14:19:46.752227 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/fe93b51e-cec9-4e00-afbd-bd258c3264e0-node-mnt\") pod \"crc-storage-crc-h4j9q\" (UID: \"fe93b51e-cec9-4e00-afbd-bd258c3264e0\") " pod="crc-storage/crc-storage-crc-h4j9q" Jan 30 14:19:46 crc kubenswrapper[5039]: I0130 14:19:46.753182 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/fe93b51e-cec9-4e00-afbd-bd258c3264e0-crc-storage\") pod \"crc-storage-crc-h4j9q\" (UID: \"fe93b51e-cec9-4e00-afbd-bd258c3264e0\") " pod="crc-storage/crc-storage-crc-h4j9q" Jan 30 14:19:46 crc kubenswrapper[5039]: I0130 14:19:46.753289 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fr8kj\" (UniqueName: \"kubernetes.io/projected/fe93b51e-cec9-4e00-afbd-bd258c3264e0-kube-api-access-fr8kj\") pod \"crc-storage-crc-h4j9q\" (UID: \"fe93b51e-cec9-4e00-afbd-bd258c3264e0\") " 
pod="crc-storage/crc-storage-crc-h4j9q" Jan 30 14:19:46 crc kubenswrapper[5039]: I0130 14:19:46.753883 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/fe93b51e-cec9-4e00-afbd-bd258c3264e0-crc-storage\") pod \"crc-storage-crc-h4j9q\" (UID: \"fe93b51e-cec9-4e00-afbd-bd258c3264e0\") " pod="crc-storage/crc-storage-crc-h4j9q" Jan 30 14:19:46 crc kubenswrapper[5039]: I0130 14:19:46.775627 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fr8kj\" (UniqueName: \"kubernetes.io/projected/fe93b51e-cec9-4e00-afbd-bd258c3264e0-kube-api-access-fr8kj\") pod \"crc-storage-crc-h4j9q\" (UID: \"fe93b51e-cec9-4e00-afbd-bd258c3264e0\") " pod="crc-storage/crc-storage-crc-h4j9q" Jan 30 14:19:46 crc kubenswrapper[5039]: I0130 14:19:46.861709 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-h4j9q" Jan 30 14:19:47 crc kubenswrapper[5039]: I0130 14:19:47.452086 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-h4j9q"] Jan 30 14:19:47 crc kubenswrapper[5039]: I0130 14:19:47.461914 5039 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 14:19:48 crc kubenswrapper[5039]: I0130 14:19:48.026371 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-h4j9q" event={"ID":"fe93b51e-cec9-4e00-afbd-bd258c3264e0","Type":"ContainerStarted","Data":"86c4c2bd3db3d4724fc8f5482a9beb663689a7d5fb3d70af9e9e8a8cbddf27e6"} Jan 30 14:19:48 crc kubenswrapper[5039]: I0130 14:19:48.101033 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a676a4d-a7f1-4312-9c94-3a548ecf60fe" path="/var/lib/kubelet/pods/4a676a4d-a7f1-4312-9c94-3a548ecf60fe/volumes" Jan 30 14:19:49 crc kubenswrapper[5039]: I0130 14:19:49.033745 5039 generic.go:334] "Generic (PLEG): container finished" podID="fe93b51e-cec9-4e00-afbd-bd258c3264e0" containerID="561e8874192a0f588aad5296039ba04351161a889e428c120e4027534200fd18" exitCode=0 Jan 30 14:19:49 crc kubenswrapper[5039]: I0130 14:19:49.033807 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-h4j9q" event={"ID":"fe93b51e-cec9-4e00-afbd-bd258c3264e0","Type":"ContainerDied","Data":"561e8874192a0f588aad5296039ba04351161a889e428c120e4027534200fd18"} Jan 30 14:19:49 crc kubenswrapper[5039]: I0130 14:19:49.093864 5039 scope.go:117] "RemoveContainer" containerID="aa77e5b6320d0bb2b1371d31dd99833cc631f1ca3770ff63e41851c68aa88acc" Jan 30 14:19:49 crc kubenswrapper[5039]: E0130 14:19:49.094248 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:19:50 crc kubenswrapper[5039]: I0130 14:19:50.371684 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-h4j9q" Jan 30 14:19:50 crc kubenswrapper[5039]: I0130 14:19:50.505148 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/fe93b51e-cec9-4e00-afbd-bd258c3264e0-node-mnt\") pod \"fe93b51e-cec9-4e00-afbd-bd258c3264e0\" (UID: \"fe93b51e-cec9-4e00-afbd-bd258c3264e0\") " Jan 30 14:19:50 crc kubenswrapper[5039]: I0130 14:19:50.505225 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/fe93b51e-cec9-4e00-afbd-bd258c3264e0-crc-storage\") pod \"fe93b51e-cec9-4e00-afbd-bd258c3264e0\" (UID: \"fe93b51e-cec9-4e00-afbd-bd258c3264e0\") " Jan 30 14:19:50 crc kubenswrapper[5039]: I0130 14:19:50.505299 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fr8kj\" (UniqueName: \"kubernetes.io/projected/fe93b51e-cec9-4e00-afbd-bd258c3264e0-kube-api-access-fr8kj\") pod \"fe93b51e-cec9-4e00-afbd-bd258c3264e0\" (UID: \"fe93b51e-cec9-4e00-afbd-bd258c3264e0\") " Jan 30 14:19:50 crc kubenswrapper[5039]: I0130 14:19:50.505438 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe93b51e-cec9-4e00-afbd-bd258c3264e0-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "fe93b51e-cec9-4e00-afbd-bd258c3264e0" (UID: "fe93b51e-cec9-4e00-afbd-bd258c3264e0"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:19:50 crc kubenswrapper[5039]: I0130 14:19:50.505638 5039 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/fe93b51e-cec9-4e00-afbd-bd258c3264e0-node-mnt\") on node \"crc\" DevicePath \"\"" Jan 30 14:19:50 crc kubenswrapper[5039]: I0130 14:19:50.794350 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe93b51e-cec9-4e00-afbd-bd258c3264e0-kube-api-access-fr8kj" (OuterVolumeSpecName: "kube-api-access-fr8kj") pod "fe93b51e-cec9-4e00-afbd-bd258c3264e0" (UID: "fe93b51e-cec9-4e00-afbd-bd258c3264e0"). InnerVolumeSpecName "kube-api-access-fr8kj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:19:50 crc kubenswrapper[5039]: I0130 14:19:50.809267 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fr8kj\" (UniqueName: \"kubernetes.io/projected/fe93b51e-cec9-4e00-afbd-bd258c3264e0-kube-api-access-fr8kj\") on node \"crc\" DevicePath \"\"" Jan 30 14:19:50 crc kubenswrapper[5039]: I0130 14:19:50.834493 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe93b51e-cec9-4e00-afbd-bd258c3264e0-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "fe93b51e-cec9-4e00-afbd-bd258c3264e0" (UID: "fe93b51e-cec9-4e00-afbd-bd258c3264e0"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:19:50 crc kubenswrapper[5039]: I0130 14:19:50.910822 5039 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/fe93b51e-cec9-4e00-afbd-bd258c3264e0-crc-storage\") on node \"crc\" DevicePath \"\"" Jan 30 14:19:51 crc kubenswrapper[5039]: I0130 14:19:51.045902 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-h4j9q" event={"ID":"fe93b51e-cec9-4e00-afbd-bd258c3264e0","Type":"ContainerDied","Data":"86c4c2bd3db3d4724fc8f5482a9beb663689a7d5fb3d70af9e9e8a8cbddf27e6"} Jan 30 14:19:51 crc kubenswrapper[5039]: I0130 14:19:51.045944 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86c4c2bd3db3d4724fc8f5482a9beb663689a7d5fb3d70af9e9e8a8cbddf27e6" Jan 30 14:19:51 crc kubenswrapper[5039]: I0130 14:19:51.045997 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-h4j9q" Jan 30 14:19:52 crc kubenswrapper[5039]: I0130 14:19:52.386549 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-h4j9q"] Jan 30 14:19:52 crc kubenswrapper[5039]: I0130 14:19:52.394049 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-h4j9q"] Jan 30 14:19:52 crc kubenswrapper[5039]: I0130 14:19:52.506934 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-gwjk5"] Jan 30 14:19:52 crc kubenswrapper[5039]: E0130 14:19:52.507404 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe93b51e-cec9-4e00-afbd-bd258c3264e0" containerName="storage" Jan 30 14:19:52 crc kubenswrapper[5039]: I0130 14:19:52.507427 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe93b51e-cec9-4e00-afbd-bd258c3264e0" containerName="storage" Jan 30 14:19:52 crc kubenswrapper[5039]: I0130 14:19:52.507721 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe93b51e-cec9-4e00-afbd-bd258c3264e0" containerName="storage" Jan 30 14:19:52 crc kubenswrapper[5039]: I0130 14:19:52.508445 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-gwjk5" Jan 30 14:19:52 crc kubenswrapper[5039]: I0130 14:19:52.510934 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Jan 30 14:19:52 crc kubenswrapper[5039]: I0130 14:19:52.511213 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Jan 30 14:19:52 crc kubenswrapper[5039]: I0130 14:19:52.511333 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Jan 30 14:19:52 crc kubenswrapper[5039]: I0130 14:19:52.512128 5039 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-2tf92" Jan 30 14:19:52 crc kubenswrapper[5039]: I0130 14:19:52.515726 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-gwjk5"] Jan 30 14:19:52 crc kubenswrapper[5039]: I0130 14:19:52.632859 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5ms2\" (UniqueName: \"kubernetes.io/projected/162d7381-cf8c-4b98-90e7-0feb850f9ccb-kube-api-access-z5ms2\") pod \"crc-storage-crc-gwjk5\" (UID: \"162d7381-cf8c-4b98-90e7-0feb850f9ccb\") " pod="crc-storage/crc-storage-crc-gwjk5" Jan 30 14:19:52 crc kubenswrapper[5039]: I0130 14:19:52.633197 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/162d7381-cf8c-4b98-90e7-0feb850f9ccb-node-mnt\") pod \"crc-storage-crc-gwjk5\" (UID: \"162d7381-cf8c-4b98-90e7-0feb850f9ccb\") " pod="crc-storage/crc-storage-crc-gwjk5" Jan 30 14:19:52 crc kubenswrapper[5039]: I0130 14:19:52.633385 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/162d7381-cf8c-4b98-90e7-0feb850f9ccb-crc-storage\") pod \"crc-storage-crc-gwjk5\" (UID: \"162d7381-cf8c-4b98-90e7-0feb850f9ccb\") " pod="crc-storage/crc-storage-crc-gwjk5" Jan 30 14:19:52 crc kubenswrapper[5039]: I0130 14:19:52.734385 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/162d7381-cf8c-4b98-90e7-0feb850f9ccb-crc-storage\") pod \"crc-storage-crc-gwjk5\" (UID: \"162d7381-cf8c-4b98-90e7-0feb850f9ccb\") " pod="crc-storage/crc-storage-crc-gwjk5" Jan 30 14:19:52 crc kubenswrapper[5039]: I0130 14:19:52.734486 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5ms2\" (UniqueName: \"kubernetes.io/projected/162d7381-cf8c-4b98-90e7-0feb850f9ccb-kube-api-access-z5ms2\") pod \"crc-storage-crc-gwjk5\" (UID: \"162d7381-cf8c-4b98-90e7-0feb850f9ccb\") " pod="crc-storage/crc-storage-crc-gwjk5" Jan 30 14:19:52 crc kubenswrapper[5039]: I0130 14:19:52.734519 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/162d7381-cf8c-4b98-90e7-0feb850f9ccb-node-mnt\") pod \"crc-storage-crc-gwjk5\" (UID: \"162d7381-cf8c-4b98-90e7-0feb850f9ccb\") " pod="crc-storage/crc-storage-crc-gwjk5" Jan 30 14:19:52 crc kubenswrapper[5039]: I0130 14:19:52.734797 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/162d7381-cf8c-4b98-90e7-0feb850f9ccb-node-mnt\") pod \"crc-storage-crc-gwjk5\" (UID: \"162d7381-cf8c-4b98-90e7-0feb850f9ccb\") " 
pod="crc-storage/crc-storage-crc-gwjk5" Jan 30 14:19:52 crc kubenswrapper[5039]: I0130 14:19:52.735899 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/162d7381-cf8c-4b98-90e7-0feb850f9ccb-crc-storage\") pod \"crc-storage-crc-gwjk5\" (UID: \"162d7381-cf8c-4b98-90e7-0feb850f9ccb\") " pod="crc-storage/crc-storage-crc-gwjk5" Jan 30 14:19:52 crc kubenswrapper[5039]: I0130 14:19:52.752949 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5ms2\" (UniqueName: \"kubernetes.io/projected/162d7381-cf8c-4b98-90e7-0feb850f9ccb-kube-api-access-z5ms2\") pod \"crc-storage-crc-gwjk5\" (UID: \"162d7381-cf8c-4b98-90e7-0feb850f9ccb\") " pod="crc-storage/crc-storage-crc-gwjk5" Jan 30 14:19:52 crc kubenswrapper[5039]: I0130 14:19:52.825785 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-gwjk5" Jan 30 14:19:53 crc kubenswrapper[5039]: I0130 14:19:53.236914 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-gwjk5"] Jan 30 14:19:54 crc kubenswrapper[5039]: I0130 14:19:54.066933 5039 generic.go:334] "Generic (PLEG): container finished" podID="162d7381-cf8c-4b98-90e7-0feb850f9ccb" containerID="ddd91ddb5a11354e503cca0e498290b6ecee56bda2176f7d68c76eac6d2ed007" exitCode=0 Jan 30 14:19:54 crc kubenswrapper[5039]: I0130 14:19:54.067262 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-gwjk5" event={"ID":"162d7381-cf8c-4b98-90e7-0feb850f9ccb","Type":"ContainerDied","Data":"ddd91ddb5a11354e503cca0e498290b6ecee56bda2176f7d68c76eac6d2ed007"} Jan 30 14:19:54 crc kubenswrapper[5039]: I0130 14:19:54.067289 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-gwjk5" event={"ID":"162d7381-cf8c-4b98-90e7-0feb850f9ccb","Type":"ContainerStarted","Data":"4f3e5bb8bc9bf62c36b579985f8be16e47b32bd2961348c1f1ebb4dbe12409a8"} Jan 30 14:19:54 crc kubenswrapper[5039]: I0130 14:19:54.100787 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe93b51e-cec9-4e00-afbd-bd258c3264e0" path="/var/lib/kubelet/pods/fe93b51e-cec9-4e00-afbd-bd258c3264e0/volumes" Jan 30 14:19:55 crc kubenswrapper[5039]: I0130 14:19:55.401725 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-gwjk5" Jan 30 14:19:55 crc kubenswrapper[5039]: I0130 14:19:55.587728 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/162d7381-cf8c-4b98-90e7-0feb850f9ccb-node-mnt\") pod \"162d7381-cf8c-4b98-90e7-0feb850f9ccb\" (UID: \"162d7381-cf8c-4b98-90e7-0feb850f9ccb\") " Jan 30 14:19:55 crc kubenswrapper[5039]: I0130 14:19:55.587824 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/162d7381-cf8c-4b98-90e7-0feb850f9ccb-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "162d7381-cf8c-4b98-90e7-0feb850f9ccb" (UID: "162d7381-cf8c-4b98-90e7-0feb850f9ccb"). InnerVolumeSpecName "node-mnt". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:19:55 crc kubenswrapper[5039]: I0130 14:19:55.587846 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/162d7381-cf8c-4b98-90e7-0feb850f9ccb-crc-storage\") pod \"162d7381-cf8c-4b98-90e7-0feb850f9ccb\" (UID: \"162d7381-cf8c-4b98-90e7-0feb850f9ccb\") " Jan 30 14:19:55 crc kubenswrapper[5039]: I0130 14:19:55.588086 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5ms2\" (UniqueName: \"kubernetes.io/projected/162d7381-cf8c-4b98-90e7-0feb850f9ccb-kube-api-access-z5ms2\") pod \"162d7381-cf8c-4b98-90e7-0feb850f9ccb\" (UID: \"162d7381-cf8c-4b98-90e7-0feb850f9ccb\") " Jan 30 14:19:55 crc kubenswrapper[5039]: I0130 14:19:55.588454 5039 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/162d7381-cf8c-4b98-90e7-0feb850f9ccb-node-mnt\") on node \"crc\" DevicePath \"\"" Jan 30 14:19:55 crc kubenswrapper[5039]: I0130 14:19:55.592139 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/162d7381-cf8c-4b98-90e7-0feb850f9ccb-kube-api-access-z5ms2" (OuterVolumeSpecName: "kube-api-access-z5ms2") pod "162d7381-cf8c-4b98-90e7-0feb850f9ccb" (UID: "162d7381-cf8c-4b98-90e7-0feb850f9ccb"). InnerVolumeSpecName "kube-api-access-z5ms2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:19:55 crc kubenswrapper[5039]: I0130 14:19:55.615672 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/162d7381-cf8c-4b98-90e7-0feb850f9ccb-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "162d7381-cf8c-4b98-90e7-0feb850f9ccb" (UID: "162d7381-cf8c-4b98-90e7-0feb850f9ccb"). InnerVolumeSpecName "crc-storage". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:19:55 crc kubenswrapper[5039]: I0130 14:19:55.689243 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5ms2\" (UniqueName: \"kubernetes.io/projected/162d7381-cf8c-4b98-90e7-0feb850f9ccb-kube-api-access-z5ms2\") on node \"crc\" DevicePath \"\"" Jan 30 14:19:55 crc kubenswrapper[5039]: I0130 14:19:55.689277 5039 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/162d7381-cf8c-4b98-90e7-0feb850f9ccb-crc-storage\") on node \"crc\" DevicePath \"\"" Jan 30 14:19:56 crc kubenswrapper[5039]: I0130 14:19:56.080675 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-gwjk5" event={"ID":"162d7381-cf8c-4b98-90e7-0feb850f9ccb","Type":"ContainerDied","Data":"4f3e5bb8bc9bf62c36b579985f8be16e47b32bd2961348c1f1ebb4dbe12409a8"} Jan 30 14:19:56 crc kubenswrapper[5039]: I0130 14:19:56.080720 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f3e5bb8bc9bf62c36b579985f8be16e47b32bd2961348c1f1ebb4dbe12409a8" Jan 30 14:19:56 crc kubenswrapper[5039]: I0130 14:19:56.080724 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-gwjk5" Jan 30 14:20:00 crc kubenswrapper[5039]: I0130 14:20:00.093778 5039 scope.go:117] "RemoveContainer" containerID="aa77e5b6320d0bb2b1371d31dd99833cc631f1ca3770ff63e41851c68aa88acc" Jan 30 14:20:00 crc kubenswrapper[5039]: E0130 14:20:00.095972 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:20:08 crc kubenswrapper[5039]: I0130 14:20:08.835342 5039 scope.go:117] "RemoveContainer" containerID="57af12523273c14976448075bd1ef2ff414c8ea00dad6d36e88b1fc02fdf4164" Jan 30 14:20:14 crc kubenswrapper[5039]: I0130 14:20:14.093943 5039 scope.go:117] "RemoveContainer" containerID="aa77e5b6320d0bb2b1371d31dd99833cc631f1ca3770ff63e41851c68aa88acc" Jan 30 14:20:14 crc kubenswrapper[5039]: E0130 14:20:14.094736 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:20:26 crc kubenswrapper[5039]: I0130 14:20:26.097938 5039 scope.go:117] "RemoveContainer" containerID="aa77e5b6320d0bb2b1371d31dd99833cc631f1ca3770ff63e41851c68aa88acc" Jan 30 14:20:26 crc kubenswrapper[5039]: E0130 14:20:26.098822 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:20:37 crc kubenswrapper[5039]: I0130 14:20:37.094130 5039 scope.go:117] "RemoveContainer" containerID="aa77e5b6320d0bb2b1371d31dd99833cc631f1ca3770ff63e41851c68aa88acc" Jan 30 14:20:37 crc kubenswrapper[5039]: E0130 14:20:37.095410 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:20:51 crc kubenswrapper[5039]: I0130 14:20:51.093289 5039 scope.go:117] "RemoveContainer" containerID="aa77e5b6320d0bb2b1371d31dd99833cc631f1ca3770ff63e41851c68aa88acc" Jan 30 14:20:51 crc kubenswrapper[5039]: E0130 14:20:51.094166 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:21:02 crc kubenswrapper[5039]: I0130 14:21:02.093824 5039 scope.go:117] "RemoveContainer" containerID="aa77e5b6320d0bb2b1371d31dd99833cc631f1ca3770ff63e41851c68aa88acc" Jan 30 14:21:02 crc kubenswrapper[5039]: E0130 14:21:02.094621 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:21:06 crc kubenswrapper[5039]: I0130 14:21:06.254564 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mdqz2"] Jan 30 14:21:06 crc kubenswrapper[5039]: E0130 14:21:06.255180 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="162d7381-cf8c-4b98-90e7-0feb850f9ccb" containerName="storage" Jan 30 14:21:06 crc kubenswrapper[5039]: I0130 14:21:06.255195 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="162d7381-cf8c-4b98-90e7-0feb850f9ccb" containerName="storage" Jan 30 14:21:06 crc kubenswrapper[5039]: I0130 14:21:06.255387 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="162d7381-cf8c-4b98-90e7-0feb850f9ccb" containerName="storage" Jan 30 14:21:06 crc kubenswrapper[5039]: I0130 14:21:06.256582 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mdqz2" Jan 30 14:21:06 crc kubenswrapper[5039]: I0130 14:21:06.272768 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mdqz2"] Jan 30 14:21:06 crc kubenswrapper[5039]: I0130 14:21:06.393250 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d64ff2b-053f-40fe-991c-24478a9d72a0-catalog-content\") pod \"redhat-operators-mdqz2\" (UID: \"4d64ff2b-053f-40fe-991c-24478a9d72a0\") " pod="openshift-marketplace/redhat-operators-mdqz2" Jan 30 14:21:06 crc kubenswrapper[5039]: I0130 14:21:06.393447 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjk56\" (UniqueName: \"kubernetes.io/projected/4d64ff2b-053f-40fe-991c-24478a9d72a0-kube-api-access-bjk56\") pod \"redhat-operators-mdqz2\" (UID: \"4d64ff2b-053f-40fe-991c-24478a9d72a0\") " pod="openshift-marketplace/redhat-operators-mdqz2" Jan 30 14:21:06 crc kubenswrapper[5039]: I0130 14:21:06.393876 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d64ff2b-053f-40fe-991c-24478a9d72a0-utilities\") pod \"redhat-operators-mdqz2\" (UID: \"4d64ff2b-053f-40fe-991c-24478a9d72a0\") " pod="openshift-marketplace/redhat-operators-mdqz2" Jan 30 14:21:06 crc kubenswrapper[5039]: I0130 14:21:06.494973 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d64ff2b-053f-40fe-991c-24478a9d72a0-utilities\") pod \"redhat-operators-mdqz2\" (UID: \"4d64ff2b-053f-40fe-991c-24478a9d72a0\") " pod="openshift-marketplace/redhat-operators-mdqz2" Jan 30 14:21:06 crc kubenswrapper[5039]: I0130 
14:21:06.495064 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d64ff2b-053f-40fe-991c-24478a9d72a0-catalog-content\") pod \"redhat-operators-mdqz2\" (UID: \"4d64ff2b-053f-40fe-991c-24478a9d72a0\") " pod="openshift-marketplace/redhat-operators-mdqz2" Jan 30 14:21:06 crc kubenswrapper[5039]: I0130 14:21:06.495137 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjk56\" (UniqueName: \"kubernetes.io/projected/4d64ff2b-053f-40fe-991c-24478a9d72a0-kube-api-access-bjk56\") pod \"redhat-operators-mdqz2\" (UID: \"4d64ff2b-053f-40fe-991c-24478a9d72a0\") " pod="openshift-marketplace/redhat-operators-mdqz2" Jan 30 14:21:06 crc kubenswrapper[5039]: I0130 14:21:06.495546 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d64ff2b-053f-40fe-991c-24478a9d72a0-utilities\") pod \"redhat-operators-mdqz2\" (UID: \"4d64ff2b-053f-40fe-991c-24478a9d72a0\") " pod="openshift-marketplace/redhat-operators-mdqz2" Jan 30 14:21:06 crc kubenswrapper[5039]: I0130 14:21:06.495895 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d64ff2b-053f-40fe-991c-24478a9d72a0-catalog-content\") pod \"redhat-operators-mdqz2\" (UID: \"4d64ff2b-053f-40fe-991c-24478a9d72a0\") " pod="openshift-marketplace/redhat-operators-mdqz2" Jan 30 14:21:06 crc kubenswrapper[5039]: I0130 14:21:06.524965 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjk56\" (UniqueName: \"kubernetes.io/projected/4d64ff2b-053f-40fe-991c-24478a9d72a0-kube-api-access-bjk56\") pod \"redhat-operators-mdqz2\" (UID: \"4d64ff2b-053f-40fe-991c-24478a9d72a0\") " pod="openshift-marketplace/redhat-operators-mdqz2" Jan 30 14:21:06 crc kubenswrapper[5039]: I0130 14:21:06.592439 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mdqz2" Jan 30 14:21:07 crc kubenswrapper[5039]: I0130 14:21:07.015774 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mdqz2"] Jan 30 14:21:07 crc kubenswrapper[5039]: I0130 14:21:07.553914 5039 generic.go:334] "Generic (PLEG): container finished" podID="4d64ff2b-053f-40fe-991c-24478a9d72a0" containerID="83bbd79196155b94deeb1b35db77fc77792d936fc56ff446c6313a814c3f2a11" exitCode=0 Jan 30 14:21:07 crc kubenswrapper[5039]: I0130 14:21:07.553976 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mdqz2" event={"ID":"4d64ff2b-053f-40fe-991c-24478a9d72a0","Type":"ContainerDied","Data":"83bbd79196155b94deeb1b35db77fc77792d936fc56ff446c6313a814c3f2a11"} Jan 30 14:21:07 crc kubenswrapper[5039]: I0130 14:21:07.554254 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mdqz2" event={"ID":"4d64ff2b-053f-40fe-991c-24478a9d72a0","Type":"ContainerStarted","Data":"6b3f8f9f52fb990c7fe3ce5f613111b65bf4ba1244ac183a3125f0671d120478"} Jan 30 14:21:08 crc kubenswrapper[5039]: I0130 14:21:08.578235 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mdqz2" event={"ID":"4d64ff2b-053f-40fe-991c-24478a9d72a0","Type":"ContainerStarted","Data":"e302562c28a8e022c6c577f8de4e7b128b06e9bf2c932f42f626b3e1186b96b0"} Jan 30 14:21:09 crc kubenswrapper[5039]: I0130 14:21:09.585649 5039 generic.go:334] "Generic (PLEG): container finished" podID="4d64ff2b-053f-40fe-991c-24478a9d72a0" containerID="e302562c28a8e022c6c577f8de4e7b128b06e9bf2c932f42f626b3e1186b96b0" exitCode=0 Jan 30 14:21:09 crc kubenswrapper[5039]: I0130 14:21:09.585698 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mdqz2" event={"ID":"4d64ff2b-053f-40fe-991c-24478a9d72a0","Type":"ContainerDied","Data":"e302562c28a8e022c6c577f8de4e7b128b06e9bf2c932f42f626b3e1186b96b0"} Jan 30 14:21:10 crc kubenswrapper[5039]: I0130 14:21:10.593600 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mdqz2" event={"ID":"4d64ff2b-053f-40fe-991c-24478a9d72a0","Type":"ContainerStarted","Data":"05e833688e778ec92a8f1741821dfb8fec36427983498d6ece8abb07576b01f0"} Jan 30 14:21:10 crc kubenswrapper[5039]: I0130 14:21:10.612474 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mdqz2" podStartSLOduration=2.189298737 podStartE2EDuration="4.612428778s" podCreationTimestamp="2026-01-30 14:21:06 +0000 UTC" firstStartedPulling="2026-01-30 14:21:07.55671552 +0000 UTC m=+4632.217396747" lastFinishedPulling="2026-01-30 14:21:09.979845561 +0000 UTC m=+4634.640526788" observedRunningTime="2026-01-30 14:21:10.609629902 +0000 UTC m=+4635.270311149" watchObservedRunningTime="2026-01-30 14:21:10.612428778 +0000 UTC m=+4635.273110005" Jan 30 14:21:14 crc kubenswrapper[5039]: I0130 14:21:14.093761 5039 scope.go:117] "RemoveContainer" containerID="aa77e5b6320d0bb2b1371d31dd99833cc631f1ca3770ff63e41851c68aa88acc" Jan 30 14:21:14 crc kubenswrapper[5039]: E0130 14:21:14.094320 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:21:16 crc kubenswrapper[5039]: I0130 14:21:16.593554 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mdqz2" Jan 30 14:21:16 crc kubenswrapper[5039]: I0130 14:21:16.593994 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mdqz2" Jan 30 14:21:16 crc kubenswrapper[5039]: I0130 14:21:16.638922 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mdqz2" Jan 30 14:21:16 crc kubenswrapper[5039]: I0130 14:21:16.687655 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mdqz2" Jan 30 14:21:16 crc kubenswrapper[5039]: I0130 14:21:16.868896 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mdqz2"] Jan 30 14:21:18 crc kubenswrapper[5039]: I0130 14:21:18.644865 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mdqz2" podUID="4d64ff2b-053f-40fe-991c-24478a9d72a0" containerName="registry-server" containerID="cri-o://05e833688e778ec92a8f1741821dfb8fec36427983498d6ece8abb07576b01f0" gracePeriod=2 Jan 30 14:21:19 crc kubenswrapper[5039]: I0130 14:21:19.655999 5039 generic.go:334] "Generic (PLEG): container finished" podID="4d64ff2b-053f-40fe-991c-24478a9d72a0" containerID="05e833688e778ec92a8f1741821dfb8fec36427983498d6ece8abb07576b01f0" exitCode=0 Jan 30 14:21:19 crc kubenswrapper[5039]: I0130 14:21:19.656115 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mdqz2" event={"ID":"4d64ff2b-053f-40fe-991c-24478a9d72a0","Type":"ContainerDied","Data":"05e833688e778ec92a8f1741821dfb8fec36427983498d6ece8abb07576b01f0"} Jan 30 14:21:20 crc kubenswrapper[5039]: I0130 14:21:20.171345 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mdqz2" Jan 30 14:21:20 crc kubenswrapper[5039]: I0130 14:21:20.213551 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d64ff2b-053f-40fe-991c-24478a9d72a0-utilities\") pod \"4d64ff2b-053f-40fe-991c-24478a9d72a0\" (UID: \"4d64ff2b-053f-40fe-991c-24478a9d72a0\") " Jan 30 14:21:20 crc kubenswrapper[5039]: I0130 14:21:20.213606 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d64ff2b-053f-40fe-991c-24478a9d72a0-catalog-content\") pod \"4d64ff2b-053f-40fe-991c-24478a9d72a0\" (UID: \"4d64ff2b-053f-40fe-991c-24478a9d72a0\") " Jan 30 14:21:20 crc kubenswrapper[5039]: I0130 14:21:20.213657 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjk56\" (UniqueName: \"kubernetes.io/projected/4d64ff2b-053f-40fe-991c-24478a9d72a0-kube-api-access-bjk56\") pod \"4d64ff2b-053f-40fe-991c-24478a9d72a0\" (UID: \"4d64ff2b-053f-40fe-991c-24478a9d72a0\") " Jan 30 14:21:20 crc kubenswrapper[5039]: I0130 14:21:20.214438 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d64ff2b-053f-40fe-991c-24478a9d72a0-utilities" (OuterVolumeSpecName: "utilities") pod "4d64ff2b-053f-40fe-991c-24478a9d72a0" (UID: "4d64ff2b-053f-40fe-991c-24478a9d72a0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:21:20 crc kubenswrapper[5039]: I0130 14:21:20.228318 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d64ff2b-053f-40fe-991c-24478a9d72a0-kube-api-access-bjk56" (OuterVolumeSpecName: "kube-api-access-bjk56") pod "4d64ff2b-053f-40fe-991c-24478a9d72a0" (UID: "4d64ff2b-053f-40fe-991c-24478a9d72a0"). InnerVolumeSpecName "kube-api-access-bjk56". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:21:20 crc kubenswrapper[5039]: I0130 14:21:20.315618 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d64ff2b-053f-40fe-991c-24478a9d72a0-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 14:21:20 crc kubenswrapper[5039]: I0130 14:21:20.315659 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjk56\" (UniqueName: \"kubernetes.io/projected/4d64ff2b-053f-40fe-991c-24478a9d72a0-kube-api-access-bjk56\") on node \"crc\" DevicePath \"\"" Jan 30 14:21:20 crc kubenswrapper[5039]: I0130 14:21:20.362551 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d64ff2b-053f-40fe-991c-24478a9d72a0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4d64ff2b-053f-40fe-991c-24478a9d72a0" (UID: "4d64ff2b-053f-40fe-991c-24478a9d72a0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:21:20 crc kubenswrapper[5039]: I0130 14:21:20.416968 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d64ff2b-053f-40fe-991c-24478a9d72a0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 14:21:20 crc kubenswrapper[5039]: I0130 14:21:20.663663 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mdqz2" event={"ID":"4d64ff2b-053f-40fe-991c-24478a9d72a0","Type":"ContainerDied","Data":"6b3f8f9f52fb990c7fe3ce5f613111b65bf4ba1244ac183a3125f0671d120478"} Jan 30 14:21:20 crc kubenswrapper[5039]: I0130 14:21:20.663713 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mdqz2" Jan 30 14:21:20 crc kubenswrapper[5039]: I0130 14:21:20.664255 5039 scope.go:117] "RemoveContainer" containerID="05e833688e778ec92a8f1741821dfb8fec36427983498d6ece8abb07576b01f0" Jan 30 14:21:20 crc kubenswrapper[5039]: I0130 14:21:20.681687 5039 scope.go:117] "RemoveContainer" containerID="e302562c28a8e022c6c577f8de4e7b128b06e9bf2c932f42f626b3e1186b96b0" Jan 30 14:21:20 crc kubenswrapper[5039]: I0130 14:21:20.694861 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mdqz2"] Jan 30 14:21:20 crc kubenswrapper[5039]: I0130 14:21:20.701645 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mdqz2"] Jan 30 14:21:20 crc kubenswrapper[5039]: I0130 14:21:20.709852 5039 scope.go:117] "RemoveContainer" containerID="83bbd79196155b94deeb1b35db77fc77792d936fc56ff446c6313a814c3f2a11" Jan 30 14:21:22 crc kubenswrapper[5039]: I0130 14:21:22.103933 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d64ff2b-053f-40fe-991c-24478a9d72a0" path="/var/lib/kubelet/pods/4d64ff2b-053f-40fe-991c-24478a9d72a0/volumes" Jan 30 14:21:28 crc kubenswrapper[5039]: I0130 14:21:28.094583 5039 scope.go:117] "RemoveContainer" containerID="aa77e5b6320d0bb2b1371d31dd99833cc631f1ca3770ff63e41851c68aa88acc" Jan 30 14:21:28 crc kubenswrapper[5039]: E0130 14:21:28.095335 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:21:40 crc kubenswrapper[5039]: I0130 14:21:40.093371 5039 scope.go:117] "RemoveContainer" containerID="aa77e5b6320d0bb2b1371d31dd99833cc631f1ca3770ff63e41851c68aa88acc" Jan 30 14:21:40 crc kubenswrapper[5039]: E0130 14:21:40.094088 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:21:55 crc kubenswrapper[5039]: I0130 14:21:55.093499 5039 scope.go:117] "RemoveContainer" containerID="aa77e5b6320d0bb2b1371d31dd99833cc631f1ca3770ff63e41851c68aa88acc" Jan 30 14:21:55 crc kubenswrapper[5039]: E0130 14:21:55.094468 
5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:22:09 crc kubenswrapper[5039]: I0130 14:22:09.094226 5039 scope.go:117] "RemoveContainer" containerID="aa77e5b6320d0bb2b1371d31dd99833cc631f1ca3770ff63e41851c68aa88acc" Jan 30 14:22:09 crc kubenswrapper[5039]: E0130 14:22:09.095140 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:22:23 crc kubenswrapper[5039]: I0130 14:22:23.093414 5039 scope.go:117] "RemoveContainer" containerID="aa77e5b6320d0bb2b1371d31dd99833cc631f1ca3770ff63e41851c68aa88acc" Jan 30 14:22:23 crc kubenswrapper[5039]: E0130 14:22:23.094329 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:22:38 crc kubenswrapper[5039]: I0130 14:22:38.093612 5039 scope.go:117] "RemoveContainer" containerID="aa77e5b6320d0bb2b1371d31dd99833cc631f1ca3770ff63e41851c68aa88acc" Jan 30 14:22:38 crc kubenswrapper[5039]: E0130 14:22:38.094461 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:22:53 crc kubenswrapper[5039]: I0130 14:22:53.093761 5039 scope.go:117] "RemoveContainer" containerID="aa77e5b6320d0bb2b1371d31dd99833cc631f1ca3770ff63e41851c68aa88acc" Jan 30 14:22:53 crc kubenswrapper[5039]: E0130 14:22:53.094597 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:23:06 crc kubenswrapper[5039]: I0130 14:23:06.097507 5039 scope.go:117] "RemoveContainer" containerID="aa77e5b6320d0bb2b1371d31dd99833cc631f1ca3770ff63e41851c68aa88acc" Jan 30 14:23:06 crc kubenswrapper[5039]: E0130 14:23:06.098248 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:23:18 crc kubenswrapper[5039]: I0130 14:23:18.093319 5039 scope.go:117] "RemoveContainer" containerID="aa77e5b6320d0bb2b1371d31dd99833cc631f1ca3770ff63e41851c68aa88acc" Jan 30 14:23:18 crc kubenswrapper[5039]: E0130 14:23:18.094779 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:23:18 crc kubenswrapper[5039]: I0130 14:23:18.498926 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-rcdxm"] Jan 30 14:23:18 crc kubenswrapper[5039]: E0130 14:23:18.499277 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d64ff2b-053f-40fe-991c-24478a9d72a0" containerName="extract-utilities" Jan 30 14:23:18 crc kubenswrapper[5039]: I0130 14:23:18.499302 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d64ff2b-053f-40fe-991c-24478a9d72a0" containerName="extract-utilities" Jan 30 14:23:18 crc kubenswrapper[5039]: E0130 14:23:18.499326 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d64ff2b-053f-40fe-991c-24478a9d72a0" containerName="registry-server" Jan 30 14:23:18 crc kubenswrapper[5039]: I0130 14:23:18.499335 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d64ff2b-053f-40fe-991c-24478a9d72a0" containerName="registry-server" Jan 30 14:23:18 crc kubenswrapper[5039]: E0130 14:23:18.499347 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d64ff2b-053f-40fe-991c-24478a9d72a0" containerName="extract-content" Jan 30 14:23:18 crc kubenswrapper[5039]: I0130 14:23:18.499355 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d64ff2b-053f-40fe-991c-24478a9d72a0" containerName="extract-content" Jan 30 14:23:18 crc kubenswrapper[5039]: I0130 14:23:18.499551 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d64ff2b-053f-40fe-991c-24478a9d72a0" containerName="registry-server" Jan 30 14:23:18 crc kubenswrapper[5039]: I0130 14:23:18.500443 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d7b5456f5-rcdxm" Jan 30 14:23:18 crc kubenswrapper[5039]: I0130 14:23:18.504132 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 30 14:23:18 crc kubenswrapper[5039]: I0130 14:23:18.504165 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 30 14:23:18 crc kubenswrapper[5039]: I0130 14:23:18.504132 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 30 14:23:18 crc kubenswrapper[5039]: I0130 14:23:18.504132 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 30 14:23:18 crc kubenswrapper[5039]: I0130 14:23:18.505129 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-7jn59" Jan 30 14:23:18 crc kubenswrapper[5039]: I0130 14:23:18.532921 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-rcdxm"] Jan 30 14:23:18 crc kubenswrapper[5039]: I0130 14:23:18.598127 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9c4d2e20-0c88-42f6-a4cb-1c985b2158a5-dns-svc\") pod \"dnsmasq-dns-5d7b5456f5-rcdxm\" (UID: \"9c4d2e20-0c88-42f6-a4cb-1c985b2158a5\") " pod="openstack/dnsmasq-dns-5d7b5456f5-rcdxm" Jan 30 14:23:18 crc kubenswrapper[5039]: I0130 14:23:18.598190 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c4d2e20-0c88-42f6-a4cb-1c985b2158a5-config\") pod \"dnsmasq-dns-5d7b5456f5-rcdxm\" (UID: \"9c4d2e20-0c88-42f6-a4cb-1c985b2158a5\") " pod="openstack/dnsmasq-dns-5d7b5456f5-rcdxm" Jan 30 14:23:18 crc kubenswrapper[5039]: I0130 14:23:18.598210 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjtv2\" (UniqueName: \"kubernetes.io/projected/9c4d2e20-0c88-42f6-a4cb-1c985b2158a5-kube-api-access-xjtv2\") pod \"dnsmasq-dns-5d7b5456f5-rcdxm\" (UID: \"9c4d2e20-0c88-42f6-a4cb-1c985b2158a5\") " pod="openstack/dnsmasq-dns-5d7b5456f5-rcdxm" Jan 30 14:23:18 crc kubenswrapper[5039]: I0130 14:23:18.699657 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjtv2\" (UniqueName: \"kubernetes.io/projected/9c4d2e20-0c88-42f6-a4cb-1c985b2158a5-kube-api-access-xjtv2\") pod \"dnsmasq-dns-5d7b5456f5-rcdxm\" (UID: \"9c4d2e20-0c88-42f6-a4cb-1c985b2158a5\") " pod="openstack/dnsmasq-dns-5d7b5456f5-rcdxm" Jan 30 14:23:18 crc kubenswrapper[5039]: I0130 14:23:18.699701 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c4d2e20-0c88-42f6-a4cb-1c985b2158a5-config\") pod \"dnsmasq-dns-5d7b5456f5-rcdxm\" (UID: \"9c4d2e20-0c88-42f6-a4cb-1c985b2158a5\") " pod="openstack/dnsmasq-dns-5d7b5456f5-rcdxm" Jan 30 14:23:18 crc kubenswrapper[5039]: I0130 14:23:18.699787 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9c4d2e20-0c88-42f6-a4cb-1c985b2158a5-dns-svc\") pod \"dnsmasq-dns-5d7b5456f5-rcdxm\" (UID: \"9c4d2e20-0c88-42f6-a4cb-1c985b2158a5\") " pod="openstack/dnsmasq-dns-5d7b5456f5-rcdxm" Jan 30 14:23:18 crc kubenswrapper[5039]: I0130 14:23:18.700713 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/9c4d2e20-0c88-42f6-a4cb-1c985b2158a5-dns-svc\") pod \"dnsmasq-dns-5d7b5456f5-rcdxm\" (UID: \"9c4d2e20-0c88-42f6-a4cb-1c985b2158a5\") " pod="openstack/dnsmasq-dns-5d7b5456f5-rcdxm" Jan 30 14:23:18 crc kubenswrapper[5039]: I0130 14:23:18.700766 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c4d2e20-0c88-42f6-a4cb-1c985b2158a5-config\") pod \"dnsmasq-dns-5d7b5456f5-rcdxm\" (UID: \"9c4d2e20-0c88-42f6-a4cb-1c985b2158a5\") " pod="openstack/dnsmasq-dns-5d7b5456f5-rcdxm" Jan 30 14:23:18 crc kubenswrapper[5039]: I0130 14:23:18.734020 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjtv2\" (UniqueName: \"kubernetes.io/projected/9c4d2e20-0c88-42f6-a4cb-1c985b2158a5-kube-api-access-xjtv2\") pod \"dnsmasq-dns-5d7b5456f5-rcdxm\" (UID: \"9c4d2e20-0c88-42f6-a4cb-1c985b2158a5\") " pod="openstack/dnsmasq-dns-5d7b5456f5-rcdxm" Jan 30 14:23:18 crc kubenswrapper[5039]: I0130 14:23:18.749805 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-x5wk5"] Jan 30 14:23:18 crc kubenswrapper[5039]: I0130 14:23:18.751201 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-98ddfc8f-x5wk5" Jan 30 14:23:18 crc kubenswrapper[5039]: I0130 14:23:18.776205 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-x5wk5"] Jan 30 14:23:18 crc kubenswrapper[5039]: I0130 14:23:18.823594 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d7b5456f5-rcdxm" Jan 30 14:23:18 crc kubenswrapper[5039]: I0130 14:23:18.902331 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8-config\") pod \"dnsmasq-dns-98ddfc8f-x5wk5\" (UID: \"39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8\") " pod="openstack/dnsmasq-dns-98ddfc8f-x5wk5" Jan 30 14:23:18 crc kubenswrapper[5039]: I0130 14:23:18.902419 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kfss\" (UniqueName: \"kubernetes.io/projected/39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8-kube-api-access-4kfss\") pod \"dnsmasq-dns-98ddfc8f-x5wk5\" (UID: \"39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8\") " pod="openstack/dnsmasq-dns-98ddfc8f-x5wk5" Jan 30 14:23:18 crc kubenswrapper[5039]: I0130 14:23:18.902461 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8-dns-svc\") pod \"dnsmasq-dns-98ddfc8f-x5wk5\" (UID: \"39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8\") " pod="openstack/dnsmasq-dns-98ddfc8f-x5wk5" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.005790 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8-config\") pod \"dnsmasq-dns-98ddfc8f-x5wk5\" (UID: \"39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8\") " pod="openstack/dnsmasq-dns-98ddfc8f-x5wk5" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.005877 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kfss\" (UniqueName: \"kubernetes.io/projected/39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8-kube-api-access-4kfss\") pod \"dnsmasq-dns-98ddfc8f-x5wk5\" (UID: 
\"39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8\") " pod="openstack/dnsmasq-dns-98ddfc8f-x5wk5" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.005906 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8-dns-svc\") pod \"dnsmasq-dns-98ddfc8f-x5wk5\" (UID: \"39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8\") " pod="openstack/dnsmasq-dns-98ddfc8f-x5wk5" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.006868 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8-config\") pod \"dnsmasq-dns-98ddfc8f-x5wk5\" (UID: \"39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8\") " pod="openstack/dnsmasq-dns-98ddfc8f-x5wk5" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.007000 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8-dns-svc\") pod \"dnsmasq-dns-98ddfc8f-x5wk5\" (UID: \"39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8\") " pod="openstack/dnsmasq-dns-98ddfc8f-x5wk5" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.032204 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kfss\" (UniqueName: \"kubernetes.io/projected/39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8-kube-api-access-4kfss\") pod \"dnsmasq-dns-98ddfc8f-x5wk5\" (UID: \"39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8\") " pod="openstack/dnsmasq-dns-98ddfc8f-x5wk5" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.073513 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-98ddfc8f-x5wk5" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.301003 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-x5wk5"] Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.312258 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-rcdxm"] Jan 30 14:23:19 crc kubenswrapper[5039]: W0130 14:23:19.321995 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c4d2e20_0c88_42f6_a4cb_1c985b2158a5.slice/crio-8fb19c7a7b45ea0c495bfa5a39696f246bcd26fe60aadca6094149be7b80370f WatchSource:0}: Error finding container 8fb19c7a7b45ea0c495bfa5a39696f246bcd26fe60aadca6094149be7b80370f: Status 404 returned error can't find the container with id 8fb19c7a7b45ea0c495bfa5a39696f246bcd26fe60aadca6094149be7b80370f Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.457402 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-rcdxm" event={"ID":"9c4d2e20-0c88-42f6-a4cb-1c985b2158a5","Type":"ContainerStarted","Data":"8fb19c7a7b45ea0c495bfa5a39696f246bcd26fe60aadca6094149be7b80370f"} Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.458343 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-x5wk5" event={"ID":"39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8","Type":"ContainerStarted","Data":"6ac616881083272726fdea47fdd6278ddfa6884baf44c7032cf2f20c714df68f"} Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.628577 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.629954 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.631924 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.632206 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.632262 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.632359 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.634839 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-mm44m" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.653139 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.722060 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/03f3e4de-d43f-449d-bf20-62332da1e661-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " pod="openstack/rabbitmq-server-0" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.722130 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f7a2bf4b-1d28-4757-8001-1d0e7cb0b645\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7a2bf4b-1d28-4757-8001-1d0e7cb0b645\") pod \"rabbitmq-server-0\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " pod="openstack/rabbitmq-server-0" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.722159 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/03f3e4de-d43f-449d-bf20-62332da1e661-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " pod="openstack/rabbitmq-server-0" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.722206 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/03f3e4de-d43f-449d-bf20-62332da1e661-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " pod="openstack/rabbitmq-server-0" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.722241 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg9ct\" (UniqueName: \"kubernetes.io/projected/03f3e4de-d43f-449d-bf20-62332da1e661-kube-api-access-sg9ct\") pod \"rabbitmq-server-0\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " pod="openstack/rabbitmq-server-0" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.722279 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/03f3e4de-d43f-449d-bf20-62332da1e661-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " pod="openstack/rabbitmq-server-0" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.722332 5039 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/03f3e4de-d43f-449d-bf20-62332da1e661-pod-info\") pod \"rabbitmq-server-0\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " pod="openstack/rabbitmq-server-0" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.722376 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/03f3e4de-d43f-449d-bf20-62332da1e661-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " pod="openstack/rabbitmq-server-0" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.722399 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/03f3e4de-d43f-449d-bf20-62332da1e661-server-conf\") pod \"rabbitmq-server-0\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " pod="openstack/rabbitmq-server-0" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.824241 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sg9ct\" (UniqueName: \"kubernetes.io/projected/03f3e4de-d43f-449d-bf20-62332da1e661-kube-api-access-sg9ct\") pod \"rabbitmq-server-0\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " pod="openstack/rabbitmq-server-0" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.824552 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/03f3e4de-d43f-449d-bf20-62332da1e661-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " pod="openstack/rabbitmq-server-0" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.824652 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/03f3e4de-d43f-449d-bf20-62332da1e661-pod-info\") pod \"rabbitmq-server-0\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " pod="openstack/rabbitmq-server-0" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.824752 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/03f3e4de-d43f-449d-bf20-62332da1e661-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " pod="openstack/rabbitmq-server-0" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.824825 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/03f3e4de-d43f-449d-bf20-62332da1e661-server-conf\") pod \"rabbitmq-server-0\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " pod="openstack/rabbitmq-server-0" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.824921 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/03f3e4de-d43f-449d-bf20-62332da1e661-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " pod="openstack/rabbitmq-server-0" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.825006 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f7a2bf4b-1d28-4757-8001-1d0e7cb0b645\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7a2bf4b-1d28-4757-8001-1d0e7cb0b645\") pod \"rabbitmq-server-0\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " pod="openstack/rabbitmq-server-0" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.825108 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/03f3e4de-d43f-449d-bf20-62332da1e661-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " pod="openstack/rabbitmq-server-0" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.825235 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/03f3e4de-d43f-449d-bf20-62332da1e661-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " pod="openstack/rabbitmq-server-0" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.825537 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/03f3e4de-d43f-449d-bf20-62332da1e661-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " pod="openstack/rabbitmq-server-0" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.825598 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/03f3e4de-d43f-449d-bf20-62332da1e661-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " pod="openstack/rabbitmq-server-0" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.825803 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/03f3e4de-d43f-449d-bf20-62332da1e661-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " pod="openstack/rabbitmq-server-0" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.826383 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/03f3e4de-d43f-449d-bf20-62332da1e661-server-conf\") pod \"rabbitmq-server-0\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " pod="openstack/rabbitmq-server-0" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.829770 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/03f3e4de-d43f-449d-bf20-62332da1e661-pod-info\") pod \"rabbitmq-server-0\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " pod="openstack/rabbitmq-server-0" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.830242 5039 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.830316 5039 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f7a2bf4b-1d28-4757-8001-1d0e7cb0b645\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7a2bf4b-1d28-4757-8001-1d0e7cb0b645\") pod \"rabbitmq-server-0\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ac2f1d5ca3e543cb3845245028281cdaadefac18f4e6998e62f0daa5633ce93d/globalmount\"" pod="openstack/rabbitmq-server-0" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.830614 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/03f3e4de-d43f-449d-bf20-62332da1e661-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " pod="openstack/rabbitmq-server-0" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.830809 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/03f3e4de-d43f-449d-bf20-62332da1e661-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " pod="openstack/rabbitmq-server-0" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.848069 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sg9ct\" (UniqueName: \"kubernetes.io/projected/03f3e4de-d43f-449d-bf20-62332da1e661-kube-api-access-sg9ct\") pod \"rabbitmq-server-0\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " pod="openstack/rabbitmq-server-0" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.867948 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f7a2bf4b-1d28-4757-8001-1d0e7cb0b645\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7a2bf4b-1d28-4757-8001-1d0e7cb0b645\") pod \"rabbitmq-server-0\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " pod="openstack/rabbitmq-server-0" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.926988 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.930030 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.931946 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.932287 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.932397 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-4c5xq" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.932453 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.932736 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.953202 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.955357 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.028293 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9f8adc66-ad40-4c61-aaec-b1545735af43\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9f8adc66-ad40-4c61-aaec-b1545735af43\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.028361 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3d06d513-af8a-494d-9c55-10980cc0e84a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.028392 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3d06d513-af8a-494d-9c55-10980cc0e84a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.028418 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3d06d513-af8a-494d-9c55-10980cc0e84a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.028471 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3d06d513-af8a-494d-9c55-10980cc0e84a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.028509 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3d06d513-af8a-494d-9c55-10980cc0e84a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.028679 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8m84z\" (UniqueName: \"kubernetes.io/projected/3d06d513-af8a-494d-9c55-10980cc0e84a-kube-api-access-8m84z\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.028719 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3d06d513-af8a-494d-9c55-10980cc0e84a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.028752 5039 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3d06d513-af8a-494d-9c55-10980cc0e84a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.131572 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9f8adc66-ad40-4c61-aaec-b1545735af43\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9f8adc66-ad40-4c61-aaec-b1545735af43\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.132051 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3d06d513-af8a-494d-9c55-10980cc0e84a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.132090 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3d06d513-af8a-494d-9c55-10980cc0e84a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.132121 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3d06d513-af8a-494d-9c55-10980cc0e84a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.132212 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3d06d513-af8a-494d-9c55-10980cc0e84a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.132263 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3d06d513-af8a-494d-9c55-10980cc0e84a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.132308 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8m84z\" (UniqueName: \"kubernetes.io/projected/3d06d513-af8a-494d-9c55-10980cc0e84a-kube-api-access-8m84z\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.132354 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3d06d513-af8a-494d-9c55-10980cc0e84a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.132388 5039 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3d06d513-af8a-494d-9c55-10980cc0e84a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.133150 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3d06d513-af8a-494d-9c55-10980cc0e84a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.133769 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3d06d513-af8a-494d-9c55-10980cc0e84a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.134573 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3d06d513-af8a-494d-9c55-10980cc0e84a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.136846 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3d06d513-af8a-494d-9c55-10980cc0e84a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.136898 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3d06d513-af8a-494d-9c55-10980cc0e84a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.137619 5039 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.137666 5039 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9f8adc66-ad40-4c61-aaec-b1545735af43\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9f8adc66-ad40-4c61-aaec-b1545735af43\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7cf5d5edaa6a284483ff5c44eed0954ce6f7d9972fca3c37d987e5a01665bd04/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.138833 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3d06d513-af8a-494d-9c55-10980cc0e84a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.139348 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3d06d513-af8a-494d-9c55-10980cc0e84a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.165197 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8m84z\" (UniqueName: \"kubernetes.io/projected/3d06d513-af8a-494d-9c55-10980cc0e84a-kube-api-access-8m84z\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.181530 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9f8adc66-ad40-4c61-aaec-b1545735af43\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9f8adc66-ad40-4c61-aaec-b1545735af43\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.248317 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:23:20 crc kubenswrapper[5039]: W0130 14:23:20.399161 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03f3e4de_d43f_449d_bf20_62332da1e661.slice/crio-f1bdf66d342d456731e187e8378b26ea79bcdb9a067c72ad652b1a63fcf37d86 WatchSource:0}: Error finding container f1bdf66d342d456731e187e8378b26ea79bcdb9a067c72ad652b1a63fcf37d86: Status 404 returned error can't find the container with id f1bdf66d342d456731e187e8378b26ea79bcdb9a067c72ad652b1a63fcf37d86 Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.403255 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.468691 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"03f3e4de-d43f-449d-bf20-62332da1e661","Type":"ContainerStarted","Data":"f1bdf66d342d456731e187e8378b26ea79bcdb9a067c72ad652b1a63fcf37d86"} Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.471707 5039 generic.go:334] "Generic (PLEG): container finished" podID="9c4d2e20-0c88-42f6-a4cb-1c985b2158a5" containerID="a3d5390a06f39712f0f9e04d58e4ad45e512a722bf05fc2ca8a9b7de64dcbc0d" exitCode=0 Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.471803 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-rcdxm" event={"ID":"9c4d2e20-0c88-42f6-a4cb-1c985b2158a5","Type":"ContainerDied","Data":"a3d5390a06f39712f0f9e04d58e4ad45e512a722bf05fc2ca8a9b7de64dcbc0d"} Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.473614 5039 generic.go:334] "Generic (PLEG): container finished" podID="39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8" containerID="67b2b6167ec2b808b95d6d3a04dc268c75ffc8f478d2b8f9bd13d23488e7ebea" exitCode=0 Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.473653 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-x5wk5" event={"ID":"39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8","Type":"ContainerDied","Data":"67b2b6167ec2b808b95d6d3a04dc268c75ffc8f478d2b8f9bd13d23488e7ebea"} Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.667438 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 14:23:20 crc kubenswrapper[5039]: W0130 14:23:20.669785 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d06d513_af8a_494d_9c55_10980cc0e84a.slice/crio-41fb979575f8edd71eefc12deddbee2964a003cb26132d91bfd85dcc2de30803 WatchSource:0}: Error finding container 41fb979575f8edd71eefc12deddbee2964a003cb26132d91bfd85dcc2de30803: Status 404 returned error can't find the container with id 41fb979575f8edd71eefc12deddbee2964a003cb26132d91bfd85dcc2de30803 Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.808974 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.810173 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.812685 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-pm9vp" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.813161 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.813330 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.814998 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.816220 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.823002 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.946329 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-fe4b7e3b-72da-411a-a0b7-5e6047897616\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fe4b7e3b-72da-411a-a0b7-5e6047897616\") pod \"openstack-galera-0\" (UID: \"bf30efc1-9347-4142-91ce-e1d5cfdd6d4b\") " pod="openstack/openstack-galera-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.946396 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf30efc1-9347-4142-91ce-e1d5cfdd6d4b-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"bf30efc1-9347-4142-91ce-e1d5cfdd6d4b\") " pod="openstack/openstack-galera-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.946451 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/bf30efc1-9347-4142-91ce-e1d5cfdd6d4b-config-data-generated\") pod \"openstack-galera-0\" (UID: \"bf30efc1-9347-4142-91ce-e1d5cfdd6d4b\") " pod="openstack/openstack-galera-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.946496 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x62p\" (UniqueName: \"kubernetes.io/projected/bf30efc1-9347-4142-91ce-e1d5cfdd6d4b-kube-api-access-9x62p\") pod \"openstack-galera-0\" (UID: \"bf30efc1-9347-4142-91ce-e1d5cfdd6d4b\") " pod="openstack/openstack-galera-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.946520 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/bf30efc1-9347-4142-91ce-e1d5cfdd6d4b-config-data-default\") pod \"openstack-galera-0\" (UID: \"bf30efc1-9347-4142-91ce-e1d5cfdd6d4b\") " pod="openstack/openstack-galera-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.946544 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf30efc1-9347-4142-91ce-e1d5cfdd6d4b-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"bf30efc1-9347-4142-91ce-e1d5cfdd6d4b\") " pod="openstack/openstack-galera-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.946674 5039 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf30efc1-9347-4142-91ce-e1d5cfdd6d4b-operator-scripts\") pod \"openstack-galera-0\" (UID: \"bf30efc1-9347-4142-91ce-e1d5cfdd6d4b\") " pod="openstack/openstack-galera-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.946768 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bf30efc1-9347-4142-91ce-e1d5cfdd6d4b-kolla-config\") pod \"openstack-galera-0\" (UID: \"bf30efc1-9347-4142-91ce-e1d5cfdd6d4b\") " pod="openstack/openstack-galera-0" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.048581 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-fe4b7e3b-72da-411a-a0b7-5e6047897616\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fe4b7e3b-72da-411a-a0b7-5e6047897616\") pod \"openstack-galera-0\" (UID: \"bf30efc1-9347-4142-91ce-e1d5cfdd6d4b\") " pod="openstack/openstack-galera-0" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.048678 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf30efc1-9347-4142-91ce-e1d5cfdd6d4b-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"bf30efc1-9347-4142-91ce-e1d5cfdd6d4b\") " pod="openstack/openstack-galera-0" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.048728 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/bf30efc1-9347-4142-91ce-e1d5cfdd6d4b-config-data-generated\") pod \"openstack-galera-0\" (UID: \"bf30efc1-9347-4142-91ce-e1d5cfdd6d4b\") " pod="openstack/openstack-galera-0" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.048755 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9x62p\" (UniqueName: \"kubernetes.io/projected/bf30efc1-9347-4142-91ce-e1d5cfdd6d4b-kube-api-access-9x62p\") pod \"openstack-galera-0\" (UID: \"bf30efc1-9347-4142-91ce-e1d5cfdd6d4b\") " pod="openstack/openstack-galera-0" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.048811 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/bf30efc1-9347-4142-91ce-e1d5cfdd6d4b-config-data-default\") pod \"openstack-galera-0\" (UID: \"bf30efc1-9347-4142-91ce-e1d5cfdd6d4b\") " pod="openstack/openstack-galera-0" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.048835 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf30efc1-9347-4142-91ce-e1d5cfdd6d4b-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"bf30efc1-9347-4142-91ce-e1d5cfdd6d4b\") " pod="openstack/openstack-galera-0" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.048853 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf30efc1-9347-4142-91ce-e1d5cfdd6d4b-operator-scripts\") pod \"openstack-galera-0\" (UID: \"bf30efc1-9347-4142-91ce-e1d5cfdd6d4b\") " pod="openstack/openstack-galera-0" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.048905 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/bf30efc1-9347-4142-91ce-e1d5cfdd6d4b-kolla-config\") pod \"openstack-galera-0\" (UID: \"bf30efc1-9347-4142-91ce-e1d5cfdd6d4b\") " pod="openstack/openstack-galera-0" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.049763 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bf30efc1-9347-4142-91ce-e1d5cfdd6d4b-kolla-config\") pod \"openstack-galera-0\" (UID: \"bf30efc1-9347-4142-91ce-e1d5cfdd6d4b\") " pod="openstack/openstack-galera-0" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.049757 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/bf30efc1-9347-4142-91ce-e1d5cfdd6d4b-config-data-generated\") pod \"openstack-galera-0\" (UID: \"bf30efc1-9347-4142-91ce-e1d5cfdd6d4b\") " pod="openstack/openstack-galera-0" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.049987 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/bf30efc1-9347-4142-91ce-e1d5cfdd6d4b-config-data-default\") pod \"openstack-galera-0\" (UID: \"bf30efc1-9347-4142-91ce-e1d5cfdd6d4b\") " pod="openstack/openstack-galera-0" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.051136 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf30efc1-9347-4142-91ce-e1d5cfdd6d4b-operator-scripts\") pod \"openstack-galera-0\" (UID: \"bf30efc1-9347-4142-91ce-e1d5cfdd6d4b\") " pod="openstack/openstack-galera-0" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.054546 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf30efc1-9347-4142-91ce-e1d5cfdd6d4b-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"bf30efc1-9347-4142-91ce-e1d5cfdd6d4b\") " pod="openstack/openstack-galera-0" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.055102 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf30efc1-9347-4142-91ce-e1d5cfdd6d4b-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"bf30efc1-9347-4142-91ce-e1d5cfdd6d4b\") " pod="openstack/openstack-galera-0" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.055691 5039 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.055726 5039 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-fe4b7e3b-72da-411a-a0b7-5e6047897616\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fe4b7e3b-72da-411a-a0b7-5e6047897616\") pod \"openstack-galera-0\" (UID: \"bf30efc1-9347-4142-91ce-e1d5cfdd6d4b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/87a34719d43f7d0aece74f23afcf1eb1eede02c94cecb5350630d42184c71c2e/globalmount\"" pod="openstack/openstack-galera-0" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.070101 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x62p\" (UniqueName: \"kubernetes.io/projected/bf30efc1-9347-4142-91ce-e1d5cfdd6d4b-kube-api-access-9x62p\") pod \"openstack-galera-0\" (UID: \"bf30efc1-9347-4142-91ce-e1d5cfdd6d4b\") " pod="openstack/openstack-galera-0" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.127669 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.129306 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.132905 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-gxxtc" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.133266 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.143859 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.214897 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-fe4b7e3b-72da-411a-a0b7-5e6047897616\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fe4b7e3b-72da-411a-a0b7-5e6047897616\") pod \"openstack-galera-0\" (UID: \"bf30efc1-9347-4142-91ce-e1d5cfdd6d4b\") " pod="openstack/openstack-galera-0" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.251495 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w48gj\" (UniqueName: \"kubernetes.io/projected/54eb6d65-3d1f-4965-9438-a1c1c386747f-kube-api-access-w48gj\") pod \"memcached-0\" (UID: \"54eb6d65-3d1f-4965-9438-a1c1c386747f\") " pod="openstack/memcached-0" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.251556 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/54eb6d65-3d1f-4965-9438-a1c1c386747f-config-data\") pod \"memcached-0\" (UID: \"54eb6d65-3d1f-4965-9438-a1c1c386747f\") " pod="openstack/memcached-0" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.251582 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/54eb6d65-3d1f-4965-9438-a1c1c386747f-kolla-config\") pod \"memcached-0\" (UID: \"54eb6d65-3d1f-4965-9438-a1c1c386747f\") " pod="openstack/memcached-0" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.353457 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w48gj\" (UniqueName: 
\"kubernetes.io/projected/54eb6d65-3d1f-4965-9438-a1c1c386747f-kube-api-access-w48gj\") pod \"memcached-0\" (UID: \"54eb6d65-3d1f-4965-9438-a1c1c386747f\") " pod="openstack/memcached-0" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.353534 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/54eb6d65-3d1f-4965-9438-a1c1c386747f-config-data\") pod \"memcached-0\" (UID: \"54eb6d65-3d1f-4965-9438-a1c1c386747f\") " pod="openstack/memcached-0" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.353566 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/54eb6d65-3d1f-4965-9438-a1c1c386747f-kolla-config\") pod \"memcached-0\" (UID: \"54eb6d65-3d1f-4965-9438-a1c1c386747f\") " pod="openstack/memcached-0" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.354417 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/54eb6d65-3d1f-4965-9438-a1c1c386747f-kolla-config\") pod \"memcached-0\" (UID: \"54eb6d65-3d1f-4965-9438-a1c1c386747f\") " pod="openstack/memcached-0" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.354671 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/54eb6d65-3d1f-4965-9438-a1c1c386747f-config-data\") pod \"memcached-0\" (UID: \"54eb6d65-3d1f-4965-9438-a1c1c386747f\") " pod="openstack/memcached-0" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.369317 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w48gj\" (UniqueName: \"kubernetes.io/projected/54eb6d65-3d1f-4965-9438-a1c1c386747f-kube-api-access-w48gj\") pod \"memcached-0\" (UID: \"54eb6d65-3d1f-4965-9438-a1c1c386747f\") " pod="openstack/memcached-0" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.435516 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.455461 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.491021 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-x5wk5" event={"ID":"39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8","Type":"ContainerStarted","Data":"3240e8f082f7bbf7dbe77fad8804cfe4a24afeecc009b09a1700fa41da0ab8d1"} Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.491897 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-98ddfc8f-x5wk5" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.493374 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3d06d513-af8a-494d-9c55-10980cc0e84a","Type":"ContainerStarted","Data":"41fb979575f8edd71eefc12deddbee2964a003cb26132d91bfd85dcc2de30803"} Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.495504 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-rcdxm" event={"ID":"9c4d2e20-0c88-42f6-a4cb-1c985b2158a5","Type":"ContainerStarted","Data":"f756cdfb51438cdc4f1af5b368b968f876fe380c8fb3a0a6efc3b1db6069541a"} Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.496128 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d7b5456f5-rcdxm" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.519865 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-98ddfc8f-x5wk5" podStartSLOduration=3.519847562 podStartE2EDuration="3.519847562s" podCreationTimestamp="2026-01-30 14:23:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:23:21.517312014 +0000 UTC m=+4766.177993241" watchObservedRunningTime="2026-01-30 14:23:21.519847562 +0000 UTC m=+4766.180528809" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.544606 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d7b5456f5-rcdxm" podStartSLOduration=3.544590268 podStartE2EDuration="3.544590268s" podCreationTimestamp="2026-01-30 14:23:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:23:21.540061366 +0000 UTC m=+4766.200742623" watchObservedRunningTime="2026-01-30 14:23:21.544590268 +0000 UTC m=+4766.205271495" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.896864 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.940422 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 30 14:23:21 crc kubenswrapper[5039]: W0130 14:23:21.945859 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod54eb6d65_3d1f_4965_9438_a1c1c386747f.slice/crio-731dba357737d45c8a05bf7525d094bb0a8baf70395457a3102352bb315a799b WatchSource:0}: Error finding container 731dba357737d45c8a05bf7525d094bb0a8baf70395457a3102352bb315a799b: Status 404 returned error can't find the container with id 731dba357737d45c8a05bf7525d094bb0a8baf70395457a3102352bb315a799b Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.205547 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.207237 5039 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.212555 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.213375 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.215472 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-6bd4p" Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.215578 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.223931 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.375462 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-611e2d8c-fc32-4287-be1d-dd35a64370bf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-611e2d8c-fc32-4287-be1d-dd35a64370bf\") pod \"openstack-cell1-galera-0\" (UID: \"69580ad6-7c20-414c-8d6e-0aef5786bc7e\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.375529 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/69580ad6-7c20-414c-8d6e-0aef5786bc7e-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"69580ad6-7c20-414c-8d6e-0aef5786bc7e\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.375581 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69580ad6-7c20-414c-8d6e-0aef5786bc7e-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"69580ad6-7c20-414c-8d6e-0aef5786bc7e\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.375646 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/69580ad6-7c20-414c-8d6e-0aef5786bc7e-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"69580ad6-7c20-414c-8d6e-0aef5786bc7e\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.375679 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69580ad6-7c20-414c-8d6e-0aef5786bc7e-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"69580ad6-7c20-414c-8d6e-0aef5786bc7e\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.375738 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/69580ad6-7c20-414c-8d6e-0aef5786bc7e-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"69580ad6-7c20-414c-8d6e-0aef5786bc7e\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.375791 5039 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/69580ad6-7c20-414c-8d6e-0aef5786bc7e-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"69580ad6-7c20-414c-8d6e-0aef5786bc7e\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.375807 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjmt7\" (UniqueName: \"kubernetes.io/projected/69580ad6-7c20-414c-8d6e-0aef5786bc7e-kube-api-access-cjmt7\") pod \"openstack-cell1-galera-0\" (UID: \"69580ad6-7c20-414c-8d6e-0aef5786bc7e\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.476861 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/69580ad6-7c20-414c-8d6e-0aef5786bc7e-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"69580ad6-7c20-414c-8d6e-0aef5786bc7e\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.476910 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69580ad6-7c20-414c-8d6e-0aef5786bc7e-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"69580ad6-7c20-414c-8d6e-0aef5786bc7e\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.476931 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/69580ad6-7c20-414c-8d6e-0aef5786bc7e-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"69580ad6-7c20-414c-8d6e-0aef5786bc7e\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.476965 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/69580ad6-7c20-414c-8d6e-0aef5786bc7e-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"69580ad6-7c20-414c-8d6e-0aef5786bc7e\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.476982 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjmt7\" (UniqueName: \"kubernetes.io/projected/69580ad6-7c20-414c-8d6e-0aef5786bc7e-kube-api-access-cjmt7\") pod \"openstack-cell1-galera-0\" (UID: \"69580ad6-7c20-414c-8d6e-0aef5786bc7e\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.477071 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-611e2d8c-fc32-4287-be1d-dd35a64370bf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-611e2d8c-fc32-4287-be1d-dd35a64370bf\") pod \"openstack-cell1-galera-0\" (UID: \"69580ad6-7c20-414c-8d6e-0aef5786bc7e\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.477103 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/69580ad6-7c20-414c-8d6e-0aef5786bc7e-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"69580ad6-7c20-414c-8d6e-0aef5786bc7e\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.477140 5039 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69580ad6-7c20-414c-8d6e-0aef5786bc7e-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"69580ad6-7c20-414c-8d6e-0aef5786bc7e\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.477973 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/69580ad6-7c20-414c-8d6e-0aef5786bc7e-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"69580ad6-7c20-414c-8d6e-0aef5786bc7e\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.478465 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69580ad6-7c20-414c-8d6e-0aef5786bc7e-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"69580ad6-7c20-414c-8d6e-0aef5786bc7e\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.478549 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/69580ad6-7c20-414c-8d6e-0aef5786bc7e-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"69580ad6-7c20-414c-8d6e-0aef5786bc7e\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.478897 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/69580ad6-7c20-414c-8d6e-0aef5786bc7e-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"69580ad6-7c20-414c-8d6e-0aef5786bc7e\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.481405 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/69580ad6-7c20-414c-8d6e-0aef5786bc7e-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"69580ad6-7c20-414c-8d6e-0aef5786bc7e\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.481499 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69580ad6-7c20-414c-8d6e-0aef5786bc7e-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"69580ad6-7c20-414c-8d6e-0aef5786bc7e\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.486040 5039 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.486072 5039 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-611e2d8c-fc32-4287-be1d-dd35a64370bf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-611e2d8c-fc32-4287-be1d-dd35a64370bf\") pod \"openstack-cell1-galera-0\" (UID: \"69580ad6-7c20-414c-8d6e-0aef5786bc7e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3697e47cd28b530533281b4e565c9018fb36f793eb2bbb6bf3520107516295a7/globalmount\"" pod="openstack/openstack-cell1-galera-0" Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.492476 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjmt7\" (UniqueName: \"kubernetes.io/projected/69580ad6-7c20-414c-8d6e-0aef5786bc7e-kube-api-access-cjmt7\") pod \"openstack-cell1-galera-0\" (UID: \"69580ad6-7c20-414c-8d6e-0aef5786bc7e\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.507144 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"bf30efc1-9347-4142-91ce-e1d5cfdd6d4b","Type":"ContainerStarted","Data":"c3d8f3aa8dfd3e89c19c9cd4795901c38d28997e9fed579f35919acda9adbe77"} Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.507213 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"bf30efc1-9347-4142-91ce-e1d5cfdd6d4b","Type":"ContainerStarted","Data":"bfdb785a985cdd9783714339c4fcb5626c28cb1f140d77ae592d05bb5e14c3e1"} Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.509903 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"03f3e4de-d43f-449d-bf20-62332da1e661","Type":"ContainerStarted","Data":"b9087bb5432ef9e2c4738cdd492fce770bed41670ddc3bea9012bd34660f041a"} Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.512214 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-611e2d8c-fc32-4287-be1d-dd35a64370bf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-611e2d8c-fc32-4287-be1d-dd35a64370bf\") pod \"openstack-cell1-galera-0\" (UID: \"69580ad6-7c20-414c-8d6e-0aef5786bc7e\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.512690 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"54eb6d65-3d1f-4965-9438-a1c1c386747f","Type":"ContainerStarted","Data":"3397db9c0e9f6a09f03b6df19ac3cd78a2b56587ad55f631276d48c4fc12c55d"} Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.512736 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"54eb6d65-3d1f-4965-9438-a1c1c386747f","Type":"ContainerStarted","Data":"731dba357737d45c8a05bf7525d094bb0a8baf70395457a3102352bb315a799b"} Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.512783 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.514708 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3d06d513-af8a-494d-9c55-10980cc0e84a","Type":"ContainerStarted","Data":"b2968f21addf22060c177a7348b009cdf0a4051fa82448bb49e8eeacb7c0fcfd"} Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.523633 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.587837 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=1.587814534 podStartE2EDuration="1.587814534s" podCreationTimestamp="2026-01-30 14:23:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:23:22.572371939 +0000 UTC m=+4767.233053166" watchObservedRunningTime="2026-01-30 14:23:22.587814534 +0000 UTC m=+4767.248495781" Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.941616 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 14:23:23 crc kubenswrapper[5039]: I0130 14:23:23.526427 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"69580ad6-7c20-414c-8d6e-0aef5786bc7e","Type":"ContainerStarted","Data":"54db76228a9672bed96a5bc7aa4817d0e4a36a83d4967e7313202e60b356383c"} Jan 30 14:23:23 crc kubenswrapper[5039]: I0130 14:23:23.526947 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"69580ad6-7c20-414c-8d6e-0aef5786bc7e","Type":"ContainerStarted","Data":"00ca53fead4c7ca9c8d61140f51e5e98b0421b8ab12f45dc72e4e47eab851fe1"} Jan 30 14:23:25 crc kubenswrapper[5039]: I0130 14:23:25.542923 5039 generic.go:334] "Generic (PLEG): container finished" podID="bf30efc1-9347-4142-91ce-e1d5cfdd6d4b" containerID="c3d8f3aa8dfd3e89c19c9cd4795901c38d28997e9fed579f35919acda9adbe77" exitCode=0 Jan 30 14:23:25 crc kubenswrapper[5039]: I0130 14:23:25.543044 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"bf30efc1-9347-4142-91ce-e1d5cfdd6d4b","Type":"ContainerDied","Data":"c3d8f3aa8dfd3e89c19c9cd4795901c38d28997e9fed579f35919acda9adbe77"} Jan 30 14:23:26 crc kubenswrapper[5039]: I0130 14:23:26.550963 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"bf30efc1-9347-4142-91ce-e1d5cfdd6d4b","Type":"ContainerStarted","Data":"eacc9f26c18bd6e6f2ca97362292927e14136e1dc47e720b01563fb8d41cbc71"} Jan 30 14:23:27 crc kubenswrapper[5039]: I0130 14:23:27.561059 5039 generic.go:334] "Generic (PLEG): container finished" podID="69580ad6-7c20-414c-8d6e-0aef5786bc7e" containerID="54db76228a9672bed96a5bc7aa4817d0e4a36a83d4967e7313202e60b356383c" exitCode=0 Jan 30 14:23:27 crc kubenswrapper[5039]: I0130 14:23:27.561144 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"69580ad6-7c20-414c-8d6e-0aef5786bc7e","Type":"ContainerDied","Data":"54db76228a9672bed96a5bc7aa4817d0e4a36a83d4967e7313202e60b356383c"} Jan 30 14:23:27 crc kubenswrapper[5039]: I0130 14:23:27.592314 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=8.59217509 podStartE2EDuration="8.59217509s" podCreationTimestamp="2026-01-30 14:23:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:23:26.578501148 +0000 UTC m=+4771.239182395" watchObservedRunningTime="2026-01-30 14:23:27.59217509 +0000 UTC m=+4772.252856337" Jan 30 14:23:28 crc kubenswrapper[5039]: I0130 14:23:28.570256 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"69580ad6-7c20-414c-8d6e-0aef5786bc7e","Type":"ContainerStarted","Data":"670e5c0d8b3f93f6c6189bd310e9e89df139a248e6f681c12376fcf110b7893d"} Jan 30 14:23:28 crc kubenswrapper[5039]: I0130 14:23:28.591201 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=7.5911818669999995 podStartE2EDuration="7.591181867s" podCreationTimestamp="2026-01-30 14:23:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:23:28.589941464 +0000 UTC m=+4773.250622691" watchObservedRunningTime="2026-01-30 14:23:28.591181867 +0000 UTC m=+4773.251863094" Jan 30 14:23:28 crc kubenswrapper[5039]: I0130 14:23:28.826276 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d7b5456f5-rcdxm" Jan 30 14:23:29 crc kubenswrapper[5039]: I0130 14:23:29.075357 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-98ddfc8f-x5wk5" Jan 30 14:23:29 crc kubenswrapper[5039]: I0130 14:23:29.122449 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-rcdxm"] Jan 30 14:23:29 crc kubenswrapper[5039]: I0130 14:23:29.576274 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5d7b5456f5-rcdxm" podUID="9c4d2e20-0c88-42f6-a4cb-1c985b2158a5" containerName="dnsmasq-dns" containerID="cri-o://f756cdfb51438cdc4f1af5b368b968f876fe380c8fb3a0a6efc3b1db6069541a" gracePeriod=10 Jan 30 14:23:30 crc kubenswrapper[5039]: I0130 14:23:30.063468 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d7b5456f5-rcdxm" Jan 30 14:23:30 crc kubenswrapper[5039]: I0130 14:23:30.204826 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjtv2\" (UniqueName: \"kubernetes.io/projected/9c4d2e20-0c88-42f6-a4cb-1c985b2158a5-kube-api-access-xjtv2\") pod \"9c4d2e20-0c88-42f6-a4cb-1c985b2158a5\" (UID: \"9c4d2e20-0c88-42f6-a4cb-1c985b2158a5\") " Jan 30 14:23:30 crc kubenswrapper[5039]: I0130 14:23:30.204943 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c4d2e20-0c88-42f6-a4cb-1c985b2158a5-config\") pod \"9c4d2e20-0c88-42f6-a4cb-1c985b2158a5\" (UID: \"9c4d2e20-0c88-42f6-a4cb-1c985b2158a5\") " Jan 30 14:23:30 crc kubenswrapper[5039]: I0130 14:23:30.205085 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9c4d2e20-0c88-42f6-a4cb-1c985b2158a5-dns-svc\") pod \"9c4d2e20-0c88-42f6-a4cb-1c985b2158a5\" (UID: \"9c4d2e20-0c88-42f6-a4cb-1c985b2158a5\") " Jan 30 14:23:30 crc kubenswrapper[5039]: I0130 14:23:30.211672 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c4d2e20-0c88-42f6-a4cb-1c985b2158a5-kube-api-access-xjtv2" (OuterVolumeSpecName: "kube-api-access-xjtv2") pod "9c4d2e20-0c88-42f6-a4cb-1c985b2158a5" (UID: "9c4d2e20-0c88-42f6-a4cb-1c985b2158a5"). InnerVolumeSpecName "kube-api-access-xjtv2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:23:30 crc kubenswrapper[5039]: I0130 14:23:30.245525 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c4d2e20-0c88-42f6-a4cb-1c985b2158a5-config" (OuterVolumeSpecName: "config") pod "9c4d2e20-0c88-42f6-a4cb-1c985b2158a5" (UID: "9c4d2e20-0c88-42f6-a4cb-1c985b2158a5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:23:30 crc kubenswrapper[5039]: I0130 14:23:30.261485 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c4d2e20-0c88-42f6-a4cb-1c985b2158a5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9c4d2e20-0c88-42f6-a4cb-1c985b2158a5" (UID: "9c4d2e20-0c88-42f6-a4cb-1c985b2158a5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:23:30 crc kubenswrapper[5039]: I0130 14:23:30.307642 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjtv2\" (UniqueName: \"kubernetes.io/projected/9c4d2e20-0c88-42f6-a4cb-1c985b2158a5-kube-api-access-xjtv2\") on node \"crc\" DevicePath \"\"" Jan 30 14:23:30 crc kubenswrapper[5039]: I0130 14:23:30.307679 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c4d2e20-0c88-42f6-a4cb-1c985b2158a5-config\") on node \"crc\" DevicePath \"\"" Jan 30 14:23:30 crc kubenswrapper[5039]: I0130 14:23:30.307689 5039 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9c4d2e20-0c88-42f6-a4cb-1c985b2158a5-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 14:23:30 crc kubenswrapper[5039]: I0130 14:23:30.598782 5039 generic.go:334] "Generic (PLEG): container finished" podID="9c4d2e20-0c88-42f6-a4cb-1c985b2158a5" containerID="f756cdfb51438cdc4f1af5b368b968f876fe380c8fb3a0a6efc3b1db6069541a" exitCode=0 Jan 30 14:23:30 crc kubenswrapper[5039]: I0130 14:23:30.598852 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-rcdxm" event={"ID":"9c4d2e20-0c88-42f6-a4cb-1c985b2158a5","Type":"ContainerDied","Data":"f756cdfb51438cdc4f1af5b368b968f876fe380c8fb3a0a6efc3b1db6069541a"} Jan 30 14:23:30 crc kubenswrapper[5039]: I0130 14:23:30.599167 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-rcdxm" event={"ID":"9c4d2e20-0c88-42f6-a4cb-1c985b2158a5","Type":"ContainerDied","Data":"8fb19c7a7b45ea0c495bfa5a39696f246bcd26fe60aadca6094149be7b80370f"} Jan 30 14:23:30 crc kubenswrapper[5039]: I0130 14:23:30.599226 5039 scope.go:117] "RemoveContainer" containerID="f756cdfb51438cdc4f1af5b368b968f876fe380c8fb3a0a6efc3b1db6069541a" Jan 30 14:23:30 crc kubenswrapper[5039]: I0130 14:23:30.598868 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d7b5456f5-rcdxm" Jan 30 14:23:30 crc kubenswrapper[5039]: I0130 14:23:30.628741 5039 scope.go:117] "RemoveContainer" containerID="a3d5390a06f39712f0f9e04d58e4ad45e512a722bf05fc2ca8a9b7de64dcbc0d" Jan 30 14:23:30 crc kubenswrapper[5039]: I0130 14:23:30.630970 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-rcdxm"] Jan 30 14:23:30 crc kubenswrapper[5039]: I0130 14:23:30.637110 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-rcdxm"] Jan 30 14:23:30 crc kubenswrapper[5039]: I0130 14:23:30.644919 5039 scope.go:117] "RemoveContainer" containerID="f756cdfb51438cdc4f1af5b368b968f876fe380c8fb3a0a6efc3b1db6069541a" Jan 30 14:23:30 crc kubenswrapper[5039]: E0130 14:23:30.645329 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f756cdfb51438cdc4f1af5b368b968f876fe380c8fb3a0a6efc3b1db6069541a\": container with ID starting with f756cdfb51438cdc4f1af5b368b968f876fe380c8fb3a0a6efc3b1db6069541a not found: ID does not exist" containerID="f756cdfb51438cdc4f1af5b368b968f876fe380c8fb3a0a6efc3b1db6069541a" Jan 30 14:23:30 crc kubenswrapper[5039]: I0130 14:23:30.645368 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f756cdfb51438cdc4f1af5b368b968f876fe380c8fb3a0a6efc3b1db6069541a"} err="failed to get container status \"f756cdfb51438cdc4f1af5b368b968f876fe380c8fb3a0a6efc3b1db6069541a\": rpc error: code = NotFound desc = could not find container \"f756cdfb51438cdc4f1af5b368b968f876fe380c8fb3a0a6efc3b1db6069541a\": container with ID starting with f756cdfb51438cdc4f1af5b368b968f876fe380c8fb3a0a6efc3b1db6069541a not found: ID does not exist" Jan 30 14:23:30 crc kubenswrapper[5039]: I0130 14:23:30.645394 5039 scope.go:117] "RemoveContainer" containerID="a3d5390a06f39712f0f9e04d58e4ad45e512a722bf05fc2ca8a9b7de64dcbc0d" Jan 30 14:23:30 crc kubenswrapper[5039]: E0130 14:23:30.645674 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3d5390a06f39712f0f9e04d58e4ad45e512a722bf05fc2ca8a9b7de64dcbc0d\": container with ID starting with a3d5390a06f39712f0f9e04d58e4ad45e512a722bf05fc2ca8a9b7de64dcbc0d not found: ID does not exist" containerID="a3d5390a06f39712f0f9e04d58e4ad45e512a722bf05fc2ca8a9b7de64dcbc0d" Jan 30 14:23:30 crc kubenswrapper[5039]: I0130 14:23:30.645696 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3d5390a06f39712f0f9e04d58e4ad45e512a722bf05fc2ca8a9b7de64dcbc0d"} err="failed to get container status \"a3d5390a06f39712f0f9e04d58e4ad45e512a722bf05fc2ca8a9b7de64dcbc0d\": rpc error: code = NotFound desc = could not find container \"a3d5390a06f39712f0f9e04d58e4ad45e512a722bf05fc2ca8a9b7de64dcbc0d\": container with ID starting with a3d5390a06f39712f0f9e04d58e4ad45e512a722bf05fc2ca8a9b7de64dcbc0d not found: ID does not exist" Jan 30 14:23:31 crc kubenswrapper[5039]: I0130 14:23:31.435890 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 30 14:23:31 crc kubenswrapper[5039]: I0130 14:23:31.435967 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 30 14:23:31 crc kubenswrapper[5039]: I0130 14:23:31.456839 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/memcached-0" Jan 30 14:23:32 crc kubenswrapper[5039]: I0130 14:23:32.093198 5039 scope.go:117] "RemoveContainer" containerID="aa77e5b6320d0bb2b1371d31dd99833cc631f1ca3770ff63e41851c68aa88acc" Jan 30 14:23:32 crc kubenswrapper[5039]: E0130 14:23:32.093843 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:23:32 crc kubenswrapper[5039]: I0130 14:23:32.104850 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c4d2e20-0c88-42f6-a4cb-1c985b2158a5" path="/var/lib/kubelet/pods/9c4d2e20-0c88-42f6-a4cb-1c985b2158a5/volumes" Jan 30 14:23:32 crc kubenswrapper[5039]: I0130 14:23:32.523816 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 30 14:23:32 crc kubenswrapper[5039]: I0130 14:23:32.524339 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 30 14:23:32 crc kubenswrapper[5039]: I0130 14:23:32.597849 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 30 14:23:32 crc kubenswrapper[5039]: I0130 14:23:32.676210 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 30 14:23:33 crc kubenswrapper[5039]: I0130 14:23:33.744180 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 30 14:23:33 crc kubenswrapper[5039]: I0130 14:23:33.828555 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 30 14:23:39 crc kubenswrapper[5039]: I0130 14:23:39.781462 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-lnmrb"] Jan 30 14:23:39 crc kubenswrapper[5039]: E0130 14:23:39.782391 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c4d2e20-0c88-42f6-a4cb-1c985b2158a5" containerName="dnsmasq-dns" Jan 30 14:23:39 crc kubenswrapper[5039]: I0130 14:23:39.782410 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c4d2e20-0c88-42f6-a4cb-1c985b2158a5" containerName="dnsmasq-dns" Jan 30 14:23:39 crc kubenswrapper[5039]: E0130 14:23:39.782430 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c4d2e20-0c88-42f6-a4cb-1c985b2158a5" containerName="init" Jan 30 14:23:39 crc kubenswrapper[5039]: I0130 14:23:39.782438 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c4d2e20-0c88-42f6-a4cb-1c985b2158a5" containerName="init" Jan 30 14:23:39 crc kubenswrapper[5039]: I0130 14:23:39.782656 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c4d2e20-0c88-42f6-a4cb-1c985b2158a5" containerName="dnsmasq-dns" Jan 30 14:23:39 crc kubenswrapper[5039]: I0130 14:23:39.783160 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-lnmrb" Jan 30 14:23:39 crc kubenswrapper[5039]: I0130 14:23:39.786443 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 30 14:23:39 crc kubenswrapper[5039]: I0130 14:23:39.795539 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-lnmrb"] Jan 30 14:23:39 crc kubenswrapper[5039]: I0130 14:23:39.959293 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbnwb\" (UniqueName: \"kubernetes.io/projected/1be05350-b9ae-4e79-8638-0eb0204460f6-kube-api-access-vbnwb\") pod \"root-account-create-update-lnmrb\" (UID: \"1be05350-b9ae-4e79-8638-0eb0204460f6\") " pod="openstack/root-account-create-update-lnmrb" Jan 30 14:23:39 crc kubenswrapper[5039]: I0130 14:23:39.959405 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1be05350-b9ae-4e79-8638-0eb0204460f6-operator-scripts\") pod \"root-account-create-update-lnmrb\" (UID: \"1be05350-b9ae-4e79-8638-0eb0204460f6\") " pod="openstack/root-account-create-update-lnmrb" Jan 30 14:23:40 crc kubenswrapper[5039]: I0130 14:23:40.061163 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1be05350-b9ae-4e79-8638-0eb0204460f6-operator-scripts\") pod \"root-account-create-update-lnmrb\" (UID: \"1be05350-b9ae-4e79-8638-0eb0204460f6\") " pod="openstack/root-account-create-update-lnmrb" Jan 30 14:23:40 crc kubenswrapper[5039]: I0130 14:23:40.061323 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbnwb\" (UniqueName: \"kubernetes.io/projected/1be05350-b9ae-4e79-8638-0eb0204460f6-kube-api-access-vbnwb\") pod \"root-account-create-update-lnmrb\" (UID: \"1be05350-b9ae-4e79-8638-0eb0204460f6\") " pod="openstack/root-account-create-update-lnmrb" Jan 30 14:23:40 crc kubenswrapper[5039]: I0130 14:23:40.062303 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1be05350-b9ae-4e79-8638-0eb0204460f6-operator-scripts\") pod \"root-account-create-update-lnmrb\" (UID: \"1be05350-b9ae-4e79-8638-0eb0204460f6\") " pod="openstack/root-account-create-update-lnmrb" Jan 30 14:23:40 crc kubenswrapper[5039]: I0130 14:23:40.083860 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbnwb\" (UniqueName: \"kubernetes.io/projected/1be05350-b9ae-4e79-8638-0eb0204460f6-kube-api-access-vbnwb\") pod \"root-account-create-update-lnmrb\" (UID: \"1be05350-b9ae-4e79-8638-0eb0204460f6\") " pod="openstack/root-account-create-update-lnmrb" Jan 30 14:23:40 crc kubenswrapper[5039]: I0130 14:23:40.098381 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-lnmrb" Jan 30 14:23:40 crc kubenswrapper[5039]: I0130 14:23:40.564173 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-lnmrb"] Jan 30 14:23:40 crc kubenswrapper[5039]: W0130 14:23:40.565376 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1be05350_b9ae_4e79_8638_0eb0204460f6.slice/crio-0a1f509a8e30d9578ba8e4f223764a61a306c5a3202f1a93a07aa40fdadc215b WatchSource:0}: Error finding container 0a1f509a8e30d9578ba8e4f223764a61a306c5a3202f1a93a07aa40fdadc215b: Status 404 returned error can't find the container with id 0a1f509a8e30d9578ba8e4f223764a61a306c5a3202f1a93a07aa40fdadc215b Jan 30 14:23:40 crc kubenswrapper[5039]: I0130 14:23:40.679522 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lnmrb" event={"ID":"1be05350-b9ae-4e79-8638-0eb0204460f6","Type":"ContainerStarted","Data":"0a1f509a8e30d9578ba8e4f223764a61a306c5a3202f1a93a07aa40fdadc215b"} Jan 30 14:23:41 crc kubenswrapper[5039]: I0130 14:23:41.687424 5039 generic.go:334] "Generic (PLEG): container finished" podID="1be05350-b9ae-4e79-8638-0eb0204460f6" containerID="6996c9c1e0cbcbe6b3870693e70dfa42b245000924f7e0c9e4a6804acd8a7e7f" exitCode=0 Jan 30 14:23:41 crc kubenswrapper[5039]: I0130 14:23:41.687515 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lnmrb" event={"ID":"1be05350-b9ae-4e79-8638-0eb0204460f6","Type":"ContainerDied","Data":"6996c9c1e0cbcbe6b3870693e70dfa42b245000924f7e0c9e4a6804acd8a7e7f"} Jan 30 14:23:42 crc kubenswrapper[5039]: I0130 14:23:42.966156 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-lnmrb" Jan 30 14:23:43 crc kubenswrapper[5039]: I0130 14:23:43.107621 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbnwb\" (UniqueName: \"kubernetes.io/projected/1be05350-b9ae-4e79-8638-0eb0204460f6-kube-api-access-vbnwb\") pod \"1be05350-b9ae-4e79-8638-0eb0204460f6\" (UID: \"1be05350-b9ae-4e79-8638-0eb0204460f6\") " Jan 30 14:23:43 crc kubenswrapper[5039]: I0130 14:23:43.107725 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1be05350-b9ae-4e79-8638-0eb0204460f6-operator-scripts\") pod \"1be05350-b9ae-4e79-8638-0eb0204460f6\" (UID: \"1be05350-b9ae-4e79-8638-0eb0204460f6\") " Jan 30 14:23:43 crc kubenswrapper[5039]: I0130 14:23:43.108439 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1be05350-b9ae-4e79-8638-0eb0204460f6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1be05350-b9ae-4e79-8638-0eb0204460f6" (UID: "1be05350-b9ae-4e79-8638-0eb0204460f6"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:23:43 crc kubenswrapper[5039]: I0130 14:23:43.108708 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1be05350-b9ae-4e79-8638-0eb0204460f6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:23:43 crc kubenswrapper[5039]: I0130 14:23:43.112739 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1be05350-b9ae-4e79-8638-0eb0204460f6-kube-api-access-vbnwb" (OuterVolumeSpecName: "kube-api-access-vbnwb") pod "1be05350-b9ae-4e79-8638-0eb0204460f6" (UID: "1be05350-b9ae-4e79-8638-0eb0204460f6"). InnerVolumeSpecName "kube-api-access-vbnwb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:23:43 crc kubenswrapper[5039]: I0130 14:23:43.210556 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbnwb\" (UniqueName: \"kubernetes.io/projected/1be05350-b9ae-4e79-8638-0eb0204460f6-kube-api-access-vbnwb\") on node \"crc\" DevicePath \"\"" Jan 30 14:23:43 crc kubenswrapper[5039]: I0130 14:23:43.711605 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lnmrb" event={"ID":"1be05350-b9ae-4e79-8638-0eb0204460f6","Type":"ContainerDied","Data":"0a1f509a8e30d9578ba8e4f223764a61a306c5a3202f1a93a07aa40fdadc215b"} Jan 30 14:23:43 crc kubenswrapper[5039]: I0130 14:23:43.711652 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a1f509a8e30d9578ba8e4f223764a61a306c5a3202f1a93a07aa40fdadc215b" Jan 30 14:23:43 crc kubenswrapper[5039]: I0130 14:23:43.711670 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-lnmrb" Jan 30 14:23:46 crc kubenswrapper[5039]: I0130 14:23:46.178576 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-lnmrb"] Jan 30 14:23:46 crc kubenswrapper[5039]: I0130 14:23:46.188271 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-lnmrb"] Jan 30 14:23:47 crc kubenswrapper[5039]: I0130 14:23:47.093336 5039 scope.go:117] "RemoveContainer" containerID="aa77e5b6320d0bb2b1371d31dd99833cc631f1ca3770ff63e41851c68aa88acc" Jan 30 14:23:47 crc kubenswrapper[5039]: E0130 14:23:47.093540 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:23:48 crc kubenswrapper[5039]: I0130 14:23:48.105042 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1be05350-b9ae-4e79-8638-0eb0204460f6" path="/var/lib/kubelet/pods/1be05350-b9ae-4e79-8638-0eb0204460f6/volumes" Jan 30 14:23:51 crc kubenswrapper[5039]: I0130 14:23:51.178134 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-c2gvh"] Jan 30 14:23:51 crc kubenswrapper[5039]: E0130 14:23:51.178767 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1be05350-b9ae-4e79-8638-0eb0204460f6" containerName="mariadb-account-create-update" Jan 30 14:23:51 crc kubenswrapper[5039]: I0130 14:23:51.178783 5039 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="1be05350-b9ae-4e79-8638-0eb0204460f6" containerName="mariadb-account-create-update" Jan 30 14:23:51 crc kubenswrapper[5039]: I0130 14:23:51.178994 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="1be05350-b9ae-4e79-8638-0eb0204460f6" containerName="mariadb-account-create-update" Jan 30 14:23:51 crc kubenswrapper[5039]: I0130 14:23:51.179603 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-c2gvh" Jan 30 14:23:51 crc kubenswrapper[5039]: I0130 14:23:51.182189 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 30 14:23:51 crc kubenswrapper[5039]: I0130 14:23:51.186684 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-c2gvh"] Jan 30 14:23:51 crc kubenswrapper[5039]: I0130 14:23:51.239877 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwbfg\" (UniqueName: \"kubernetes.io/projected/5a71a921-7519-4576-8fa4-c4d16d4a1cde-kube-api-access-gwbfg\") pod \"root-account-create-update-c2gvh\" (UID: \"5a71a921-7519-4576-8fa4-c4d16d4a1cde\") " pod="openstack/root-account-create-update-c2gvh" Jan 30 14:23:51 crc kubenswrapper[5039]: I0130 14:23:51.240003 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a71a921-7519-4576-8fa4-c4d16d4a1cde-operator-scripts\") pod \"root-account-create-update-c2gvh\" (UID: \"5a71a921-7519-4576-8fa4-c4d16d4a1cde\") " pod="openstack/root-account-create-update-c2gvh" Jan 30 14:23:51 crc kubenswrapper[5039]: I0130 14:23:51.340992 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwbfg\" (UniqueName: \"kubernetes.io/projected/5a71a921-7519-4576-8fa4-c4d16d4a1cde-kube-api-access-gwbfg\") pod \"root-account-create-update-c2gvh\" (UID: \"5a71a921-7519-4576-8fa4-c4d16d4a1cde\") " pod="openstack/root-account-create-update-c2gvh" Jan 30 14:23:51 crc kubenswrapper[5039]: I0130 14:23:51.341153 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a71a921-7519-4576-8fa4-c4d16d4a1cde-operator-scripts\") pod \"root-account-create-update-c2gvh\" (UID: \"5a71a921-7519-4576-8fa4-c4d16d4a1cde\") " pod="openstack/root-account-create-update-c2gvh" Jan 30 14:23:51 crc kubenswrapper[5039]: I0130 14:23:51.341956 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a71a921-7519-4576-8fa4-c4d16d4a1cde-operator-scripts\") pod \"root-account-create-update-c2gvh\" (UID: \"5a71a921-7519-4576-8fa4-c4d16d4a1cde\") " pod="openstack/root-account-create-update-c2gvh" Jan 30 14:23:51 crc kubenswrapper[5039]: I0130 14:23:51.363077 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwbfg\" (UniqueName: \"kubernetes.io/projected/5a71a921-7519-4576-8fa4-c4d16d4a1cde-kube-api-access-gwbfg\") pod \"root-account-create-update-c2gvh\" (UID: \"5a71a921-7519-4576-8fa4-c4d16d4a1cde\") " pod="openstack/root-account-create-update-c2gvh" Jan 30 14:23:51 crc kubenswrapper[5039]: I0130 14:23:51.504494 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-c2gvh" Jan 30 14:23:51 crc kubenswrapper[5039]: I0130 14:23:51.927308 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-c2gvh"] Jan 30 14:23:52 crc kubenswrapper[5039]: I0130 14:23:52.777317 5039 generic.go:334] "Generic (PLEG): container finished" podID="5a71a921-7519-4576-8fa4-c4d16d4a1cde" containerID="8a3a3be62caad1f329e4ff022b81d0e397bf38068ccbc4cc73edc4f119d23f95" exitCode=0 Jan 30 14:23:52 crc kubenswrapper[5039]: I0130 14:23:52.777426 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-c2gvh" event={"ID":"5a71a921-7519-4576-8fa4-c4d16d4a1cde","Type":"ContainerDied","Data":"8a3a3be62caad1f329e4ff022b81d0e397bf38068ccbc4cc73edc4f119d23f95"} Jan 30 14:23:52 crc kubenswrapper[5039]: I0130 14:23:52.777532 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-c2gvh" event={"ID":"5a71a921-7519-4576-8fa4-c4d16d4a1cde","Type":"ContainerStarted","Data":"a00a55fabe80463d02ad3797054433b590cabba6a3e1cc475e3c4a6301cb843f"} Jan 30 14:23:53 crc kubenswrapper[5039]: I0130 14:23:53.786660 5039 generic.go:334] "Generic (PLEG): container finished" podID="03f3e4de-d43f-449d-bf20-62332da1e661" containerID="b9087bb5432ef9e2c4738cdd492fce770bed41670ddc3bea9012bd34660f041a" exitCode=0 Jan 30 14:23:53 crc kubenswrapper[5039]: I0130 14:23:53.786880 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"03f3e4de-d43f-449d-bf20-62332da1e661","Type":"ContainerDied","Data":"b9087bb5432ef9e2c4738cdd492fce770bed41670ddc3bea9012bd34660f041a"} Jan 30 14:23:54 crc kubenswrapper[5039]: I0130 14:23:54.082689 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-c2gvh" Jan 30 14:23:54 crc kubenswrapper[5039]: I0130 14:23:54.186971 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwbfg\" (UniqueName: \"kubernetes.io/projected/5a71a921-7519-4576-8fa4-c4d16d4a1cde-kube-api-access-gwbfg\") pod \"5a71a921-7519-4576-8fa4-c4d16d4a1cde\" (UID: \"5a71a921-7519-4576-8fa4-c4d16d4a1cde\") " Jan 30 14:23:54 crc kubenswrapper[5039]: I0130 14:23:54.187302 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a71a921-7519-4576-8fa4-c4d16d4a1cde-operator-scripts\") pod \"5a71a921-7519-4576-8fa4-c4d16d4a1cde\" (UID: \"5a71a921-7519-4576-8fa4-c4d16d4a1cde\") " Jan 30 14:23:54 crc kubenswrapper[5039]: I0130 14:23:54.188421 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a71a921-7519-4576-8fa4-c4d16d4a1cde-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5a71a921-7519-4576-8fa4-c4d16d4a1cde" (UID: "5a71a921-7519-4576-8fa4-c4d16d4a1cde"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:23:54 crc kubenswrapper[5039]: I0130 14:23:54.194245 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a71a921-7519-4576-8fa4-c4d16d4a1cde-kube-api-access-gwbfg" (OuterVolumeSpecName: "kube-api-access-gwbfg") pod "5a71a921-7519-4576-8fa4-c4d16d4a1cde" (UID: "5a71a921-7519-4576-8fa4-c4d16d4a1cde"). InnerVolumeSpecName "kube-api-access-gwbfg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:23:54 crc kubenswrapper[5039]: I0130 14:23:54.289355 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwbfg\" (UniqueName: \"kubernetes.io/projected/5a71a921-7519-4576-8fa4-c4d16d4a1cde-kube-api-access-gwbfg\") on node \"crc\" DevicePath \"\"" Jan 30 14:23:54 crc kubenswrapper[5039]: I0130 14:23:54.289407 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a71a921-7519-4576-8fa4-c4d16d4a1cde-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:23:54 crc kubenswrapper[5039]: I0130 14:23:54.794937 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-c2gvh" event={"ID":"5a71a921-7519-4576-8fa4-c4d16d4a1cde","Type":"ContainerDied","Data":"a00a55fabe80463d02ad3797054433b590cabba6a3e1cc475e3c4a6301cb843f"} Jan 30 14:23:54 crc kubenswrapper[5039]: I0130 14:23:54.794976 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a00a55fabe80463d02ad3797054433b590cabba6a3e1cc475e3c4a6301cb843f" Jan 30 14:23:54 crc kubenswrapper[5039]: I0130 14:23:54.794983 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-c2gvh" Jan 30 14:23:54 crc kubenswrapper[5039]: I0130 14:23:54.796861 5039 generic.go:334] "Generic (PLEG): container finished" podID="3d06d513-af8a-494d-9c55-10980cc0e84a" containerID="b2968f21addf22060c177a7348b009cdf0a4051fa82448bb49e8eeacb7c0fcfd" exitCode=0 Jan 30 14:23:54 crc kubenswrapper[5039]: I0130 14:23:54.796908 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3d06d513-af8a-494d-9c55-10980cc0e84a","Type":"ContainerDied","Data":"b2968f21addf22060c177a7348b009cdf0a4051fa82448bb49e8eeacb7c0fcfd"} Jan 30 14:23:54 crc kubenswrapper[5039]: I0130 14:23:54.800865 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"03f3e4de-d43f-449d-bf20-62332da1e661","Type":"ContainerStarted","Data":"1defdf2e0f0b6950eab4b0e95544fca734892e1d348bc2c13f8cc24dc2e9ecf2"} Jan 30 14:23:54 crc kubenswrapper[5039]: I0130 14:23:54.801069 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 30 14:23:54 crc kubenswrapper[5039]: I0130 14:23:54.911547 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.9115292 podStartE2EDuration="36.9115292s" podCreationTimestamp="2026-01-30 14:23:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:23:54.905699893 +0000 UTC m=+4799.566381130" watchObservedRunningTime="2026-01-30 14:23:54.9115292 +0000 UTC m=+4799.572210427" Jan 30 14:23:55 crc kubenswrapper[5039]: I0130 14:23:55.811642 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3d06d513-af8a-494d-9c55-10980cc0e84a","Type":"ContainerStarted","Data":"d66a1faa09b92f7ff720f4359a402a334248c0292928c6e1ec94c7deae278156"} Jan 30 14:23:55 crc kubenswrapper[5039]: I0130 14:23:55.812571 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:24:01 crc kubenswrapper[5039]: I0130 14:24:01.093589 5039 scope.go:117] "RemoveContainer" 
containerID="aa77e5b6320d0bb2b1371d31dd99833cc631f1ca3770ff63e41851c68aa88acc" Jan 30 14:24:01 crc kubenswrapper[5039]: E0130 14:24:01.094650 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:24:09 crc kubenswrapper[5039]: I0130 14:24:09.956262 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 30 14:24:10 crc kubenswrapper[5039]: I0130 14:24:10.005180 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=52.005148784 podStartE2EDuration="52.005148784s" podCreationTimestamp="2026-01-30 14:23:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:23:55.84560919 +0000 UTC m=+4800.506290437" watchObservedRunningTime="2026-01-30 14:24:10.005148784 +0000 UTC m=+4814.665830101" Jan 30 14:24:10 crc kubenswrapper[5039]: I0130 14:24:10.252379 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:24:12 crc kubenswrapper[5039]: I0130 14:24:12.094904 5039 scope.go:117] "RemoveContainer" containerID="aa77e5b6320d0bb2b1371d31dd99833cc631f1ca3770ff63e41851c68aa88acc" Jan 30 14:24:12 crc kubenswrapper[5039]: E0130 14:24:12.095567 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:24:14 crc kubenswrapper[5039]: I0130 14:24:14.480902 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-psfj6"] Jan 30 14:24:14 crc kubenswrapper[5039]: E0130 14:24:14.482654 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a71a921-7519-4576-8fa4-c4d16d4a1cde" containerName="mariadb-account-create-update" Jan 30 14:24:14 crc kubenswrapper[5039]: I0130 14:24:14.482758 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a71a921-7519-4576-8fa4-c4d16d4a1cde" containerName="mariadb-account-create-update" Jan 30 14:24:14 crc kubenswrapper[5039]: I0130 14:24:14.483029 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a71a921-7519-4576-8fa4-c4d16d4a1cde" containerName="mariadb-account-create-update" Jan 30 14:24:14 crc kubenswrapper[5039]: I0130 14:24:14.484110 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b7946d7b9-psfj6" Jan 30 14:24:14 crc kubenswrapper[5039]: I0130 14:24:14.494154 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-psfj6"] Jan 30 14:24:14 crc kubenswrapper[5039]: I0130 14:24:14.622811 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e4c5897-aa67-4e1d-bd75-2431b346e43c-config\") pod \"dnsmasq-dns-5b7946d7b9-psfj6\" (UID: \"3e4c5897-aa67-4e1d-bd75-2431b346e43c\") " pod="openstack/dnsmasq-dns-5b7946d7b9-psfj6" Jan 30 14:24:14 crc kubenswrapper[5039]: I0130 14:24:14.622943 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpb7d\" (UniqueName: \"kubernetes.io/projected/3e4c5897-aa67-4e1d-bd75-2431b346e43c-kube-api-access-cpb7d\") pod \"dnsmasq-dns-5b7946d7b9-psfj6\" (UID: \"3e4c5897-aa67-4e1d-bd75-2431b346e43c\") " pod="openstack/dnsmasq-dns-5b7946d7b9-psfj6" Jan 30 14:24:14 crc kubenswrapper[5039]: I0130 14:24:14.623082 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e4c5897-aa67-4e1d-bd75-2431b346e43c-dns-svc\") pod \"dnsmasq-dns-5b7946d7b9-psfj6\" (UID: \"3e4c5897-aa67-4e1d-bd75-2431b346e43c\") " pod="openstack/dnsmasq-dns-5b7946d7b9-psfj6" Jan 30 14:24:14 crc kubenswrapper[5039]: I0130 14:24:14.724481 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e4c5897-aa67-4e1d-bd75-2431b346e43c-dns-svc\") pod \"dnsmasq-dns-5b7946d7b9-psfj6\" (UID: \"3e4c5897-aa67-4e1d-bd75-2431b346e43c\") " pod="openstack/dnsmasq-dns-5b7946d7b9-psfj6" Jan 30 14:24:14 crc kubenswrapper[5039]: I0130 14:24:14.724609 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e4c5897-aa67-4e1d-bd75-2431b346e43c-config\") pod \"dnsmasq-dns-5b7946d7b9-psfj6\" (UID: \"3e4c5897-aa67-4e1d-bd75-2431b346e43c\") " pod="openstack/dnsmasq-dns-5b7946d7b9-psfj6" Jan 30 14:24:14 crc kubenswrapper[5039]: I0130 14:24:14.724651 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpb7d\" (UniqueName: \"kubernetes.io/projected/3e4c5897-aa67-4e1d-bd75-2431b346e43c-kube-api-access-cpb7d\") pod \"dnsmasq-dns-5b7946d7b9-psfj6\" (UID: \"3e4c5897-aa67-4e1d-bd75-2431b346e43c\") " pod="openstack/dnsmasq-dns-5b7946d7b9-psfj6" Jan 30 14:24:14 crc kubenswrapper[5039]: I0130 14:24:14.725359 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e4c5897-aa67-4e1d-bd75-2431b346e43c-dns-svc\") pod \"dnsmasq-dns-5b7946d7b9-psfj6\" (UID: \"3e4c5897-aa67-4e1d-bd75-2431b346e43c\") " pod="openstack/dnsmasq-dns-5b7946d7b9-psfj6" Jan 30 14:24:14 crc kubenswrapper[5039]: I0130 14:24:14.725660 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e4c5897-aa67-4e1d-bd75-2431b346e43c-config\") pod \"dnsmasq-dns-5b7946d7b9-psfj6\" (UID: \"3e4c5897-aa67-4e1d-bd75-2431b346e43c\") " pod="openstack/dnsmasq-dns-5b7946d7b9-psfj6" Jan 30 14:24:14 crc kubenswrapper[5039]: I0130 14:24:14.757145 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpb7d\" (UniqueName: 
\"kubernetes.io/projected/3e4c5897-aa67-4e1d-bd75-2431b346e43c-kube-api-access-cpb7d\") pod \"dnsmasq-dns-5b7946d7b9-psfj6\" (UID: \"3e4c5897-aa67-4e1d-bd75-2431b346e43c\") " pod="openstack/dnsmasq-dns-5b7946d7b9-psfj6" Jan 30 14:24:14 crc kubenswrapper[5039]: I0130 14:24:14.807170 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b7946d7b9-psfj6" Jan 30 14:24:15 crc kubenswrapper[5039]: I0130 14:24:15.252542 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-psfj6"] Jan 30 14:24:15 crc kubenswrapper[5039]: I0130 14:24:15.370800 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 14:24:15 crc kubenswrapper[5039]: I0130 14:24:15.982531 5039 generic.go:334] "Generic (PLEG): container finished" podID="3e4c5897-aa67-4e1d-bd75-2431b346e43c" containerID="25c968da1280eaf42e5ece145b6a0b164ccc522c76c3b493a8bca56755e4c5a7" exitCode=0 Jan 30 14:24:15 crc kubenswrapper[5039]: I0130 14:24:15.982572 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-psfj6" event={"ID":"3e4c5897-aa67-4e1d-bd75-2431b346e43c","Type":"ContainerDied","Data":"25c968da1280eaf42e5ece145b6a0b164ccc522c76c3b493a8bca56755e4c5a7"} Jan 30 14:24:15 crc kubenswrapper[5039]: I0130 14:24:15.982617 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-psfj6" event={"ID":"3e4c5897-aa67-4e1d-bd75-2431b346e43c","Type":"ContainerStarted","Data":"c95043f7ef80939f8ed4554811f0455bbc8df47a568054dd1add5edff0ec3f7d"} Jan 30 14:24:16 crc kubenswrapper[5039]: I0130 14:24:16.161849 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 14:24:16 crc kubenswrapper[5039]: I0130 14:24:16.991760 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-psfj6" event={"ID":"3e4c5897-aa67-4e1d-bd75-2431b346e43c","Type":"ContainerStarted","Data":"7d47901878d1fe215eb1855db4ed131d94c6539e00f05858cd8d214a20475089"} Jan 30 14:24:16 crc kubenswrapper[5039]: I0130 14:24:16.993125 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b7946d7b9-psfj6" Jan 30 14:24:17 crc kubenswrapper[5039]: I0130 14:24:17.012374 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b7946d7b9-psfj6" podStartSLOduration=3.01235743 podStartE2EDuration="3.01235743s" podCreationTimestamp="2026-01-30 14:24:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:24:17.009877274 +0000 UTC m=+4821.670558491" watchObservedRunningTime="2026-01-30 14:24:17.01235743 +0000 UTC m=+4821.673038647" Jan 30 14:24:17 crc kubenswrapper[5039]: I0130 14:24:17.250253 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="03f3e4de-d43f-449d-bf20-62332da1e661" containerName="rabbitmq" containerID="cri-o://1defdf2e0f0b6950eab4b0e95544fca734892e1d348bc2c13f8cc24dc2e9ecf2" gracePeriod=604799 Jan 30 14:24:17 crc kubenswrapper[5039]: I0130 14:24:17.945604 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="3d06d513-af8a-494d-9c55-10980cc0e84a" containerName="rabbitmq" containerID="cri-o://d66a1faa09b92f7ff720f4359a402a334248c0292928c6e1ec94c7deae278156" gracePeriod=604799 Jan 30 14:24:19 crc 
kubenswrapper[5039]: I0130 14:24:19.953952 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="03f3e4de-d43f-449d-bf20-62332da1e661" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.245:5672: connect: connection refused" Jan 30 14:24:20 crc kubenswrapper[5039]: I0130 14:24:20.249724 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="3d06d513-af8a-494d-9c55-10980cc0e84a" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.246:5672: connect: connection refused" Jan 30 14:24:23 crc kubenswrapper[5039]: I0130 14:24:23.865894 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 14:24:23 crc kubenswrapper[5039]: I0130 14:24:23.972156 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/03f3e4de-d43f-449d-bf20-62332da1e661-plugins-conf\") pod \"03f3e4de-d43f-449d-bf20-62332da1e661\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " Jan 30 14:24:23 crc kubenswrapper[5039]: I0130 14:24:23.972221 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/03f3e4de-d43f-449d-bf20-62332da1e661-rabbitmq-plugins\") pod \"03f3e4de-d43f-449d-bf20-62332da1e661\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " Jan 30 14:24:23 crc kubenswrapper[5039]: I0130 14:24:23.972280 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/03f3e4de-d43f-449d-bf20-62332da1e661-server-conf\") pod \"03f3e4de-d43f-449d-bf20-62332da1e661\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " Jan 30 14:24:23 crc kubenswrapper[5039]: I0130 14:24:23.972346 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/03f3e4de-d43f-449d-bf20-62332da1e661-rabbitmq-confd\") pod \"03f3e4de-d43f-449d-bf20-62332da1e661\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " Jan 30 14:24:23 crc kubenswrapper[5039]: I0130 14:24:23.972371 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sg9ct\" (UniqueName: \"kubernetes.io/projected/03f3e4de-d43f-449d-bf20-62332da1e661-kube-api-access-sg9ct\") pod \"03f3e4de-d43f-449d-bf20-62332da1e661\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " Jan 30 14:24:23 crc kubenswrapper[5039]: I0130 14:24:23.972406 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/03f3e4de-d43f-449d-bf20-62332da1e661-erlang-cookie-secret\") pod \"03f3e4de-d43f-449d-bf20-62332da1e661\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " Jan 30 14:24:23 crc kubenswrapper[5039]: I0130 14:24:23.972655 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7a2bf4b-1d28-4757-8001-1d0e7cb0b645\") pod \"03f3e4de-d43f-449d-bf20-62332da1e661\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " Jan 30 14:24:23 crc kubenswrapper[5039]: I0130 14:24:23.972714 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/03f3e4de-d43f-449d-bf20-62332da1e661-rabbitmq-erlang-cookie\") pod \"03f3e4de-d43f-449d-bf20-62332da1e661\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " Jan 30 14:24:23 crc kubenswrapper[5039]: I0130 14:24:23.972775 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/03f3e4de-d43f-449d-bf20-62332da1e661-pod-info\") pod \"03f3e4de-d43f-449d-bf20-62332da1e661\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " Jan 30 14:24:23 crc kubenswrapper[5039]: I0130 14:24:23.978666 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03f3e4de-d43f-449d-bf20-62332da1e661-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "03f3e4de-d43f-449d-bf20-62332da1e661" (UID: "03f3e4de-d43f-449d-bf20-62332da1e661"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:24:23 crc kubenswrapper[5039]: I0130 14:24:23.978728 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03f3e4de-d43f-449d-bf20-62332da1e661-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "03f3e4de-d43f-449d-bf20-62332da1e661" (UID: "03f3e4de-d43f-449d-bf20-62332da1e661"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:24:23 crc kubenswrapper[5039]: I0130 14:24:23.979766 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03f3e4de-d43f-449d-bf20-62332da1e661-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "03f3e4de-d43f-449d-bf20-62332da1e661" (UID: "03f3e4de-d43f-449d-bf20-62332da1e661"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:24:23 crc kubenswrapper[5039]: I0130 14:24:23.983099 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03f3e4de-d43f-449d-bf20-62332da1e661-kube-api-access-sg9ct" (OuterVolumeSpecName: "kube-api-access-sg9ct") pod "03f3e4de-d43f-449d-bf20-62332da1e661" (UID: "03f3e4de-d43f-449d-bf20-62332da1e661"). InnerVolumeSpecName "kube-api-access-sg9ct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:24:23 crc kubenswrapper[5039]: I0130 14:24:23.983270 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/03f3e4de-d43f-449d-bf20-62332da1e661-pod-info" (OuterVolumeSpecName: "pod-info") pod "03f3e4de-d43f-449d-bf20-62332da1e661" (UID: "03f3e4de-d43f-449d-bf20-62332da1e661"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 30 14:24:23 crc kubenswrapper[5039]: I0130 14:24:23.987187 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03f3e4de-d43f-449d-bf20-62332da1e661-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "03f3e4de-d43f-449d-bf20-62332da1e661" (UID: "03f3e4de-d43f-449d-bf20-62332da1e661"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:24:23 crc kubenswrapper[5039]: I0130 14:24:23.990527 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7a2bf4b-1d28-4757-8001-1d0e7cb0b645" (OuterVolumeSpecName: "persistence") pod "03f3e4de-d43f-449d-bf20-62332da1e661" (UID: "03f3e4de-d43f-449d-bf20-62332da1e661"). InnerVolumeSpecName "pvc-f7a2bf4b-1d28-4757-8001-1d0e7cb0b645". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.007410 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03f3e4de-d43f-449d-bf20-62332da1e661-server-conf" (OuterVolumeSpecName: "server-conf") pod "03f3e4de-d43f-449d-bf20-62332da1e661" (UID: "03f3e4de-d43f-449d-bf20-62332da1e661"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.060447 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03f3e4de-d43f-449d-bf20-62332da1e661-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "03f3e4de-d43f-449d-bf20-62332da1e661" (UID: "03f3e4de-d43f-449d-bf20-62332da1e661"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.061123 5039 generic.go:334] "Generic (PLEG): container finished" podID="03f3e4de-d43f-449d-bf20-62332da1e661" containerID="1defdf2e0f0b6950eab4b0e95544fca734892e1d348bc2c13f8cc24dc2e9ecf2" exitCode=0 Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.061180 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"03f3e4de-d43f-449d-bf20-62332da1e661","Type":"ContainerDied","Data":"1defdf2e0f0b6950eab4b0e95544fca734892e1d348bc2c13f8cc24dc2e9ecf2"} Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.061227 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"03f3e4de-d43f-449d-bf20-62332da1e661","Type":"ContainerDied","Data":"f1bdf66d342d456731e187e8378b26ea79bcdb9a067c72ad652b1a63fcf37d86"} Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.061252 5039 scope.go:117] "RemoveContainer" containerID="1defdf2e0f0b6950eab4b0e95544fca734892e1d348bc2c13f8cc24dc2e9ecf2" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.061275 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.077583 5039 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/03f3e4de-d43f-449d-bf20-62332da1e661-pod-info\") on node \"crc\" DevicePath \"\"" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.077642 5039 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/03f3e4de-d43f-449d-bf20-62332da1e661-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.077656 5039 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/03f3e4de-d43f-449d-bf20-62332da1e661-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.077670 5039 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/03f3e4de-d43f-449d-bf20-62332da1e661-server-conf\") on node \"crc\" DevicePath \"\"" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.077682 5039 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/03f3e4de-d43f-449d-bf20-62332da1e661-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.077697 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sg9ct\" (UniqueName: \"kubernetes.io/projected/03f3e4de-d43f-449d-bf20-62332da1e661-kube-api-access-sg9ct\") on node \"crc\" DevicePath \"\"" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.077711 5039 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/03f3e4de-d43f-449d-bf20-62332da1e661-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.077785 5039 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-f7a2bf4b-1d28-4757-8001-1d0e7cb0b645\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7a2bf4b-1d28-4757-8001-1d0e7cb0b645\") on node \"crc\" " Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.077804 5039 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/03f3e4de-d43f-449d-bf20-62332da1e661-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.107814 5039 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.108405 5039 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-f7a2bf4b-1d28-4757-8001-1d0e7cb0b645" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7a2bf4b-1d28-4757-8001-1d0e7cb0b645") on node "crc" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.194654 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.195644 5039 reconciler_common.go:293] "Volume detached for volume \"pvc-f7a2bf4b-1d28-4757-8001-1d0e7cb0b645\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7a2bf4b-1d28-4757-8001-1d0e7cb0b645\") on node \"crc\" DevicePath \"\"" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.203004 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.226785 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 14:24:24 crc kubenswrapper[5039]: E0130 14:24:24.227335 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03f3e4de-d43f-449d-bf20-62332da1e661" containerName="setup-container" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.227372 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="03f3e4de-d43f-449d-bf20-62332da1e661" containerName="setup-container" Jan 30 14:24:24 crc kubenswrapper[5039]: E0130 14:24:24.227415 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03f3e4de-d43f-449d-bf20-62332da1e661" containerName="rabbitmq" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.227428 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="03f3e4de-d43f-449d-bf20-62332da1e661" containerName="rabbitmq" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.227679 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="03f3e4de-d43f-449d-bf20-62332da1e661" containerName="rabbitmq" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.229040 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.230689 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-mm44m" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.230903 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.231229 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.231248 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.231760 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.251636 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.262451 5039 scope.go:117] "RemoveContainer" containerID="b9087bb5432ef9e2c4738cdd492fce770bed41670ddc3bea9012bd34660f041a" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.282214 5039 scope.go:117] "RemoveContainer" containerID="1defdf2e0f0b6950eab4b0e95544fca734892e1d348bc2c13f8cc24dc2e9ecf2" Jan 30 14:24:24 crc kubenswrapper[5039]: E0130 14:24:24.283341 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1defdf2e0f0b6950eab4b0e95544fca734892e1d348bc2c13f8cc24dc2e9ecf2\": container with ID starting with 1defdf2e0f0b6950eab4b0e95544fca734892e1d348bc2c13f8cc24dc2e9ecf2 not found: ID does not exist" containerID="1defdf2e0f0b6950eab4b0e95544fca734892e1d348bc2c13f8cc24dc2e9ecf2" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.283505 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1defdf2e0f0b6950eab4b0e95544fca734892e1d348bc2c13f8cc24dc2e9ecf2"} err="failed to get container status \"1defdf2e0f0b6950eab4b0e95544fca734892e1d348bc2c13f8cc24dc2e9ecf2\": rpc error: code = NotFound desc = could not find container \"1defdf2e0f0b6950eab4b0e95544fca734892e1d348bc2c13f8cc24dc2e9ecf2\": container with ID starting with 1defdf2e0f0b6950eab4b0e95544fca734892e1d348bc2c13f8cc24dc2e9ecf2 not found: ID does not exist" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.283674 5039 scope.go:117] "RemoveContainer" containerID="b9087bb5432ef9e2c4738cdd492fce770bed41670ddc3bea9012bd34660f041a" Jan 30 14:24:24 crc kubenswrapper[5039]: E0130 14:24:24.286269 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9087bb5432ef9e2c4738cdd492fce770bed41670ddc3bea9012bd34660f041a\": container with ID starting with b9087bb5432ef9e2c4738cdd492fce770bed41670ddc3bea9012bd34660f041a not found: ID does not exist" containerID="b9087bb5432ef9e2c4738cdd492fce770bed41670ddc3bea9012bd34660f041a" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.286315 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9087bb5432ef9e2c4738cdd492fce770bed41670ddc3bea9012bd34660f041a"} err="failed to get container status \"b9087bb5432ef9e2c4738cdd492fce770bed41670ddc3bea9012bd34660f041a\": rpc error: code = NotFound desc = could not find container 
\"b9087bb5432ef9e2c4738cdd492fce770bed41670ddc3bea9012bd34660f041a\": container with ID starting with b9087bb5432ef9e2c4738cdd492fce770bed41670ddc3bea9012bd34660f041a not found: ID does not exist" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.297959 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d529e342-1b61-41e6-a1f7-a08a43d53dab-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d529e342-1b61-41e6-a1f7-a08a43d53dab\") " pod="openstack/rabbitmq-server-0" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.298432 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d529e342-1b61-41e6-a1f7-a08a43d53dab-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d529e342-1b61-41e6-a1f7-a08a43d53dab\") " pod="openstack/rabbitmq-server-0" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.298838 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d529e342-1b61-41e6-a1f7-a08a43d53dab-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d529e342-1b61-41e6-a1f7-a08a43d53dab\") " pod="openstack/rabbitmq-server-0" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.299080 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f7a2bf4b-1d28-4757-8001-1d0e7cb0b645\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7a2bf4b-1d28-4757-8001-1d0e7cb0b645\") pod \"rabbitmq-server-0\" (UID: \"d529e342-1b61-41e6-a1f7-a08a43d53dab\") " pod="openstack/rabbitmq-server-0" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.299193 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54wv7\" (UniqueName: \"kubernetes.io/projected/d529e342-1b61-41e6-a1f7-a08a43d53dab-kube-api-access-54wv7\") pod \"rabbitmq-server-0\" (UID: \"d529e342-1b61-41e6-a1f7-a08a43d53dab\") " pod="openstack/rabbitmq-server-0" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.299383 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d529e342-1b61-41e6-a1f7-a08a43d53dab-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d529e342-1b61-41e6-a1f7-a08a43d53dab\") " pod="openstack/rabbitmq-server-0" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.299491 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d529e342-1b61-41e6-a1f7-a08a43d53dab-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d529e342-1b61-41e6-a1f7-a08a43d53dab\") " pod="openstack/rabbitmq-server-0" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.299568 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d529e342-1b61-41e6-a1f7-a08a43d53dab-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d529e342-1b61-41e6-a1f7-a08a43d53dab\") " pod="openstack/rabbitmq-server-0" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.299658 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/d529e342-1b61-41e6-a1f7-a08a43d53dab-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d529e342-1b61-41e6-a1f7-a08a43d53dab\") " pod="openstack/rabbitmq-server-0" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.401457 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d529e342-1b61-41e6-a1f7-a08a43d53dab-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d529e342-1b61-41e6-a1f7-a08a43d53dab\") " pod="openstack/rabbitmq-server-0" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.401521 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d529e342-1b61-41e6-a1f7-a08a43d53dab-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d529e342-1b61-41e6-a1f7-a08a43d53dab\") " pod="openstack/rabbitmq-server-0" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.401717 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d529e342-1b61-41e6-a1f7-a08a43d53dab-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d529e342-1b61-41e6-a1f7-a08a43d53dab\") " pod="openstack/rabbitmq-server-0" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.401803 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d529e342-1b61-41e6-a1f7-a08a43d53dab-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d529e342-1b61-41e6-a1f7-a08a43d53dab\") " pod="openstack/rabbitmq-server-0" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.401829 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f7a2bf4b-1d28-4757-8001-1d0e7cb0b645\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7a2bf4b-1d28-4757-8001-1d0e7cb0b645\") pod \"rabbitmq-server-0\" (UID: \"d529e342-1b61-41e6-a1f7-a08a43d53dab\") " pod="openstack/rabbitmq-server-0" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.401860 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54wv7\" (UniqueName: \"kubernetes.io/projected/d529e342-1b61-41e6-a1f7-a08a43d53dab-kube-api-access-54wv7\") pod \"rabbitmq-server-0\" (UID: \"d529e342-1b61-41e6-a1f7-a08a43d53dab\") " pod="openstack/rabbitmq-server-0" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.401887 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d529e342-1b61-41e6-a1f7-a08a43d53dab-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d529e342-1b61-41e6-a1f7-a08a43d53dab\") " pod="openstack/rabbitmq-server-0" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.401920 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d529e342-1b61-41e6-a1f7-a08a43d53dab-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d529e342-1b61-41e6-a1f7-a08a43d53dab\") " pod="openstack/rabbitmq-server-0" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.401945 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d529e342-1b61-41e6-a1f7-a08a43d53dab-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d529e342-1b61-41e6-a1f7-a08a43d53dab\") " 
pod="openstack/rabbitmq-server-0" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.402495 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d529e342-1b61-41e6-a1f7-a08a43d53dab-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d529e342-1b61-41e6-a1f7-a08a43d53dab\") " pod="openstack/rabbitmq-server-0" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.402826 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d529e342-1b61-41e6-a1f7-a08a43d53dab-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d529e342-1b61-41e6-a1f7-a08a43d53dab\") " pod="openstack/rabbitmq-server-0" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.403168 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d529e342-1b61-41e6-a1f7-a08a43d53dab-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d529e342-1b61-41e6-a1f7-a08a43d53dab\") " pod="openstack/rabbitmq-server-0" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.405457 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d529e342-1b61-41e6-a1f7-a08a43d53dab-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d529e342-1b61-41e6-a1f7-a08a43d53dab\") " pod="openstack/rabbitmq-server-0" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.405971 5039 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.406000 5039 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f7a2bf4b-1d28-4757-8001-1d0e7cb0b645\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7a2bf4b-1d28-4757-8001-1d0e7cb0b645\") pod \"rabbitmq-server-0\" (UID: \"d529e342-1b61-41e6-a1f7-a08a43d53dab\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ac2f1d5ca3e543cb3845245028281cdaadefac18f4e6998e62f0daa5633ce93d/globalmount\"" pod="openstack/rabbitmq-server-0" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.407540 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d529e342-1b61-41e6-a1f7-a08a43d53dab-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d529e342-1b61-41e6-a1f7-a08a43d53dab\") " pod="openstack/rabbitmq-server-0" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.407670 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d529e342-1b61-41e6-a1f7-a08a43d53dab-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d529e342-1b61-41e6-a1f7-a08a43d53dab\") " pod="openstack/rabbitmq-server-0" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.409126 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d529e342-1b61-41e6-a1f7-a08a43d53dab-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d529e342-1b61-41e6-a1f7-a08a43d53dab\") " pod="openstack/rabbitmq-server-0" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.420379 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54wv7\" (UniqueName: 
\"kubernetes.io/projected/d529e342-1b61-41e6-a1f7-a08a43d53dab-kube-api-access-54wv7\") pod \"rabbitmq-server-0\" (UID: \"d529e342-1b61-41e6-a1f7-a08a43d53dab\") " pod="openstack/rabbitmq-server-0" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.446256 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f7a2bf4b-1d28-4757-8001-1d0e7cb0b645\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7a2bf4b-1d28-4757-8001-1d0e7cb0b645\") pod \"rabbitmq-server-0\" (UID: \"d529e342-1b61-41e6-a1f7-a08a43d53dab\") " pod="openstack/rabbitmq-server-0" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.505654 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.560278 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.605138 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3d06d513-af8a-494d-9c55-10980cc0e84a-erlang-cookie-secret\") pod \"3d06d513-af8a-494d-9c55-10980cc0e84a\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.605205 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3d06d513-af8a-494d-9c55-10980cc0e84a-rabbitmq-confd\") pod \"3d06d513-af8a-494d-9c55-10980cc0e84a\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.605260 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8m84z\" (UniqueName: \"kubernetes.io/projected/3d06d513-af8a-494d-9c55-10980cc0e84a-kube-api-access-8m84z\") pod \"3d06d513-af8a-494d-9c55-10980cc0e84a\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.605290 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3d06d513-af8a-494d-9c55-10980cc0e84a-rabbitmq-plugins\") pod \"3d06d513-af8a-494d-9c55-10980cc0e84a\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.605316 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3d06d513-af8a-494d-9c55-10980cc0e84a-plugins-conf\") pod \"3d06d513-af8a-494d-9c55-10980cc0e84a\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.605358 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3d06d513-af8a-494d-9c55-10980cc0e84a-server-conf\") pod \"3d06d513-af8a-494d-9c55-10980cc0e84a\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.605378 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3d06d513-af8a-494d-9c55-10980cc0e84a-rabbitmq-erlang-cookie\") pod \"3d06d513-af8a-494d-9c55-10980cc0e84a\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.605413 5039 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3d06d513-af8a-494d-9c55-10980cc0e84a-pod-info\") pod \"3d06d513-af8a-494d-9c55-10980cc0e84a\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.605634 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9f8adc66-ad40-4c61-aaec-b1545735af43\") pod \"3d06d513-af8a-494d-9c55-10980cc0e84a\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.606421 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d06d513-af8a-494d-9c55-10980cc0e84a-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "3d06d513-af8a-494d-9c55-10980cc0e84a" (UID: "3d06d513-af8a-494d-9c55-10980cc0e84a"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.606700 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d06d513-af8a-494d-9c55-10980cc0e84a-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "3d06d513-af8a-494d-9c55-10980cc0e84a" (UID: "3d06d513-af8a-494d-9c55-10980cc0e84a"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.609323 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d06d513-af8a-494d-9c55-10980cc0e84a-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "3d06d513-af8a-494d-9c55-10980cc0e84a" (UID: "3d06d513-af8a-494d-9c55-10980cc0e84a"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.614369 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/3d06d513-af8a-494d-9c55-10980cc0e84a-pod-info" (OuterVolumeSpecName: "pod-info") pod "3d06d513-af8a-494d-9c55-10980cc0e84a" (UID: "3d06d513-af8a-494d-9c55-10980cc0e84a"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.614522 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d06d513-af8a-494d-9c55-10980cc0e84a-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "3d06d513-af8a-494d-9c55-10980cc0e84a" (UID: "3d06d513-af8a-494d-9c55-10980cc0e84a"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.618717 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d06d513-af8a-494d-9c55-10980cc0e84a-kube-api-access-8m84z" (OuterVolumeSpecName: "kube-api-access-8m84z") pod "3d06d513-af8a-494d-9c55-10980cc0e84a" (UID: "3d06d513-af8a-494d-9c55-10980cc0e84a"). InnerVolumeSpecName "kube-api-access-8m84z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.628209 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9f8adc66-ad40-4c61-aaec-b1545735af43" (OuterVolumeSpecName: "persistence") pod "3d06d513-af8a-494d-9c55-10980cc0e84a" (UID: "3d06d513-af8a-494d-9c55-10980cc0e84a"). InnerVolumeSpecName "pvc-9f8adc66-ad40-4c61-aaec-b1545735af43". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.649172 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d06d513-af8a-494d-9c55-10980cc0e84a-server-conf" (OuterVolumeSpecName: "server-conf") pod "3d06d513-af8a-494d-9c55-10980cc0e84a" (UID: "3d06d513-af8a-494d-9c55-10980cc0e84a"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.707376 5039 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-9f8adc66-ad40-4c61-aaec-b1545735af43\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9f8adc66-ad40-4c61-aaec-b1545735af43\") on node \"crc\" " Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.707416 5039 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3d06d513-af8a-494d-9c55-10980cc0e84a-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.707430 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8m84z\" (UniqueName: \"kubernetes.io/projected/3d06d513-af8a-494d-9c55-10980cc0e84a-kube-api-access-8m84z\") on node \"crc\" DevicePath \"\"" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.707443 5039 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3d06d513-af8a-494d-9c55-10980cc0e84a-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.707455 5039 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3d06d513-af8a-494d-9c55-10980cc0e84a-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.707466 5039 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3d06d513-af8a-494d-9c55-10980cc0e84a-server-conf\") on node \"crc\" DevicePath \"\"" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.707478 5039 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3d06d513-af8a-494d-9c55-10980cc0e84a-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.707489 5039 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3d06d513-af8a-494d-9c55-10980cc0e84a-pod-info\") on node \"crc\" DevicePath \"\"" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.735962 5039 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.736165 5039 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-9f8adc66-ad40-4c61-aaec-b1545735af43" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9f8adc66-ad40-4c61-aaec-b1545735af43") on node "crc" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.751149 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d06d513-af8a-494d-9c55-10980cc0e84a-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "3d06d513-af8a-494d-9c55-10980cc0e84a" (UID: "3d06d513-af8a-494d-9c55-10980cc0e84a"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.809002 5039 reconciler_common.go:293] "Volume detached for volume \"pvc-9f8adc66-ad40-4c61-aaec-b1545735af43\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9f8adc66-ad40-4c61-aaec-b1545735af43\") on node \"crc\" DevicePath \"\"" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.809387 5039 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3d06d513-af8a-494d-9c55-10980cc0e84a-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.809274 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b7946d7b9-psfj6" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.869487 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-x5wk5"] Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.869732 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-98ddfc8f-x5wk5" podUID="39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8" containerName="dnsmasq-dns" containerID="cri-o://3240e8f082f7bbf7dbe77fad8804cfe4a24afeecc009b09a1700fa41da0ab8d1" gracePeriod=10 Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.044486 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 14:24:25 crc kubenswrapper[5039]: W0130 14:24:25.085348 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd529e342_1b61_41e6_a1f7_a08a43d53dab.slice/crio-23d74a963ea8667b0f94b2f997b7e156bc0192f18611cabac480547052dcc80b WatchSource:0}: Error finding container 23d74a963ea8667b0f94b2f997b7e156bc0192f18611cabac480547052dcc80b: Status 404 returned error can't find the container with id 23d74a963ea8667b0f94b2f997b7e156bc0192f18611cabac480547052dcc80b Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.095967 5039 scope.go:117] "RemoveContainer" containerID="aa77e5b6320d0bb2b1371d31dd99833cc631f1ca3770ff63e41851c68aa88acc" Jan 30 14:24:25 crc kubenswrapper[5039]: E0130 14:24:25.096149 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.107605 5039 generic.go:334] "Generic (PLEG): container finished" podID="3d06d513-af8a-494d-9c55-10980cc0e84a" 
containerID="d66a1faa09b92f7ff720f4359a402a334248c0292928c6e1ec94c7deae278156" exitCode=0 Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.107773 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.108145 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3d06d513-af8a-494d-9c55-10980cc0e84a","Type":"ContainerDied","Data":"d66a1faa09b92f7ff720f4359a402a334248c0292928c6e1ec94c7deae278156"} Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.108210 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3d06d513-af8a-494d-9c55-10980cc0e84a","Type":"ContainerDied","Data":"41fb979575f8edd71eefc12deddbee2964a003cb26132d91bfd85dcc2de30803"} Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.108232 5039 scope.go:117] "RemoveContainer" containerID="d66a1faa09b92f7ff720f4359a402a334248c0292928c6e1ec94c7deae278156" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.149285 5039 generic.go:334] "Generic (PLEG): container finished" podID="39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8" containerID="3240e8f082f7bbf7dbe77fad8804cfe4a24afeecc009b09a1700fa41da0ab8d1" exitCode=0 Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.149344 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-x5wk5" event={"ID":"39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8","Type":"ContainerDied","Data":"3240e8f082f7bbf7dbe77fad8804cfe4a24afeecc009b09a1700fa41da0ab8d1"} Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.168186 5039 scope.go:117] "RemoveContainer" containerID="b2968f21addf22060c177a7348b009cdf0a4051fa82448bb49e8eeacb7c0fcfd" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.179857 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.184216 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.251021 5039 scope.go:117] "RemoveContainer" containerID="d66a1faa09b92f7ff720f4359a402a334248c0292928c6e1ec94c7deae278156" Jan 30 14:24:25 crc kubenswrapper[5039]: E0130 14:24:25.254845 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d66a1faa09b92f7ff720f4359a402a334248c0292928c6e1ec94c7deae278156\": container with ID starting with d66a1faa09b92f7ff720f4359a402a334248c0292928c6e1ec94c7deae278156 not found: ID does not exist" containerID="d66a1faa09b92f7ff720f4359a402a334248c0292928c6e1ec94c7deae278156" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.254888 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d66a1faa09b92f7ff720f4359a402a334248c0292928c6e1ec94c7deae278156"} err="failed to get container status \"d66a1faa09b92f7ff720f4359a402a334248c0292928c6e1ec94c7deae278156\": rpc error: code = NotFound desc = could not find container \"d66a1faa09b92f7ff720f4359a402a334248c0292928c6e1ec94c7deae278156\": container with ID starting with d66a1faa09b92f7ff720f4359a402a334248c0292928c6e1ec94c7deae278156 not found: ID does not exist" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.254912 5039 scope.go:117] "RemoveContainer" containerID="b2968f21addf22060c177a7348b009cdf0a4051fa82448bb49e8eeacb7c0fcfd" Jan 30 
14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.255637 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 14:24:25 crc kubenswrapper[5039]: E0130 14:24:25.255948 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d06d513-af8a-494d-9c55-10980cc0e84a" containerName="setup-container" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.255966 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d06d513-af8a-494d-9c55-10980cc0e84a" containerName="setup-container" Jan 30 14:24:25 crc kubenswrapper[5039]: E0130 14:24:25.255973 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d06d513-af8a-494d-9c55-10980cc0e84a" containerName="rabbitmq" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.255979 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d06d513-af8a-494d-9c55-10980cc0e84a" containerName="rabbitmq" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.256170 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d06d513-af8a-494d-9c55-10980cc0e84a" containerName="rabbitmq" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.256937 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:24:25 crc kubenswrapper[5039]: E0130 14:24:25.258242 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2968f21addf22060c177a7348b009cdf0a4051fa82448bb49e8eeacb7c0fcfd\": container with ID starting with b2968f21addf22060c177a7348b009cdf0a4051fa82448bb49e8eeacb7c0fcfd not found: ID does not exist" containerID="b2968f21addf22060c177a7348b009cdf0a4051fa82448bb49e8eeacb7c0fcfd" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.258262 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2968f21addf22060c177a7348b009cdf0a4051fa82448bb49e8eeacb7c0fcfd"} err="failed to get container status \"b2968f21addf22060c177a7348b009cdf0a4051fa82448bb49e8eeacb7c0fcfd\": rpc error: code = NotFound desc = could not find container \"b2968f21addf22060c177a7348b009cdf0a4051fa82448bb49e8eeacb7c0fcfd\": container with ID starting with b2968f21addf22060c177a7348b009cdf0a4051fa82448bb49e8eeacb7c0fcfd not found: ID does not exist" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.261534 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.261798 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.262118 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-4c5xq" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.262301 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.272121 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.312629 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.317387 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6342982f-d092-4d6d-bb77-1ce4083bec47-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6342982f-d092-4d6d-bb77-1ce4083bec47\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.317466 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9f8adc66-ad40-4c61-aaec-b1545735af43\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9f8adc66-ad40-4c61-aaec-b1545735af43\") pod \"rabbitmq-cell1-server-0\" (UID: \"6342982f-d092-4d6d-bb77-1ce4083bec47\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.317493 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6342982f-d092-4d6d-bb77-1ce4083bec47-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6342982f-d092-4d6d-bb77-1ce4083bec47\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.317509 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6342982f-d092-4d6d-bb77-1ce4083bec47-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"6342982f-d092-4d6d-bb77-1ce4083bec47\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.317551 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6342982f-d092-4d6d-bb77-1ce4083bec47-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"6342982f-d092-4d6d-bb77-1ce4083bec47\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.317571 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7qmj\" (UniqueName: \"kubernetes.io/projected/6342982f-d092-4d6d-bb77-1ce4083bec47-kube-api-access-n7qmj\") pod \"rabbitmq-cell1-server-0\" (UID: \"6342982f-d092-4d6d-bb77-1ce4083bec47\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.317596 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6342982f-d092-4d6d-bb77-1ce4083bec47-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6342982f-d092-4d6d-bb77-1ce4083bec47\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.317613 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6342982f-d092-4d6d-bb77-1ce4083bec47-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"6342982f-d092-4d6d-bb77-1ce4083bec47\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.317633 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6342982f-d092-4d6d-bb77-1ce4083bec47-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"6342982f-d092-4d6d-bb77-1ce4083bec47\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.418995 
5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6342982f-d092-4d6d-bb77-1ce4083bec47-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"6342982f-d092-4d6d-bb77-1ce4083bec47\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.420217 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7qmj\" (UniqueName: \"kubernetes.io/projected/6342982f-d092-4d6d-bb77-1ce4083bec47-kube-api-access-n7qmj\") pod \"rabbitmq-cell1-server-0\" (UID: \"6342982f-d092-4d6d-bb77-1ce4083bec47\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.420259 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6342982f-d092-4d6d-bb77-1ce4083bec47-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6342982f-d092-4d6d-bb77-1ce4083bec47\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.420290 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6342982f-d092-4d6d-bb77-1ce4083bec47-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"6342982f-d092-4d6d-bb77-1ce4083bec47\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.420318 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6342982f-d092-4d6d-bb77-1ce4083bec47-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"6342982f-d092-4d6d-bb77-1ce4083bec47\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.420356 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6342982f-d092-4d6d-bb77-1ce4083bec47-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6342982f-d092-4d6d-bb77-1ce4083bec47\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.420433 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9f8adc66-ad40-4c61-aaec-b1545735af43\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9f8adc66-ad40-4c61-aaec-b1545735af43\") pod \"rabbitmq-cell1-server-0\" (UID: \"6342982f-d092-4d6d-bb77-1ce4083bec47\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.420475 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6342982f-d092-4d6d-bb77-1ce4083bec47-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6342982f-d092-4d6d-bb77-1ce4083bec47\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.420498 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6342982f-d092-4d6d-bb77-1ce4083bec47-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"6342982f-d092-4d6d-bb77-1ce4083bec47\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.421205 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6342982f-d092-4d6d-bb77-1ce4083bec47-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"6342982f-d092-4d6d-bb77-1ce4083bec47\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.422347 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6342982f-d092-4d6d-bb77-1ce4083bec47-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6342982f-d092-4d6d-bb77-1ce4083bec47\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.422637 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6342982f-d092-4d6d-bb77-1ce4083bec47-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"6342982f-d092-4d6d-bb77-1ce4083bec47\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.423477 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6342982f-d092-4d6d-bb77-1ce4083bec47-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6342982f-d092-4d6d-bb77-1ce4083bec47\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.424967 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6342982f-d092-4d6d-bb77-1ce4083bec47-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"6342982f-d092-4d6d-bb77-1ce4083bec47\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.426888 5039 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.426923 5039 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9f8adc66-ad40-4c61-aaec-b1545735af43\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9f8adc66-ad40-4c61-aaec-b1545735af43\") pod \"rabbitmq-cell1-server-0\" (UID: \"6342982f-d092-4d6d-bb77-1ce4083bec47\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7cf5d5edaa6a284483ff5c44eed0954ce6f7d9972fca3c37d987e5a01665bd04/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.427169 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6342982f-d092-4d6d-bb77-1ce4083bec47-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"6342982f-d092-4d6d-bb77-1ce4083bec47\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.428360 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6342982f-d092-4d6d-bb77-1ce4083bec47-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6342982f-d092-4d6d-bb77-1ce4083bec47\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.440329 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7qmj\" (UniqueName: \"kubernetes.io/projected/6342982f-d092-4d6d-bb77-1ce4083bec47-kube-api-access-n7qmj\") pod \"rabbitmq-cell1-server-0\" (UID: \"6342982f-d092-4d6d-bb77-1ce4083bec47\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.453502 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-98ddfc8f-x5wk5" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.456480 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9f8adc66-ad40-4c61-aaec-b1545735af43\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9f8adc66-ad40-4c61-aaec-b1545735af43\") pod \"rabbitmq-cell1-server-0\" (UID: \"6342982f-d092-4d6d-bb77-1ce4083bec47\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.521766 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8-config\") pod \"39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8\" (UID: \"39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8\") " Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.521938 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8-dns-svc\") pod \"39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8\" (UID: \"39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8\") " Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.521999 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kfss\" (UniqueName: \"kubernetes.io/projected/39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8-kube-api-access-4kfss\") pod \"39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8\" (UID: \"39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8\") " Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.526425 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8-kube-api-access-4kfss" (OuterVolumeSpecName: "kube-api-access-4kfss") pod "39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8" (UID: "39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8"). InnerVolumeSpecName "kube-api-access-4kfss". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.553426 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8-config" (OuterVolumeSpecName: "config") pod "39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8" (UID: "39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.555464 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8" (UID: "39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.581974 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.623608 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8-config\") on node \"crc\" DevicePath \"\"" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.623649 5039 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.623666 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4kfss\" (UniqueName: \"kubernetes.io/projected/39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8-kube-api-access-4kfss\") on node \"crc\" DevicePath \"\"" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.814710 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 14:24:25 crc kubenswrapper[5039]: W0130 14:24:25.817984 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6342982f_d092_4d6d_bb77_1ce4083bec47.slice/crio-4f575ffcb686444b49c779fefa151d3f29eadc149e0c401b94c7fa8ea5156521 WatchSource:0}: Error finding container 4f575ffcb686444b49c779fefa151d3f29eadc149e0c401b94c7fa8ea5156521: Status 404 returned error can't find the container with id 4f575ffcb686444b49c779fefa151d3f29eadc149e0c401b94c7fa8ea5156521 Jan 30 14:24:26 crc kubenswrapper[5039]: I0130 14:24:26.108861 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03f3e4de-d43f-449d-bf20-62332da1e661" path="/var/lib/kubelet/pods/03f3e4de-d43f-449d-bf20-62332da1e661/volumes" Jan 30 14:24:26 crc kubenswrapper[5039]: I0130 14:24:26.109967 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d06d513-af8a-494d-9c55-10980cc0e84a" path="/var/lib/kubelet/pods/3d06d513-af8a-494d-9c55-10980cc0e84a/volumes" Jan 30 14:24:26 crc kubenswrapper[5039]: I0130 14:24:26.158787 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6342982f-d092-4d6d-bb77-1ce4083bec47","Type":"ContainerStarted","Data":"4f575ffcb686444b49c779fefa151d3f29eadc149e0c401b94c7fa8ea5156521"} Jan 30 14:24:26 crc kubenswrapper[5039]: I0130 14:24:26.159936 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d529e342-1b61-41e6-a1f7-a08a43d53dab","Type":"ContainerStarted","Data":"23d74a963ea8667b0f94b2f997b7e156bc0192f18611cabac480547052dcc80b"} Jan 30 14:24:26 crc kubenswrapper[5039]: I0130 14:24:26.162167 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-x5wk5" event={"ID":"39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8","Type":"ContainerDied","Data":"6ac616881083272726fdea47fdd6278ddfa6884baf44c7032cf2f20c714df68f"} Jan 30 14:24:26 crc kubenswrapper[5039]: I0130 14:24:26.162257 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-98ddfc8f-x5wk5" Jan 30 14:24:26 crc kubenswrapper[5039]: I0130 14:24:26.162270 5039 scope.go:117] "RemoveContainer" containerID="3240e8f082f7bbf7dbe77fad8804cfe4a24afeecc009b09a1700fa41da0ab8d1" Jan 30 14:24:26 crc kubenswrapper[5039]: I0130 14:24:26.189285 5039 scope.go:117] "RemoveContainer" containerID="67b2b6167ec2b808b95d6d3a04dc268c75ffc8f478d2b8f9bd13d23488e7ebea" Jan 30 14:24:26 crc kubenswrapper[5039]: I0130 14:24:26.197899 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-x5wk5"] Jan 30 14:24:26 crc kubenswrapper[5039]: I0130 14:24:26.211501 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-x5wk5"] Jan 30 14:24:27 crc kubenswrapper[5039]: I0130 14:24:27.177373 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d529e342-1b61-41e6-a1f7-a08a43d53dab","Type":"ContainerStarted","Data":"4226aeeeb9c78fb570d938b6c81f984255edd44ead71a8fa131c31ac7dc118a1"} Jan 30 14:24:27 crc kubenswrapper[5039]: I0130 14:24:27.181831 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6342982f-d092-4d6d-bb77-1ce4083bec47","Type":"ContainerStarted","Data":"c7dbb29123bb56c3f1b5a4b095ac3b2b6582c19a78f598f03bb61938bd82c56f"} Jan 30 14:24:28 crc kubenswrapper[5039]: I0130 14:24:28.104777 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8" path="/var/lib/kubelet/pods/39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8/volumes" Jan 30 14:24:40 crc kubenswrapper[5039]: I0130 14:24:40.094434 5039 scope.go:117] "RemoveContainer" containerID="aa77e5b6320d0bb2b1371d31dd99833cc631f1ca3770ff63e41851c68aa88acc" Jan 30 14:24:41 crc kubenswrapper[5039]: I0130 14:24:41.303741 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerStarted","Data":"c5437eece7dcb42be1e96e01d2de63e613f3adc0a14e34c7b2833a3a695f94ca"} Jan 30 14:24:52 crc kubenswrapper[5039]: I0130 14:24:52.451148 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-d7lqw"] Jan 30 14:24:52 crc kubenswrapper[5039]: E0130 14:24:52.453322 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8" containerName="init" Jan 30 14:24:52 crc kubenswrapper[5039]: I0130 14:24:52.453433 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8" containerName="init" Jan 30 14:24:52 crc kubenswrapper[5039]: E0130 14:24:52.453528 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8" containerName="dnsmasq-dns" Jan 30 14:24:52 crc kubenswrapper[5039]: I0130 14:24:52.453606 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8" containerName="dnsmasq-dns" Jan 30 14:24:52 crc kubenswrapper[5039]: I0130 14:24:52.453869 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8" containerName="dnsmasq-dns" Jan 30 14:24:52 crc kubenswrapper[5039]: I0130 14:24:52.455287 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d7lqw" Jan 30 14:24:52 crc kubenswrapper[5039]: I0130 14:24:52.468214 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d7lqw"] Jan 30 14:24:52 crc kubenswrapper[5039]: I0130 14:24:52.554481 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j28s\" (UniqueName: \"kubernetes.io/projected/bc3985ad-61d2-4e40-9bca-47cbed355387-kube-api-access-7j28s\") pod \"certified-operators-d7lqw\" (UID: \"bc3985ad-61d2-4e40-9bca-47cbed355387\") " pod="openshift-marketplace/certified-operators-d7lqw" Jan 30 14:24:52 crc kubenswrapper[5039]: I0130 14:24:52.555095 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc3985ad-61d2-4e40-9bca-47cbed355387-catalog-content\") pod \"certified-operators-d7lqw\" (UID: \"bc3985ad-61d2-4e40-9bca-47cbed355387\") " pod="openshift-marketplace/certified-operators-d7lqw" Jan 30 14:24:52 crc kubenswrapper[5039]: I0130 14:24:52.555146 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc3985ad-61d2-4e40-9bca-47cbed355387-utilities\") pod \"certified-operators-d7lqw\" (UID: \"bc3985ad-61d2-4e40-9bca-47cbed355387\") " pod="openshift-marketplace/certified-operators-d7lqw" Jan 30 14:24:52 crc kubenswrapper[5039]: I0130 14:24:52.656339 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc3985ad-61d2-4e40-9bca-47cbed355387-catalog-content\") pod \"certified-operators-d7lqw\" (UID: \"bc3985ad-61d2-4e40-9bca-47cbed355387\") " pod="openshift-marketplace/certified-operators-d7lqw" Jan 30 14:24:52 crc kubenswrapper[5039]: I0130 14:24:52.656451 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc3985ad-61d2-4e40-9bca-47cbed355387-utilities\") pod \"certified-operators-d7lqw\" (UID: \"bc3985ad-61d2-4e40-9bca-47cbed355387\") " pod="openshift-marketplace/certified-operators-d7lqw" Jan 30 14:24:52 crc kubenswrapper[5039]: I0130 14:24:52.656537 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7j28s\" (UniqueName: \"kubernetes.io/projected/bc3985ad-61d2-4e40-9bca-47cbed355387-kube-api-access-7j28s\") pod \"certified-operators-d7lqw\" (UID: \"bc3985ad-61d2-4e40-9bca-47cbed355387\") " pod="openshift-marketplace/certified-operators-d7lqw" Jan 30 14:24:52 crc kubenswrapper[5039]: I0130 14:24:52.656960 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc3985ad-61d2-4e40-9bca-47cbed355387-catalog-content\") pod \"certified-operators-d7lqw\" (UID: \"bc3985ad-61d2-4e40-9bca-47cbed355387\") " pod="openshift-marketplace/certified-operators-d7lqw" Jan 30 14:24:52 crc kubenswrapper[5039]: I0130 14:24:52.656978 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc3985ad-61d2-4e40-9bca-47cbed355387-utilities\") pod \"certified-operators-d7lqw\" (UID: \"bc3985ad-61d2-4e40-9bca-47cbed355387\") " pod="openshift-marketplace/certified-operators-d7lqw" Jan 30 14:24:52 crc kubenswrapper[5039]: I0130 14:24:52.677391 5039 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7j28s\" (UniqueName: \"kubernetes.io/projected/bc3985ad-61d2-4e40-9bca-47cbed355387-kube-api-access-7j28s\") pod \"certified-operators-d7lqw\" (UID: \"bc3985ad-61d2-4e40-9bca-47cbed355387\") " pod="openshift-marketplace/certified-operators-d7lqw" Jan 30 14:24:52 crc kubenswrapper[5039]: I0130 14:24:52.781874 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d7lqw" Jan 30 14:24:53 crc kubenswrapper[5039]: I0130 14:24:53.291895 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d7lqw"] Jan 30 14:24:53 crc kubenswrapper[5039]: I0130 14:24:53.398384 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d7lqw" event={"ID":"bc3985ad-61d2-4e40-9bca-47cbed355387","Type":"ContainerStarted","Data":"141812871acb285df5f3457cc9109ef52a46d0d7be7d5e9bba8b031c78ef0272"} Jan 30 14:24:54 crc kubenswrapper[5039]: I0130 14:24:54.406121 5039 generic.go:334] "Generic (PLEG): container finished" podID="bc3985ad-61d2-4e40-9bca-47cbed355387" containerID="966ce3b3dbd5024dfdac92289529ef886513b0049153c98842baaa4a58cf92ca" exitCode=0 Jan 30 14:24:54 crc kubenswrapper[5039]: I0130 14:24:54.406209 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d7lqw" event={"ID":"bc3985ad-61d2-4e40-9bca-47cbed355387","Type":"ContainerDied","Data":"966ce3b3dbd5024dfdac92289529ef886513b0049153c98842baaa4a58cf92ca"} Jan 30 14:24:54 crc kubenswrapper[5039]: I0130 14:24:54.407934 5039 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 14:24:56 crc kubenswrapper[5039]: I0130 14:24:56.440304 5039 generic.go:334] "Generic (PLEG): container finished" podID="bc3985ad-61d2-4e40-9bca-47cbed355387" containerID="feb4f350b839fc0ef62bf7fee6d43b5de9d81d2164ed7f214d572f606af2abfe" exitCode=0 Jan 30 14:24:56 crc kubenswrapper[5039]: I0130 14:24:56.441916 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d7lqw" event={"ID":"bc3985ad-61d2-4e40-9bca-47cbed355387","Type":"ContainerDied","Data":"feb4f350b839fc0ef62bf7fee6d43b5de9d81d2164ed7f214d572f606af2abfe"} Jan 30 14:24:57 crc kubenswrapper[5039]: I0130 14:24:57.448941 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d7lqw" event={"ID":"bc3985ad-61d2-4e40-9bca-47cbed355387","Type":"ContainerStarted","Data":"b7e34b2fec19d7eddfcf7e87867c423d611f126e5973dd3e4faf1647167060ef"} Jan 30 14:24:57 crc kubenswrapper[5039]: I0130 14:24:57.475136 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-d7lqw" podStartSLOduration=3.019271543 podStartE2EDuration="5.475118227s" podCreationTimestamp="2026-01-30 14:24:52 +0000 UTC" firstStartedPulling="2026-01-30 14:24:54.407634441 +0000 UTC m=+4859.068315668" lastFinishedPulling="2026-01-30 14:24:56.863481125 +0000 UTC m=+4861.524162352" observedRunningTime="2026-01-30 14:24:57.466490374 +0000 UTC m=+4862.127171621" watchObservedRunningTime="2026-01-30 14:24:57.475118227 +0000 UTC m=+4862.135799454" Jan 30 14:24:59 crc kubenswrapper[5039]: I0130 14:24:59.464094 5039 generic.go:334] "Generic (PLEG): container finished" podID="d529e342-1b61-41e6-a1f7-a08a43d53dab" containerID="4226aeeeb9c78fb570d938b6c81f984255edd44ead71a8fa131c31ac7dc118a1" exitCode=0 Jan 30 14:24:59 crc 
kubenswrapper[5039]: I0130 14:24:59.464225 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d529e342-1b61-41e6-a1f7-a08a43d53dab","Type":"ContainerDied","Data":"4226aeeeb9c78fb570d938b6c81f984255edd44ead71a8fa131c31ac7dc118a1"}
Jan 30 14:25:00 crc kubenswrapper[5039]: I0130 14:25:00.472349 5039 generic.go:334] "Generic (PLEG): container finished" podID="6342982f-d092-4d6d-bb77-1ce4083bec47" containerID="c7dbb29123bb56c3f1b5a4b095ac3b2b6582c19a78f598f03bb61938bd82c56f" exitCode=0
Jan 30 14:25:00 crc kubenswrapper[5039]: I0130 14:25:00.472461 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6342982f-d092-4d6d-bb77-1ce4083bec47","Type":"ContainerDied","Data":"c7dbb29123bb56c3f1b5a4b095ac3b2b6582c19a78f598f03bb61938bd82c56f"}
Jan 30 14:25:00 crc kubenswrapper[5039]: I0130 14:25:00.475842 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d529e342-1b61-41e6-a1f7-a08a43d53dab","Type":"ContainerStarted","Data":"85e9cfee2995aad0765d2f456c29e23be9eb746dd5a12bcf62509b4132171460"}
Jan 30 14:25:00 crc kubenswrapper[5039]: I0130 14:25:00.476100 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Jan 30 14:25:00 crc kubenswrapper[5039]: I0130 14:25:00.522480 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.522458611 podStartE2EDuration="36.522458611s" podCreationTimestamp="2026-01-30 14:24:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:25:00.52239336 +0000 UTC m=+4865.183074587" watchObservedRunningTime="2026-01-30 14:25:00.522458611 +0000 UTC m=+4865.183139838"
Jan 30 14:25:01 crc kubenswrapper[5039]: I0130 14:25:01.485495 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6342982f-d092-4d6d-bb77-1ce4083bec47","Type":"ContainerStarted","Data":"47a4b9eea850978f17176034c02e10316c4127de765bd90b7690b9d3c7fdbdb0"}
Jan 30 14:25:01 crc kubenswrapper[5039]: I0130 14:25:01.486070 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Jan 30 14:25:02 crc kubenswrapper[5039]: I0130 14:25:02.782324 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-d7lqw"
Jan 30 14:25:02 crc kubenswrapper[5039]: I0130 14:25:02.782608 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-d7lqw"
Jan 30 14:25:02 crc kubenswrapper[5039]: I0130 14:25:02.827038 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-d7lqw"
Jan 30 14:25:02 crc kubenswrapper[5039]: I0130 14:25:02.848139 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.848121003 podStartE2EDuration="37.848121003s" podCreationTimestamp="2026-01-30 14:24:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:25:01.510423881 +0000 UTC m=+4866.171105118" watchObservedRunningTime="2026-01-30 14:25:02.848121003 +0000 UTC m=+4867.508802250"
Jan 30 14:25:03 crc kubenswrapper[5039]: I0130 14:25:03.541881 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-d7lqw"
Jan 30 14:25:03 crc kubenswrapper[5039]: I0130 14:25:03.589414 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d7lqw"]
Jan 30 14:25:05 crc kubenswrapper[5039]: I0130 14:25:05.514712 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-d7lqw" podUID="bc3985ad-61d2-4e40-9bca-47cbed355387" containerName="registry-server" containerID="cri-o://b7e34b2fec19d7eddfcf7e87867c423d611f126e5973dd3e4faf1647167060ef" gracePeriod=2
Jan 30 14:25:05 crc kubenswrapper[5039]: I0130 14:25:05.953180 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d7lqw"
Jan 30 14:25:06 crc kubenswrapper[5039]: I0130 14:25:06.059680 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7j28s\" (UniqueName: \"kubernetes.io/projected/bc3985ad-61d2-4e40-9bca-47cbed355387-kube-api-access-7j28s\") pod \"bc3985ad-61d2-4e40-9bca-47cbed355387\" (UID: \"bc3985ad-61d2-4e40-9bca-47cbed355387\") "
Jan 30 14:25:06 crc kubenswrapper[5039]: I0130 14:25:06.059807 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc3985ad-61d2-4e40-9bca-47cbed355387-catalog-content\") pod \"bc3985ad-61d2-4e40-9bca-47cbed355387\" (UID: \"bc3985ad-61d2-4e40-9bca-47cbed355387\") "
Jan 30 14:25:06 crc kubenswrapper[5039]: I0130 14:25:06.059855 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc3985ad-61d2-4e40-9bca-47cbed355387-utilities\") pod \"bc3985ad-61d2-4e40-9bca-47cbed355387\" (UID: \"bc3985ad-61d2-4e40-9bca-47cbed355387\") "
Jan 30 14:25:06 crc kubenswrapper[5039]: I0130 14:25:06.060788 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc3985ad-61d2-4e40-9bca-47cbed355387-utilities" (OuterVolumeSpecName: "utilities") pod "bc3985ad-61d2-4e40-9bca-47cbed355387" (UID: "bc3985ad-61d2-4e40-9bca-47cbed355387"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 14:25:06 crc kubenswrapper[5039]: I0130 14:25:06.065626 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc3985ad-61d2-4e40-9bca-47cbed355387-kube-api-access-7j28s" (OuterVolumeSpecName: "kube-api-access-7j28s") pod "bc3985ad-61d2-4e40-9bca-47cbed355387" (UID: "bc3985ad-61d2-4e40-9bca-47cbed355387"). InnerVolumeSpecName "kube-api-access-7j28s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 14:25:06 crc kubenswrapper[5039]: I0130 14:25:06.111491 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc3985ad-61d2-4e40-9bca-47cbed355387-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bc3985ad-61d2-4e40-9bca-47cbed355387" (UID: "bc3985ad-61d2-4e40-9bca-47cbed355387"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 14:25:06 crc kubenswrapper[5039]: I0130 14:25:06.161177 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7j28s\" (UniqueName: \"kubernetes.io/projected/bc3985ad-61d2-4e40-9bca-47cbed355387-kube-api-access-7j28s\") on node \"crc\" DevicePath \"\""
Jan 30 14:25:06 crc kubenswrapper[5039]: I0130 14:25:06.161215 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc3985ad-61d2-4e40-9bca-47cbed355387-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 14:25:06 crc kubenswrapper[5039]: I0130 14:25:06.161224 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc3985ad-61d2-4e40-9bca-47cbed355387-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 14:25:06 crc kubenswrapper[5039]: I0130 14:25:06.524264 5039 generic.go:334] "Generic (PLEG): container finished" podID="bc3985ad-61d2-4e40-9bca-47cbed355387" containerID="b7e34b2fec19d7eddfcf7e87867c423d611f126e5973dd3e4faf1647167060ef" exitCode=0
Jan 30 14:25:06 crc kubenswrapper[5039]: I0130 14:25:06.524308 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d7lqw" event={"ID":"bc3985ad-61d2-4e40-9bca-47cbed355387","Type":"ContainerDied","Data":"b7e34b2fec19d7eddfcf7e87867c423d611f126e5973dd3e4faf1647167060ef"}
Jan 30 14:25:06 crc kubenswrapper[5039]: I0130 14:25:06.524340 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d7lqw" event={"ID":"bc3985ad-61d2-4e40-9bca-47cbed355387","Type":"ContainerDied","Data":"141812871acb285df5f3457cc9109ef52a46d0d7be7d5e9bba8b031c78ef0272"}
Jan 30 14:25:06 crc kubenswrapper[5039]: I0130 14:25:06.524357 5039 scope.go:117] "RemoveContainer" containerID="b7e34b2fec19d7eddfcf7e87867c423d611f126e5973dd3e4faf1647167060ef"
Jan 30 14:25:06 crc kubenswrapper[5039]: I0130 14:25:06.524356 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d7lqw"
Jan 30 14:25:06 crc kubenswrapper[5039]: I0130 14:25:06.546572 5039 scope.go:117] "RemoveContainer" containerID="feb4f350b839fc0ef62bf7fee6d43b5de9d81d2164ed7f214d572f606af2abfe"
Jan 30 14:25:06 crc kubenswrapper[5039]: I0130 14:25:06.566139 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d7lqw"]
Jan 30 14:25:06 crc kubenswrapper[5039]: I0130 14:25:06.571986 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-d7lqw"]
Jan 30 14:25:06 crc kubenswrapper[5039]: I0130 14:25:06.590182 5039 scope.go:117] "RemoveContainer" containerID="966ce3b3dbd5024dfdac92289529ef886513b0049153c98842baaa4a58cf92ca"
Jan 30 14:25:06 crc kubenswrapper[5039]: I0130 14:25:06.619497 5039 scope.go:117] "RemoveContainer" containerID="b7e34b2fec19d7eddfcf7e87867c423d611f126e5973dd3e4faf1647167060ef"
Jan 30 14:25:06 crc kubenswrapper[5039]: E0130 14:25:06.619900 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7e34b2fec19d7eddfcf7e87867c423d611f126e5973dd3e4faf1647167060ef\": container with ID starting with b7e34b2fec19d7eddfcf7e87867c423d611f126e5973dd3e4faf1647167060ef not found: ID does not exist" containerID="b7e34b2fec19d7eddfcf7e87867c423d611f126e5973dd3e4faf1647167060ef"
Jan 30 14:25:06 crc kubenswrapper[5039]: I0130 14:25:06.619943 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7e34b2fec19d7eddfcf7e87867c423d611f126e5973dd3e4faf1647167060ef"} err="failed to get container status \"b7e34b2fec19d7eddfcf7e87867c423d611f126e5973dd3e4faf1647167060ef\": rpc error: code = NotFound desc = could not find container \"b7e34b2fec19d7eddfcf7e87867c423d611f126e5973dd3e4faf1647167060ef\": container with ID starting with b7e34b2fec19d7eddfcf7e87867c423d611f126e5973dd3e4faf1647167060ef not found: ID does not exist"
Jan 30 14:25:06 crc kubenswrapper[5039]: I0130 14:25:06.619967 5039 scope.go:117] "RemoveContainer" containerID="feb4f350b839fc0ef62bf7fee6d43b5de9d81d2164ed7f214d572f606af2abfe"
Jan 30 14:25:06 crc kubenswrapper[5039]: E0130 14:25:06.620421 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"feb4f350b839fc0ef62bf7fee6d43b5de9d81d2164ed7f214d572f606af2abfe\": container with ID starting with feb4f350b839fc0ef62bf7fee6d43b5de9d81d2164ed7f214d572f606af2abfe not found: ID does not exist" containerID="feb4f350b839fc0ef62bf7fee6d43b5de9d81d2164ed7f214d572f606af2abfe"
Jan 30 14:25:06 crc kubenswrapper[5039]: I0130 14:25:06.620475 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"feb4f350b839fc0ef62bf7fee6d43b5de9d81d2164ed7f214d572f606af2abfe"} err="failed to get container status \"feb4f350b839fc0ef62bf7fee6d43b5de9d81d2164ed7f214d572f606af2abfe\": rpc error: code = NotFound desc = could not find container \"feb4f350b839fc0ef62bf7fee6d43b5de9d81d2164ed7f214d572f606af2abfe\": container with ID starting with feb4f350b839fc0ef62bf7fee6d43b5de9d81d2164ed7f214d572f606af2abfe not found: ID does not exist"
Jan 30 14:25:06 crc kubenswrapper[5039]: I0130 14:25:06.620497 5039 scope.go:117] "RemoveContainer" containerID="966ce3b3dbd5024dfdac92289529ef886513b0049153c98842baaa4a58cf92ca"
Jan 30 14:25:06 crc kubenswrapper[5039]: E0130 14:25:06.620966 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"966ce3b3dbd5024dfdac92289529ef886513b0049153c98842baaa4a58cf92ca\": container with ID starting with 966ce3b3dbd5024dfdac92289529ef886513b0049153c98842baaa4a58cf92ca not found: ID does not exist" containerID="966ce3b3dbd5024dfdac92289529ef886513b0049153c98842baaa4a58cf92ca"
Jan 30 14:25:06 crc kubenswrapper[5039]: I0130 14:25:06.620993 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"966ce3b3dbd5024dfdac92289529ef886513b0049153c98842baaa4a58cf92ca"} err="failed to get container status \"966ce3b3dbd5024dfdac92289529ef886513b0049153c98842baaa4a58cf92ca\": rpc error: code = NotFound desc = could not find container \"966ce3b3dbd5024dfdac92289529ef886513b0049153c98842baaa4a58cf92ca\": container with ID starting with 966ce3b3dbd5024dfdac92289529ef886513b0049153c98842baaa4a58cf92ca not found: ID does not exist"
Jan 30 14:25:08 crc kubenswrapper[5039]: I0130 14:25:08.103711 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc3985ad-61d2-4e40-9bca-47cbed355387" path="/var/lib/kubelet/pods/bc3985ad-61d2-4e40-9bca-47cbed355387/volumes"
Jan 30 14:25:08 crc kubenswrapper[5039]: I0130 14:25:08.469057 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-n5jc6"]
Jan 30 14:25:08 crc kubenswrapper[5039]: E0130 14:25:08.469423 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc3985ad-61d2-4e40-9bca-47cbed355387" containerName="registry-server"
Jan 30 14:25:08 crc kubenswrapper[5039]: I0130 14:25:08.469444 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc3985ad-61d2-4e40-9bca-47cbed355387" containerName="registry-server"
Jan 30 14:25:08 crc kubenswrapper[5039]: E0130 14:25:08.469459 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc3985ad-61d2-4e40-9bca-47cbed355387" containerName="extract-utilities"
Jan 30 14:25:08 crc kubenswrapper[5039]: I0130 14:25:08.469467 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc3985ad-61d2-4e40-9bca-47cbed355387" containerName="extract-utilities"
Jan 30 14:25:08 crc kubenswrapper[5039]: E0130 14:25:08.469491 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc3985ad-61d2-4e40-9bca-47cbed355387" containerName="extract-content"
Jan 30 14:25:08 crc kubenswrapper[5039]: I0130 14:25:08.469498 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc3985ad-61d2-4e40-9bca-47cbed355387" containerName="extract-content"
Jan 30 14:25:08 crc kubenswrapper[5039]: I0130 14:25:08.469668 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc3985ad-61d2-4e40-9bca-47cbed355387" containerName="registry-server"
Jan 30 14:25:08 crc kubenswrapper[5039]: I0130 14:25:08.470959 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n5jc6"
Jan 30 14:25:08 crc kubenswrapper[5039]: I0130 14:25:08.484962 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n5jc6"]
Jan 30 14:25:08 crc kubenswrapper[5039]: I0130 14:25:08.493234 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac189ca9-2607-4f4e-a572-0e2ac5bf2c25-catalog-content\") pod \"community-operators-n5jc6\" (UID: \"ac189ca9-2607-4f4e-a572-0e2ac5bf2c25\") " pod="openshift-marketplace/community-operators-n5jc6"
Jan 30 14:25:08 crc kubenswrapper[5039]: I0130 14:25:08.493282 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac189ca9-2607-4f4e-a572-0e2ac5bf2c25-utilities\") pod \"community-operators-n5jc6\" (UID: \"ac189ca9-2607-4f4e-a572-0e2ac5bf2c25\") " pod="openshift-marketplace/community-operators-n5jc6"
Jan 30 14:25:08 crc kubenswrapper[5039]: I0130 14:25:08.493391 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svnll\" (UniqueName: \"kubernetes.io/projected/ac189ca9-2607-4f4e-a572-0e2ac5bf2c25-kube-api-access-svnll\") pod \"community-operators-n5jc6\" (UID: \"ac189ca9-2607-4f4e-a572-0e2ac5bf2c25\") " pod="openshift-marketplace/community-operators-n5jc6"
Jan 30 14:25:08 crc kubenswrapper[5039]: I0130 14:25:08.594300 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svnll\" (UniqueName: \"kubernetes.io/projected/ac189ca9-2607-4f4e-a572-0e2ac5bf2c25-kube-api-access-svnll\") pod \"community-operators-n5jc6\" (UID: \"ac189ca9-2607-4f4e-a572-0e2ac5bf2c25\") " pod="openshift-marketplace/community-operators-n5jc6"
Jan 30 14:25:08 crc kubenswrapper[5039]: I0130 14:25:08.594361 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac189ca9-2607-4f4e-a572-0e2ac5bf2c25-catalog-content\") pod \"community-operators-n5jc6\" (UID: \"ac189ca9-2607-4f4e-a572-0e2ac5bf2c25\") " pod="openshift-marketplace/community-operators-n5jc6"
Jan 30 14:25:08 crc kubenswrapper[5039]: I0130 14:25:08.594391 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac189ca9-2607-4f4e-a572-0e2ac5bf2c25-utilities\") pod \"community-operators-n5jc6\" (UID: \"ac189ca9-2607-4f4e-a572-0e2ac5bf2c25\") " pod="openshift-marketplace/community-operators-n5jc6"
Jan 30 14:25:08 crc kubenswrapper[5039]: I0130 14:25:08.594933 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac189ca9-2607-4f4e-a572-0e2ac5bf2c25-utilities\") pod \"community-operators-n5jc6\" (UID: \"ac189ca9-2607-4f4e-a572-0e2ac5bf2c25\") " pod="openshift-marketplace/community-operators-n5jc6"
Jan 30 14:25:08 crc kubenswrapper[5039]: I0130 14:25:08.594969 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac189ca9-2607-4f4e-a572-0e2ac5bf2c25-catalog-content\") pod \"community-operators-n5jc6\" (UID: \"ac189ca9-2607-4f4e-a572-0e2ac5bf2c25\") " pod="openshift-marketplace/community-operators-n5jc6"
Jan 30 14:25:08 crc kubenswrapper[5039]: I0130 14:25:08.614902 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svnll\" (UniqueName: \"kubernetes.io/projected/ac189ca9-2607-4f4e-a572-0e2ac5bf2c25-kube-api-access-svnll\") pod \"community-operators-n5jc6\" (UID: \"ac189ca9-2607-4f4e-a572-0e2ac5bf2c25\") " pod="openshift-marketplace/community-operators-n5jc6"
Jan 30 14:25:08 crc kubenswrapper[5039]: I0130 14:25:08.790233 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n5jc6"
Jan 30 14:25:09 crc kubenswrapper[5039]: I0130 14:25:09.984496 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n5jc6"]
Jan 30 14:25:09 crc kubenswrapper[5039]: W0130 14:25:09.993924 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac189ca9_2607_4f4e_a572_0e2ac5bf2c25.slice/crio-af4e61faf2cd0becf97e11312c241b3b31b160d8ddfdaba59b7e0e03b9299c3f WatchSource:0}: Error finding container af4e61faf2cd0becf97e11312c241b3b31b160d8ddfdaba59b7e0e03b9299c3f: Status 404 returned error can't find the container with id af4e61faf2cd0becf97e11312c241b3b31b160d8ddfdaba59b7e0e03b9299c3f
Jan 30 14:25:10 crc kubenswrapper[5039]: I0130 14:25:10.565659 5039 generic.go:334] "Generic (PLEG): container finished" podID="ac189ca9-2607-4f4e-a572-0e2ac5bf2c25" containerID="2f2e444d8ecc9fe4a574167c5df30803128f528f69b2ad99d0c863edd2b1ad8c" exitCode=0
Jan 30 14:25:10 crc kubenswrapper[5039]: I0130 14:25:10.566054 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n5jc6" event={"ID":"ac189ca9-2607-4f4e-a572-0e2ac5bf2c25","Type":"ContainerDied","Data":"2f2e444d8ecc9fe4a574167c5df30803128f528f69b2ad99d0c863edd2b1ad8c"}
Jan 30 14:25:10 crc kubenswrapper[5039]: I0130 14:25:10.566143 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n5jc6" event={"ID":"ac189ca9-2607-4f4e-a572-0e2ac5bf2c25","Type":"ContainerStarted","Data":"af4e61faf2cd0becf97e11312c241b3b31b160d8ddfdaba59b7e0e03b9299c3f"}
Jan 30 14:25:11 crc kubenswrapper[5039]: I0130 14:25:11.576070 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n5jc6" event={"ID":"ac189ca9-2607-4f4e-a572-0e2ac5bf2c25","Type":"ContainerStarted","Data":"7cc400eb2760d682d6205e4fd056b7150f36a1338c9f15277a317fbedf2e3e2d"}
Jan 30 14:25:12 crc kubenswrapper[5039]: I0130 14:25:12.586090 5039 generic.go:334] "Generic (PLEG): container finished" podID="ac189ca9-2607-4f4e-a572-0e2ac5bf2c25" containerID="7cc400eb2760d682d6205e4fd056b7150f36a1338c9f15277a317fbedf2e3e2d" exitCode=0
Jan 30 14:25:12 crc kubenswrapper[5039]: I0130 14:25:12.586170 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n5jc6" event={"ID":"ac189ca9-2607-4f4e-a572-0e2ac5bf2c25","Type":"ContainerDied","Data":"7cc400eb2760d682d6205e4fd056b7150f36a1338c9f15277a317fbedf2e3e2d"}
Jan 30 14:25:13 crc kubenswrapper[5039]: I0130 14:25:13.595864 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n5jc6" event={"ID":"ac189ca9-2607-4f4e-a572-0e2ac5bf2c25","Type":"ContainerStarted","Data":"dd13b1922dd4b648a8d4ebb28533bd92049a2aba4d58a35abd6822f010c039d4"}
Jan 30 14:25:13 crc kubenswrapper[5039]: I0130 14:25:13.618082 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-n5jc6" podStartSLOduration=3.195164182 podStartE2EDuration="5.618058621s" podCreationTimestamp="2026-01-30 14:25:08 +0000 UTC" firstStartedPulling="2026-01-30 14:25:10.568102896 +0000 UTC m=+4875.228784123" lastFinishedPulling="2026-01-30 14:25:12.990997335 +0000 UTC m=+4877.651678562" observedRunningTime="2026-01-30 14:25:13.610309602 +0000 UTC m=+4878.270990839" watchObservedRunningTime="2026-01-30 14:25:13.618058621 +0000 UTC m=+4878.278739858"
Jan 30 14:25:14 crc kubenswrapper[5039]: I0130 14:25:14.564300 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Jan 30 14:25:15 crc kubenswrapper[5039]: I0130 14:25:15.585278 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Jan 30 14:25:18 crc kubenswrapper[5039]: I0130 14:25:18.790629 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-n5jc6"
Jan 30 14:25:18 crc kubenswrapper[5039]: I0130 14:25:18.791844 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-n5jc6"
Jan 30 14:25:18 crc kubenswrapper[5039]: I0130 14:25:18.834155 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-n5jc6"
Jan 30 14:25:19 crc kubenswrapper[5039]: I0130 14:25:19.693833 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-n5jc6"
Jan 30 14:25:19 crc kubenswrapper[5039]: I0130 14:25:19.744535 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n5jc6"]
Jan 30 14:25:21 crc kubenswrapper[5039]: I0130 14:25:21.665215 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-n5jc6" podUID="ac189ca9-2607-4f4e-a572-0e2ac5bf2c25" containerName="registry-server" containerID="cri-o://dd13b1922dd4b648a8d4ebb28533bd92049a2aba4d58a35abd6822f010c039d4" gracePeriod=2
Jan 30 14:25:22 crc kubenswrapper[5039]: I0130 14:25:22.213913 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n5jc6"
Jan 30 14:25:22 crc kubenswrapper[5039]: I0130 14:25:22.297338 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svnll\" (UniqueName: \"kubernetes.io/projected/ac189ca9-2607-4f4e-a572-0e2ac5bf2c25-kube-api-access-svnll\") pod \"ac189ca9-2607-4f4e-a572-0e2ac5bf2c25\" (UID: \"ac189ca9-2607-4f4e-a572-0e2ac5bf2c25\") "
Jan 30 14:25:22 crc kubenswrapper[5039]: I0130 14:25:22.297456 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac189ca9-2607-4f4e-a572-0e2ac5bf2c25-utilities\") pod \"ac189ca9-2607-4f4e-a572-0e2ac5bf2c25\" (UID: \"ac189ca9-2607-4f4e-a572-0e2ac5bf2c25\") "
Jan 30 14:25:22 crc kubenswrapper[5039]: I0130 14:25:22.297590 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac189ca9-2607-4f4e-a572-0e2ac5bf2c25-catalog-content\") pod \"ac189ca9-2607-4f4e-a572-0e2ac5bf2c25\" (UID: \"ac189ca9-2607-4f4e-a572-0e2ac5bf2c25\") "
Jan 30 14:25:22 crc kubenswrapper[5039]: I0130 14:25:22.298142 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac189ca9-2607-4f4e-a572-0e2ac5bf2c25-utilities" (OuterVolumeSpecName: "utilities") pod "ac189ca9-2607-4f4e-a572-0e2ac5bf2c25" (UID: "ac189ca9-2607-4f4e-a572-0e2ac5bf2c25"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 14:25:22 crc kubenswrapper[5039]: I0130 14:25:22.302097 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac189ca9-2607-4f4e-a572-0e2ac5bf2c25-kube-api-access-svnll" (OuterVolumeSpecName: "kube-api-access-svnll") pod "ac189ca9-2607-4f4e-a572-0e2ac5bf2c25" (UID: "ac189ca9-2607-4f4e-a572-0e2ac5bf2c25"). InnerVolumeSpecName "kube-api-access-svnll". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 14:25:22 crc kubenswrapper[5039]: I0130 14:25:22.353329 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac189ca9-2607-4f4e-a572-0e2ac5bf2c25-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ac189ca9-2607-4f4e-a572-0e2ac5bf2c25" (UID: "ac189ca9-2607-4f4e-a572-0e2ac5bf2c25"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 14:25:22 crc kubenswrapper[5039]: I0130 14:25:22.399525 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svnll\" (UniqueName: \"kubernetes.io/projected/ac189ca9-2607-4f4e-a572-0e2ac5bf2c25-kube-api-access-svnll\") on node \"crc\" DevicePath \"\""
Jan 30 14:25:22 crc kubenswrapper[5039]: I0130 14:25:22.399576 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac189ca9-2607-4f4e-a572-0e2ac5bf2c25-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 14:25:22 crc kubenswrapper[5039]: I0130 14:25:22.399589 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac189ca9-2607-4f4e-a572-0e2ac5bf2c25-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 14:25:22 crc kubenswrapper[5039]: I0130 14:25:22.677938 5039 generic.go:334] "Generic (PLEG): container finished" podID="ac189ca9-2607-4f4e-a572-0e2ac5bf2c25" containerID="dd13b1922dd4b648a8d4ebb28533bd92049a2aba4d58a35abd6822f010c039d4" exitCode=0
Jan 30 14:25:22 crc kubenswrapper[5039]: I0130 14:25:22.678083 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n5jc6"
Jan 30 14:25:22 crc kubenswrapper[5039]: I0130 14:25:22.678143 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n5jc6" event={"ID":"ac189ca9-2607-4f4e-a572-0e2ac5bf2c25","Type":"ContainerDied","Data":"dd13b1922dd4b648a8d4ebb28533bd92049a2aba4d58a35abd6822f010c039d4"}
Jan 30 14:25:22 crc kubenswrapper[5039]: I0130 14:25:22.678533 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n5jc6" event={"ID":"ac189ca9-2607-4f4e-a572-0e2ac5bf2c25","Type":"ContainerDied","Data":"af4e61faf2cd0becf97e11312c241b3b31b160d8ddfdaba59b7e0e03b9299c3f"}
Jan 30 14:25:22 crc kubenswrapper[5039]: I0130 14:25:22.678595 5039 scope.go:117] "RemoveContainer" containerID="dd13b1922dd4b648a8d4ebb28533bd92049a2aba4d58a35abd6822f010c039d4"
Jan 30 14:25:22 crc kubenswrapper[5039]: I0130 14:25:22.700854 5039 scope.go:117] "RemoveContainer" containerID="7cc400eb2760d682d6205e4fd056b7150f36a1338c9f15277a317fbedf2e3e2d"
Jan 30 14:25:22 crc kubenswrapper[5039]: I0130 14:25:22.708670 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n5jc6"]
Jan 30 14:25:22 crc kubenswrapper[5039]: I0130 14:25:22.714947 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-n5jc6"]
Jan 30 14:25:22 crc kubenswrapper[5039]: I0130 14:25:22.742917 5039 scope.go:117] "RemoveContainer" containerID="2f2e444d8ecc9fe4a574167c5df30803128f528f69b2ad99d0c863edd2b1ad8c"
Jan 30 14:25:22 crc kubenswrapper[5039]: I0130 14:25:22.767185 5039 scope.go:117] "RemoveContainer" containerID="dd13b1922dd4b648a8d4ebb28533bd92049a2aba4d58a35abd6822f010c039d4"
Jan 30 14:25:22 crc kubenswrapper[5039]: E0130 14:25:22.767686 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd13b1922dd4b648a8d4ebb28533bd92049a2aba4d58a35abd6822f010c039d4\": container with ID starting with dd13b1922dd4b648a8d4ebb28533bd92049a2aba4d58a35abd6822f010c039d4 not found: ID does not exist" containerID="dd13b1922dd4b648a8d4ebb28533bd92049a2aba4d58a35abd6822f010c039d4"
Jan 30 14:25:22 crc kubenswrapper[5039]: I0130 14:25:22.767721 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd13b1922dd4b648a8d4ebb28533bd92049a2aba4d58a35abd6822f010c039d4"} err="failed to get container status \"dd13b1922dd4b648a8d4ebb28533bd92049a2aba4d58a35abd6822f010c039d4\": rpc error: code = NotFound desc = could not find container \"dd13b1922dd4b648a8d4ebb28533bd92049a2aba4d58a35abd6822f010c039d4\": container with ID starting with dd13b1922dd4b648a8d4ebb28533bd92049a2aba4d58a35abd6822f010c039d4 not found: ID does not exist"
Jan 30 14:25:22 crc kubenswrapper[5039]: I0130 14:25:22.767740 5039 scope.go:117] "RemoveContainer" containerID="7cc400eb2760d682d6205e4fd056b7150f36a1338c9f15277a317fbedf2e3e2d"
Jan 30 14:25:22 crc kubenswrapper[5039]: E0130 14:25:22.768110 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cc400eb2760d682d6205e4fd056b7150f36a1338c9f15277a317fbedf2e3e2d\": container with ID starting with 7cc400eb2760d682d6205e4fd056b7150f36a1338c9f15277a317fbedf2e3e2d not found: ID does not exist" containerID="7cc400eb2760d682d6205e4fd056b7150f36a1338c9f15277a317fbedf2e3e2d"
Jan 30 14:25:22 crc kubenswrapper[5039]: I0130 14:25:22.768217 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cc400eb2760d682d6205e4fd056b7150f36a1338c9f15277a317fbedf2e3e2d"} err="failed to get container status \"7cc400eb2760d682d6205e4fd056b7150f36a1338c9f15277a317fbedf2e3e2d\": rpc error: code = NotFound desc = could not find container \"7cc400eb2760d682d6205e4fd056b7150f36a1338c9f15277a317fbedf2e3e2d\": container with ID starting with 7cc400eb2760d682d6205e4fd056b7150f36a1338c9f15277a317fbedf2e3e2d not found: ID does not exist"
Jan 30 14:25:22 crc kubenswrapper[5039]: I0130 14:25:22.768305 5039 scope.go:117] "RemoveContainer" containerID="2f2e444d8ecc9fe4a574167c5df30803128f528f69b2ad99d0c863edd2b1ad8c"
Jan 30 14:25:22 crc kubenswrapper[5039]: E0130 14:25:22.768731 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f2e444d8ecc9fe4a574167c5df30803128f528f69b2ad99d0c863edd2b1ad8c\": container with ID starting with 2f2e444d8ecc9fe4a574167c5df30803128f528f69b2ad99d0c863edd2b1ad8c not found: ID does not exist" containerID="2f2e444d8ecc9fe4a574167c5df30803128f528f69b2ad99d0c863edd2b1ad8c"
Jan 30 14:25:22 crc kubenswrapper[5039]: I0130 14:25:22.768832 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f2e444d8ecc9fe4a574167c5df30803128f528f69b2ad99d0c863edd2b1ad8c"} err="failed to get container status \"2f2e444d8ecc9fe4a574167c5df30803128f528f69b2ad99d0c863edd2b1ad8c\": rpc error: code = NotFound desc = could not find container \"2f2e444d8ecc9fe4a574167c5df30803128f528f69b2ad99d0c863edd2b1ad8c\": container with ID starting with 2f2e444d8ecc9fe4a574167c5df30803128f528f69b2ad99d0c863edd2b1ad8c not found: ID does not exist"
Jan 30 14:25:24 crc kubenswrapper[5039]: I0130 14:25:24.105140 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac189ca9-2607-4f4e-a572-0e2ac5bf2c25" path="/var/lib/kubelet/pods/ac189ca9-2607-4f4e-a572-0e2ac5bf2c25/volumes"
Jan 30 14:25:26 crc kubenswrapper[5039]: I0130 14:25:26.477415 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"]
Jan 30 14:25:26 crc kubenswrapper[5039]: E0130 14:25:26.478467 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac189ca9-2607-4f4e-a572-0e2ac5bf2c25" containerName="registry-server"
Jan 30 14:25:26 crc kubenswrapper[5039]: I0130 14:25:26.478492 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac189ca9-2607-4f4e-a572-0e2ac5bf2c25" containerName="registry-server"
Jan 30 14:25:26 crc kubenswrapper[5039]: E0130 14:25:26.478560 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac189ca9-2607-4f4e-a572-0e2ac5bf2c25" containerName="extract-utilities"
Jan 30 14:25:26 crc kubenswrapper[5039]: I0130 14:25:26.478573 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac189ca9-2607-4f4e-a572-0e2ac5bf2c25" containerName="extract-utilities"
Jan 30 14:25:26 crc kubenswrapper[5039]: E0130 14:25:26.478596 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac189ca9-2607-4f4e-a572-0e2ac5bf2c25" containerName="extract-content"
Jan 30 14:25:26 crc kubenswrapper[5039]: I0130 14:25:26.478608 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac189ca9-2607-4f4e-a572-0e2ac5bf2c25" containerName="extract-content"
Jan 30 14:25:26 crc kubenswrapper[5039]: I0130 14:25:26.478852 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac189ca9-2607-4f4e-a572-0e2ac5bf2c25" containerName="registry-server"
Jan 30 14:25:26 crc kubenswrapper[5039]: I0130 14:25:26.479699 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 30 14:25:26 crc kubenswrapper[5039]: I0130 14:25:26.482341 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-47rtz"
Jan 30 14:25:26 crc kubenswrapper[5039]: I0130 14:25:26.486970 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"]
Jan 30 14:25:26 crc kubenswrapper[5039]: I0130 14:25:26.565803 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhz44\" (UniqueName: \"kubernetes.io/projected/9904d62e-243b-4b88-b712-dbfd4154af6f-kube-api-access-jhz44\") pod \"mariadb-client\" (UID: \"9904d62e-243b-4b88-b712-dbfd4154af6f\") " pod="openstack/mariadb-client"
Jan 30 14:25:26 crc kubenswrapper[5039]: I0130 14:25:26.667257 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhz44\" (UniqueName: \"kubernetes.io/projected/9904d62e-243b-4b88-b712-dbfd4154af6f-kube-api-access-jhz44\") pod \"mariadb-client\" (UID: \"9904d62e-243b-4b88-b712-dbfd4154af6f\") " pod="openstack/mariadb-client"
Jan 30 14:25:26 crc kubenswrapper[5039]: I0130 14:25:26.698042 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhz44\" (UniqueName: \"kubernetes.io/projected/9904d62e-243b-4b88-b712-dbfd4154af6f-kube-api-access-jhz44\") pod \"mariadb-client\" (UID: \"9904d62e-243b-4b88-b712-dbfd4154af6f\") " pod="openstack/mariadb-client"
Jan 30 14:25:26 crc kubenswrapper[5039]: I0130 14:25:26.810039 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 30 14:25:27 crc kubenswrapper[5039]: I0130 14:25:27.322246 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"]
Jan 30 14:25:27 crc kubenswrapper[5039]: W0130 14:25:27.325240 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9904d62e_243b_4b88_b712_dbfd4154af6f.slice/crio-5cfc9671bc716aa93320862dfca12e52e436aeb36d5c2a860308948749264a6e WatchSource:0}: Error finding container 5cfc9671bc716aa93320862dfca12e52e436aeb36d5c2a860308948749264a6e: Status 404 returned error can't find the container with id 5cfc9671bc716aa93320862dfca12e52e436aeb36d5c2a860308948749264a6e
Jan 30 14:25:27 crc kubenswrapper[5039]: I0130 14:25:27.718124 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"9904d62e-243b-4b88-b712-dbfd4154af6f","Type":"ContainerStarted","Data":"8e15e0ba86f39e69ae3ec844506618366db202aac08174119f546e986123f24e"}
Jan 30 14:25:27 crc kubenswrapper[5039]: I0130 14:25:27.718630 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"9904d62e-243b-4b88-b712-dbfd4154af6f","Type":"ContainerStarted","Data":"5cfc9671bc716aa93320862dfca12e52e436aeb36d5c2a860308948749264a6e"}
Jan 30 14:25:27 crc kubenswrapper[5039]: I0130 14:25:27.734624 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-client" podStartSLOduration=1.734601997 podStartE2EDuration="1.734601997s" podCreationTimestamp="2026-01-30 14:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:25:27.729197372 +0000 UTC m=+4892.389878619" watchObservedRunningTime="2026-01-30 14:25:27.734601997 +0000 UTC m=+4892.395283224"
Jan 30 14:25:39 crc kubenswrapper[5039]: E0130 14:25:39.086896 5039 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.188:52722->38.102.83.188:34017: write tcp 38.102.83.188:52722->38.102.83.188:34017: write: broken pipe
Jan 30 14:25:42 crc kubenswrapper[5039]: I0130 14:25:42.597411 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"]
Jan 30 14:25:42 crc kubenswrapper[5039]: I0130 14:25:42.597881 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mariadb-client" podUID="9904d62e-243b-4b88-b712-dbfd4154af6f" containerName="mariadb-client" containerID="cri-o://8e15e0ba86f39e69ae3ec844506618366db202aac08174119f546e986123f24e" gracePeriod=30
Jan 30 14:25:42 crc kubenswrapper[5039]: I0130 14:25:42.832350 5039 generic.go:334] "Generic (PLEG): container finished" podID="9904d62e-243b-4b88-b712-dbfd4154af6f" containerID="8e15e0ba86f39e69ae3ec844506618366db202aac08174119f546e986123f24e" exitCode=143
Jan 30 14:25:42 crc kubenswrapper[5039]: I0130 14:25:42.832482 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"9904d62e-243b-4b88-b712-dbfd4154af6f","Type":"ContainerDied","Data":"8e15e0ba86f39e69ae3ec844506618366db202aac08174119f546e986123f24e"}
Jan 30 14:25:43 crc kubenswrapper[5039]: I0130 14:25:43.029531 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 30 14:25:43 crc kubenswrapper[5039]: I0130 14:25:43.132204 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhz44\" (UniqueName: \"kubernetes.io/projected/9904d62e-243b-4b88-b712-dbfd4154af6f-kube-api-access-jhz44\") pod \"9904d62e-243b-4b88-b712-dbfd4154af6f\" (UID: \"9904d62e-243b-4b88-b712-dbfd4154af6f\") "
Jan 30 14:25:43 crc kubenswrapper[5039]: I0130 14:25:43.141283 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9904d62e-243b-4b88-b712-dbfd4154af6f-kube-api-access-jhz44" (OuterVolumeSpecName: "kube-api-access-jhz44") pod "9904d62e-243b-4b88-b712-dbfd4154af6f" (UID: "9904d62e-243b-4b88-b712-dbfd4154af6f"). InnerVolumeSpecName "kube-api-access-jhz44". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 14:25:43 crc kubenswrapper[5039]: I0130 14:25:43.234636 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhz44\" (UniqueName: \"kubernetes.io/projected/9904d62e-243b-4b88-b712-dbfd4154af6f-kube-api-access-jhz44\") on node \"crc\" DevicePath \"\""
Jan 30 14:25:43 crc kubenswrapper[5039]: I0130 14:25:43.844089 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"9904d62e-243b-4b88-b712-dbfd4154af6f","Type":"ContainerDied","Data":"5cfc9671bc716aa93320862dfca12e52e436aeb36d5c2a860308948749264a6e"}
Jan 30 14:25:43 crc kubenswrapper[5039]: I0130 14:25:43.844179 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 30 14:25:43 crc kubenswrapper[5039]: I0130 14:25:43.844398 5039 scope.go:117] "RemoveContainer" containerID="8e15e0ba86f39e69ae3ec844506618366db202aac08174119f546e986123f24e"
Jan 30 14:25:43 crc kubenswrapper[5039]: I0130 14:25:43.885916 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"]
Jan 30 14:25:43 crc kubenswrapper[5039]: I0130 14:25:43.892784 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"]
Jan 30 14:25:44 crc kubenswrapper[5039]: I0130 14:25:44.112704 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9904d62e-243b-4b88-b712-dbfd4154af6f" path="/var/lib/kubelet/pods/9904d62e-243b-4b88-b712-dbfd4154af6f/volumes"
Jan 30 14:26:09 crc kubenswrapper[5039]: I0130 14:26:09.025501 5039 scope.go:117] "RemoveContainer" containerID="561e8874192a0f588aad5296039ba04351161a889e428c120e4027534200fd18"
Jan 30 14:27:07 crc kubenswrapper[5039]: I0130 14:27:07.742655 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 14:27:07 crc kubenswrapper[5039]: I0130 14:27:07.743195 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 14:27:37 crc kubenswrapper[5039]: I0130 14:27:37.742173 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 14:27:37 crc kubenswrapper[5039]: I0130 14:27:37.742788 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 14:28:07 crc kubenswrapper[5039]: I0130 14:28:07.742285 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 14:28:07 crc kubenswrapper[5039]: I0130 14:28:07.742942 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 14:28:07 crc kubenswrapper[5039]: I0130 14:28:07.743000 5039 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t2btn"
Jan 30 14:28:07 crc kubenswrapper[5039]: I0130 14:28:07.743683 5039 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c5437eece7dcb42be1e96e01d2de63e613f3adc0a14e34c7b2833a3a695f94ca"} pod="openshift-machine-config-operator/machine-config-daemon-t2btn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 14:28:07 crc kubenswrapper[5039]: I0130 14:28:07.743759 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" containerID="cri-o://c5437eece7dcb42be1e96e01d2de63e613f3adc0a14e34c7b2833a3a695f94ca" gracePeriod=600
Jan 30 14:28:07 crc kubenswrapper[5039]: I0130 14:28:07.946904 5039 generic.go:334] "Generic (PLEG): container finished" podID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerID="c5437eece7dcb42be1e96e01d2de63e613f3adc0a14e34c7b2833a3a695f94ca" exitCode=0
Jan 30 14:28:07 crc kubenswrapper[5039]: I0130 14:28:07.946975 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerDied","Data":"c5437eece7dcb42be1e96e01d2de63e613f3adc0a14e34c7b2833a3a695f94ca"}
Jan 30 14:28:07 crc kubenswrapper[5039]: I0130 14:28:07.947272 5039 scope.go:117] "RemoveContainer" containerID="aa77e5b6320d0bb2b1371d31dd99833cc631f1ca3770ff63e41851c68aa88acc"
Jan 30 14:28:08 crc kubenswrapper[5039]: I0130 14:28:08.962592 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerStarted","Data":"33707bf9f6c082f37a2c677d559a1772be55398c970c4d16a90343a477a0fad4"}
Jan 30 14:29:47 crc kubenswrapper[5039]: I0130 14:29:47.382163 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-t4xkk"]
Jan 30 14:29:47 crc kubenswrapper[5039]: E0130 14:29:47.383038 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9904d62e-243b-4b88-b712-dbfd4154af6f" containerName="mariadb-client"
Jan 30 14:29:47 crc kubenswrapper[5039]: I0130 14:29:47.383054 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="9904d62e-243b-4b88-b712-dbfd4154af6f" containerName="mariadb-client"
Jan 30 14:29:47 crc kubenswrapper[5039]: I0130 14:29:47.383248 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="9904d62e-243b-4b88-b712-dbfd4154af6f" containerName="mariadb-client"
Jan 30 14:29:47 crc kubenswrapper[5039]: I0130 14:29:47.384559 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t4xkk"
Jan 30 14:29:47 crc kubenswrapper[5039]: I0130 14:29:47.393588 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t4xkk"]
Jan 30 14:29:47 crc kubenswrapper[5039]: I0130 14:29:47.535619 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75f9a551-24a2-4d35-8b7c-9774386d11d7-utilities\") pod \"redhat-marketplace-t4xkk\" (UID: \"75f9a551-24a2-4d35-8b7c-9774386d11d7\") " pod="openshift-marketplace/redhat-marketplace-t4xkk"
Jan 30 14:29:47 crc kubenswrapper[5039]: I0130 14:29:47.536033 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75f9a551-24a2-4d35-8b7c-9774386d11d7-catalog-content\") pod \"redhat-marketplace-t4xkk\" (UID: \"75f9a551-24a2-4d35-8b7c-9774386d11d7\") " pod="openshift-marketplace/redhat-marketplace-t4xkk"
Jan 30 14:29:47 crc kubenswrapper[5039]: I0130 14:29:47.536124 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52hr5\" (UniqueName: \"kubernetes.io/projected/75f9a551-24a2-4d35-8b7c-9774386d11d7-kube-api-access-52hr5\") pod \"redhat-marketplace-t4xkk\" (UID: \"75f9a551-24a2-4d35-8b7c-9774386d11d7\") " pod="openshift-marketplace/redhat-marketplace-t4xkk"
Jan 30 14:29:47 crc kubenswrapper[5039]: I0130 14:29:47.637684 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75f9a551-24a2-4d35-8b7c-9774386d11d7-catalog-content\") pod \"redhat-marketplace-t4xkk\" (UID: \"75f9a551-24a2-4d35-8b7c-9774386d11d7\") " pod="openshift-marketplace/redhat-marketplace-t4xkk"
Jan 30 14:29:47 crc kubenswrapper[5039]: I0130 14:29:47.637798 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52hr5\" (UniqueName: \"kubernetes.io/projected/75f9a551-24a2-4d35-8b7c-9774386d11d7-kube-api-access-52hr5\") pod \"redhat-marketplace-t4xkk\" (UID: \"75f9a551-24a2-4d35-8b7c-9774386d11d7\") " pod="openshift-marketplace/redhat-marketplace-t4xkk"
Jan 30 14:29:47 crc kubenswrapper[5039]: I0130 14:29:47.637869 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75f9a551-24a2-4d35-8b7c-9774386d11d7-utilities\") pod \"redhat-marketplace-t4xkk\" (UID: \"75f9a551-24a2-4d35-8b7c-9774386d11d7\") " pod="openshift-marketplace/redhat-marketplace-t4xkk"
Jan 30 14:29:47 crc kubenswrapper[5039]: I0130 14:29:47.638488 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75f9a551-24a2-4d35-8b7c-9774386d11d7-utilities\") pod \"redhat-marketplace-t4xkk\" (UID: \"75f9a551-24a2-4d35-8b7c-9774386d11d7\") " pod="openshift-marketplace/redhat-marketplace-t4xkk"
Jan 30 14:29:47 crc kubenswrapper[5039]: I0130 14:29:47.638706 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75f9a551-24a2-4d35-8b7c-9774386d11d7-catalog-content\") pod \"redhat-marketplace-t4xkk\" (UID: \"75f9a551-24a2-4d35-8b7c-9774386d11d7\") " pod="openshift-marketplace/redhat-marketplace-t4xkk"
Jan 30 14:29:47 crc kubenswrapper[5039]: I0130 14:29:47.660491 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52hr5\" (UniqueName: \"kubernetes.io/projected/75f9a551-24a2-4d35-8b7c-9774386d11d7-kube-api-access-52hr5\") pod \"redhat-marketplace-t4xkk\" (UID: \"75f9a551-24a2-4d35-8b7c-9774386d11d7\") " pod="openshift-marketplace/redhat-marketplace-t4xkk"
Jan 30 14:29:47 crc kubenswrapper[5039]: I0130 14:29:47.717580 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t4xkk"
Jan 30 14:29:48 crc kubenswrapper[5039]: I0130 14:29:48.196374 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t4xkk"]
Jan 30 14:29:48 crc kubenswrapper[5039]: I0130 14:29:48.687798 5039 generic.go:334] "Generic (PLEG): container finished" podID="75f9a551-24a2-4d35-8b7c-9774386d11d7" containerID="466334141f0bec6081d3868f5183fd3a796ee115c53f87bd9ce68baf7b1cac6f" exitCode=0
Jan 30 14:29:48 crc kubenswrapper[5039]: I0130 14:29:48.687979 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t4xkk" event={"ID":"75f9a551-24a2-4d35-8b7c-9774386d11d7","Type":"ContainerDied","Data":"466334141f0bec6081d3868f5183fd3a796ee115c53f87bd9ce68baf7b1cac6f"}
Jan 30 14:29:48 crc kubenswrapper[5039]: I0130 14:29:48.688259 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t4xkk" event={"ID":"75f9a551-24a2-4d35-8b7c-9774386d11d7","Type":"ContainerStarted","Data":"88bcde481d648d5c5e2199857e3a9e11082446b8b45872408a3697133f32701b"}
Jan 30 14:29:50 crc kubenswrapper[5039]: I0130 14:29:50.710947 5039 generic.go:334] "Generic (PLEG): container finished" podID="75f9a551-24a2-4d35-8b7c-9774386d11d7" containerID="d2e11bae6bacf511410b9c6ac793519049f41045ab1c73fea1097d864489811f" exitCode=0
Jan 30 14:29:50 crc kubenswrapper[5039]: I0130 14:29:50.712168 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t4xkk" event={"ID":"75f9a551-24a2-4d35-8b7c-9774386d11d7","Type":"ContainerDied","Data":"d2e11bae6bacf511410b9c6ac793519049f41045ab1c73fea1097d864489811f"}
Jan 30 14:29:50 crc kubenswrapper[5039]: I0130 14:29:50.886914 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-copy-data"]
Jan 30 14:29:50 crc kubenswrapper[5039]: I0130 14:29:50.888703 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data"
Jan 30 14:29:50 crc kubenswrapper[5039]: I0130 14:29:50.893152 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-47rtz"
Jan 30 14:29:50 crc kubenswrapper[5039]: I0130 14:29:50.895608 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"]
Jan 30 14:29:50 crc kubenswrapper[5039]: I0130 14:29:50.990066 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c80106dd-d5b7-415b-ac13-cda6db3e0c2c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c80106dd-d5b7-415b-ac13-cda6db3e0c2c\") pod \"mariadb-copy-data\" (UID: \"d0ef5c71-7162-4911-a514-7be99e7a5cc0\") " pod="openstack/mariadb-copy-data"
Jan 30 14:29:50 crc kubenswrapper[5039]: I0130 14:29:50.990118 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2lpq\" (UniqueName: \"kubernetes.io/projected/d0ef5c71-7162-4911-a514-7be99e7a5cc0-kube-api-access-v2lpq\") pod \"mariadb-copy-data\" (UID: \"d0ef5c71-7162-4911-a514-7be99e7a5cc0\") " pod="openstack/mariadb-copy-data"
Jan 30 14:29:51 crc kubenswrapper[5039]: I0130 14:29:51.091704 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c80106dd-d5b7-415b-ac13-cda6db3e0c2c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c80106dd-d5b7-415b-ac13-cda6db3e0c2c\") pod \"mariadb-copy-data\" (UID: \"d0ef5c71-7162-4911-a514-7be99e7a5cc0\") " pod="openstack/mariadb-copy-data"
Jan 30 14:29:51 crc kubenswrapper[5039]: I0130 14:29:51.091789 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2lpq\" (UniqueName: \"kubernetes.io/projected/d0ef5c71-7162-4911-a514-7be99e7a5cc0-kube-api-access-v2lpq\") pod \"mariadb-copy-data\" (UID: \"d0ef5c71-7162-4911-a514-7be99e7a5cc0\") " pod="openstack/mariadb-copy-data"
Jan 30 14:29:51 crc kubenswrapper[5039]: I0130 14:29:51.094472 5039 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 30 14:29:51 crc kubenswrapper[5039]: I0130 14:29:51.094523 5039 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c80106dd-d5b7-415b-ac13-cda6db3e0c2c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c80106dd-d5b7-415b-ac13-cda6db3e0c2c\") pod \"mariadb-copy-data\" (UID: \"d0ef5c71-7162-4911-a514-7be99e7a5cc0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/fa10d350335284513035fe335a8208c92d8dd24527e66499ce06078487f02b72/globalmount\"" pod="openstack/mariadb-copy-data"
Jan 30 14:29:51 crc kubenswrapper[5039]: I0130 14:29:51.111615 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2lpq\" (UniqueName: \"kubernetes.io/projected/d0ef5c71-7162-4911-a514-7be99e7a5cc0-kube-api-access-v2lpq\") pod \"mariadb-copy-data\" (UID: \"d0ef5c71-7162-4911-a514-7be99e7a5cc0\") " pod="openstack/mariadb-copy-data"
Jan 30 14:29:51 crc kubenswrapper[5039]: I0130 14:29:51.123209 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c80106dd-d5b7-415b-ac13-cda6db3e0c2c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c80106dd-d5b7-415b-ac13-cda6db3e0c2c\") pod \"mariadb-copy-data\" (UID: \"d0ef5c71-7162-4911-a514-7be99e7a5cc0\") " pod="openstack/mariadb-copy-data"
Jan 30 14:29:51 crc kubenswrapper[5039]: I0130 14:29:51.214869 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data"
Jan 30 14:29:51 crc kubenswrapper[5039]: I0130 14:29:51.721989 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t4xkk" event={"ID":"75f9a551-24a2-4d35-8b7c-9774386d11d7","Type":"ContainerStarted","Data":"53fc7da3bde9c9b993917d8e924400893c1a7662022df210631e51678a06cf25"}
Jan 30 14:29:51 crc kubenswrapper[5039]: I0130 14:29:51.744138 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-t4xkk" podStartSLOduration=2.127129106 podStartE2EDuration="4.744116228s" podCreationTimestamp="2026-01-30 14:29:47 +0000 UTC" firstStartedPulling="2026-01-30 14:29:48.690948167 +0000 UTC m=+5153.351629394" lastFinishedPulling="2026-01-30 14:29:51.307935289 +0000 UTC m=+5155.968616516" observedRunningTime="2026-01-30 14:29:51.740183012 +0000 UTC m=+5156.400864269" watchObservedRunningTime="2026-01-30 14:29:51.744116228 +0000 UTC m=+5156.404797455"
Jan 30 14:29:51 crc kubenswrapper[5039]: I0130 14:29:51.770569 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"]
Jan 30 14:29:51 crc kubenswrapper[5039]: W0130 14:29:51.771820 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0ef5c71_7162_4911_a514_7be99e7a5cc0.slice/crio-a05c66d7dbd6b89c42010df148321c93ead2f71c75043545393905945a1304b6 WatchSource:0}: Error finding container a05c66d7dbd6b89c42010df148321c93ead2f71c75043545393905945a1304b6: Status 404 returned error can't find the container with id a05c66d7dbd6b89c42010df148321c93ead2f71c75043545393905945a1304b6
Jan 30 14:29:52 crc kubenswrapper[5039]: I0130 14:29:52.730306 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"d0ef5c71-7162-4911-a514-7be99e7a5cc0","Type":"ContainerStarted","Data":"0c6e71f4150903075bb8576cfed6582063b1fb2d2fadc1cd34bdddf1e82a2046"}
Jan 30 14:29:52 crc kubenswrapper[5039]: I0130 14:29:52.730766 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"d0ef5c71-7162-4911-a514-7be99e7a5cc0","Type":"ContainerStarted","Data":"a05c66d7dbd6b89c42010df148321c93ead2f71c75043545393905945a1304b6"}
Jan 30 14:29:52 crc kubenswrapper[5039]: I0130 14:29:52.748870 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-copy-data" podStartSLOduration=3.748849558 podStartE2EDuration="3.748849558s" podCreationTimestamp="2026-01-30 14:29:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:29:52.746955457 +0000 UTC m=+5157.407636704" watchObservedRunningTime="2026-01-30 14:29:52.748849558 +0000 UTC m=+5157.409530785"
Jan 30 14:29:55 crc kubenswrapper[5039]: I0130 14:29:55.395679 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"]
Jan 30 14:29:55 crc kubenswrapper[5039]: I0130 14:29:55.397057 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 30 14:29:55 crc kubenswrapper[5039]: I0130 14:29:55.407174 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"]
Jan 30 14:29:55 crc kubenswrapper[5039]: I0130 14:29:55.458809 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-746zg\" (UniqueName: \"kubernetes.io/projected/96c9c051-4511-4171-95f7-4819156ba132-kube-api-access-746zg\") pod \"mariadb-client\" (UID: \"96c9c051-4511-4171-95f7-4819156ba132\") " pod="openstack/mariadb-client"
Jan 30 14:29:55 crc kubenswrapper[5039]: I0130 14:29:55.559855 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-746zg\" (UniqueName: \"kubernetes.io/projected/96c9c051-4511-4171-95f7-4819156ba132-kube-api-access-746zg\") pod \"mariadb-client\" (UID: \"96c9c051-4511-4171-95f7-4819156ba132\") " pod="openstack/mariadb-client"
Jan 30 14:29:55 crc kubenswrapper[5039]: I0130 14:29:55.579032 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-746zg\" (UniqueName: \"kubernetes.io/projected/96c9c051-4511-4171-95f7-4819156ba132-kube-api-access-746zg\") pod \"mariadb-client\" (UID: \"96c9c051-4511-4171-95f7-4819156ba132\") " pod="openstack/mariadb-client"
Jan 30 14:29:55 crc kubenswrapper[5039]: I0130 14:29:55.714314 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 30 14:29:56 crc kubenswrapper[5039]: I0130 14:29:56.168646 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"]
Jan 30 14:29:56 crc kubenswrapper[5039]: I0130 14:29:56.771898 5039 generic.go:334] "Generic (PLEG): container finished" podID="96c9c051-4511-4171-95f7-4819156ba132" containerID="c7525f286ced61acac6cb9f4db71533bcae2d083ff6237893318ae1a69940aae" exitCode=0
Jan 30 14:29:56 crc kubenswrapper[5039]: I0130 14:29:56.772064 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"96c9c051-4511-4171-95f7-4819156ba132","Type":"ContainerDied","Data":"c7525f286ced61acac6cb9f4db71533bcae2d083ff6237893318ae1a69940aae"}
Jan 30 14:29:56 crc kubenswrapper[5039]: I0130 14:29:56.772304 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"96c9c051-4511-4171-95f7-4819156ba132","Type":"ContainerStarted","Data":"0c348fdaa9092fd138d89b19f9fea4c87ce92796d99279f8e73ebe6ca5e68b61"}
Jan 30 14:29:57 crc kubenswrapper[5039]: I0130 14:29:57.718074 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-t4xkk"
Jan 30 14:29:57 crc kubenswrapper[5039]: I0130 14:29:57.718121 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-t4xkk"
Jan 30 14:29:57 crc kubenswrapper[5039]: I0130 14:29:57.771473 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-t4xkk"
Jan 30 14:29:57 crc kubenswrapper[5039]: I0130 14:29:57.828223 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-t4xkk"
Jan 30 14:29:58 crc kubenswrapper[5039]: I0130 14:29:58.014859 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t4xkk"]
Jan 30 14:29:58 crc kubenswrapper[5039]: I0130 14:29:58.111211 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 30 14:29:58 crc kubenswrapper[5039]: I0130 14:29:58.179619 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_96c9c051-4511-4171-95f7-4819156ba132/mariadb-client/0.log"
Jan 30 14:29:58 crc kubenswrapper[5039]: I0130 14:29:58.198142 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-746zg\" (UniqueName: \"kubernetes.io/projected/96c9c051-4511-4171-95f7-4819156ba132-kube-api-access-746zg\") pod \"96c9c051-4511-4171-95f7-4819156ba132\" (UID: \"96c9c051-4511-4171-95f7-4819156ba132\") "
Jan 30 14:29:58 crc kubenswrapper[5039]: I0130 14:29:58.203689 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"]
Jan 30 14:29:58 crc kubenswrapper[5039]: I0130 14:29:58.204471 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96c9c051-4511-4171-95f7-4819156ba132-kube-api-access-746zg" (OuterVolumeSpecName: "kube-api-access-746zg") pod "96c9c051-4511-4171-95f7-4819156ba132" (UID: "96c9c051-4511-4171-95f7-4819156ba132"). InnerVolumeSpecName "kube-api-access-746zg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 14:29:58 crc kubenswrapper[5039]: I0130 14:29:58.216262 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"]
Jan 30 14:29:58 crc kubenswrapper[5039]: I0130 14:29:58.299368 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-746zg\" (UniqueName: \"kubernetes.io/projected/96c9c051-4511-4171-95f7-4819156ba132-kube-api-access-746zg\") on node \"crc\" DevicePath \"\""
Jan 30 14:29:58 crc kubenswrapper[5039]: I0130 14:29:58.339390 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"]
Jan 30 14:29:58 crc kubenswrapper[5039]: E0130 14:29:58.340127 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96c9c051-4511-4171-95f7-4819156ba132" containerName="mariadb-client"
Jan 30 14:29:58 crc kubenswrapper[5039]: I0130 14:29:58.340150 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="96c9c051-4511-4171-95f7-4819156ba132" containerName="mariadb-client"
Jan 30 14:29:58 crc kubenswrapper[5039]: I0130 14:29:58.340381 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="96c9c051-4511-4171-95f7-4819156ba132" containerName="mariadb-client"
Jan 30 14:29:58 crc kubenswrapper[5039]: I0130 14:29:58.340887 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 30 14:29:58 crc kubenswrapper[5039]: I0130 14:29:58.348150 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"]
Jan 30 14:29:58 crc kubenswrapper[5039]: I0130 14:29:58.401148 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js9n2\" (UniqueName: \"kubernetes.io/projected/351bef5d-c22e-41e0-9dbc-db3b5c973b93-kube-api-access-js9n2\") pod \"mariadb-client\" (UID: \"351bef5d-c22e-41e0-9dbc-db3b5c973b93\") " pod="openstack/mariadb-client"
Jan 30 14:29:58 crc kubenswrapper[5039]: I0130 14:29:58.503160 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-js9n2\" (UniqueName: \"kubernetes.io/projected/351bef5d-c22e-41e0-9dbc-db3b5c973b93-kube-api-access-js9n2\") pod \"mariadb-client\" (UID: \"351bef5d-c22e-41e0-9dbc-db3b5c973b93\") " pod="openstack/mariadb-client"
Jan 30 14:29:58 crc kubenswrapper[5039]: I0130 14:29:58.519218 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-js9n2\" (UniqueName: \"kubernetes.io/projected/351bef5d-c22e-41e0-9dbc-db3b5c973b93-kube-api-access-js9n2\") pod \"mariadb-client\" (UID: \"351bef5d-c22e-41e0-9dbc-db3b5c973b93\") " pod="openstack/mariadb-client"
Jan 30 14:29:58 crc kubenswrapper[5039]: I0130 14:29:58.659043 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 30 14:29:58 crc kubenswrapper[5039]: I0130 14:29:58.787034 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c348fdaa9092fd138d89b19f9fea4c87ce92796d99279f8e73ebe6ca5e68b61"
Jan 30 14:29:58 crc kubenswrapper[5039]: I0130 14:29:58.787096 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 30 14:29:58 crc kubenswrapper[5039]: I0130 14:29:58.804415 5039 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/mariadb-client" oldPodUID="96c9c051-4511-4171-95f7-4819156ba132" podUID="351bef5d-c22e-41e0-9dbc-db3b5c973b93"
Jan 30 14:29:59 crc kubenswrapper[5039]: I0130 14:29:59.050231 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"]
Jan 30 14:29:59 crc kubenswrapper[5039]: W0130 14:29:59.056899 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod351bef5d_c22e_41e0_9dbc_db3b5c973b93.slice/crio-0bbaada510088078b9fbb11c5948f3e602693fdd32b29cc35ffbb5c2b5feda5c WatchSource:0}: Error finding container 0bbaada510088078b9fbb11c5948f3e602693fdd32b29cc35ffbb5c2b5feda5c: Status 404 returned error can't find the container with id 0bbaada510088078b9fbb11c5948f3e602693fdd32b29cc35ffbb5c2b5feda5c
Jan 30 14:29:59 crc kubenswrapper[5039]: I0130 14:29:59.794715 5039 generic.go:334] "Generic (PLEG): container finished" podID="351bef5d-c22e-41e0-9dbc-db3b5c973b93" containerID="6d139bd332131964580b1e3138992feb7c0966267055d10912d55a2d1fb39762" exitCode=0
Jan 30 14:29:59 crc kubenswrapper[5039]: I0130 14:29:59.794794 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"351bef5d-c22e-41e0-9dbc-db3b5c973b93","Type":"ContainerDied","Data":"6d139bd332131964580b1e3138992feb7c0966267055d10912d55a2d1fb39762"}
Jan 30 14:29:59 crc kubenswrapper[5039]: I0130 14:29:59.794820 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"351bef5d-c22e-41e0-9dbc-db3b5c973b93","Type":"ContainerStarted","Data":"0bbaada510088078b9fbb11c5948f3e602693fdd32b29cc35ffbb5c2b5feda5c"}
Jan 30 14:29:59 crc kubenswrapper[5039]: I0130 14:29:59.795048 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-t4xkk" podUID="75f9a551-24a2-4d35-8b7c-9774386d11d7" containerName="registry-server" containerID="cri-o://53fc7da3bde9c9b993917d8e924400893c1a7662022df210631e51678a06cf25" gracePeriod=2
Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.102289 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96c9c051-4511-4171-95f7-4819156ba132" path="/var/lib/kubelet/pods/96c9c051-4511-4171-95f7-4819156ba132/volumes"
Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.157826 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496390-skzh8"]
Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.159164 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496390-skzh8" Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.162493 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.162773 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.165172 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496390-skzh8"] Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.244445 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t4xkk" Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.358792 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52hr5\" (UniqueName: \"kubernetes.io/projected/75f9a551-24a2-4d35-8b7c-9774386d11d7-kube-api-access-52hr5\") pod \"75f9a551-24a2-4d35-8b7c-9774386d11d7\" (UID: \"75f9a551-24a2-4d35-8b7c-9774386d11d7\") " Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.358883 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75f9a551-24a2-4d35-8b7c-9774386d11d7-catalog-content\") pod \"75f9a551-24a2-4d35-8b7c-9774386d11d7\" (UID: \"75f9a551-24a2-4d35-8b7c-9774386d11d7\") " Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.358910 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75f9a551-24a2-4d35-8b7c-9774386d11d7-utilities\") pod \"75f9a551-24a2-4d35-8b7c-9774386d11d7\" (UID: \"75f9a551-24a2-4d35-8b7c-9774386d11d7\") " Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.359132 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dqhc\" (UniqueName: \"kubernetes.io/projected/855d9157-ea0d-4203-bca2-8efd747adf94-kube-api-access-4dqhc\") pod \"collect-profiles-29496390-skzh8\" (UID: \"855d9157-ea0d-4203-bca2-8efd747adf94\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496390-skzh8" Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.359186 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/855d9157-ea0d-4203-bca2-8efd747adf94-config-volume\") pod \"collect-profiles-29496390-skzh8\" (UID: \"855d9157-ea0d-4203-bca2-8efd747adf94\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496390-skzh8" Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.359227 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/855d9157-ea0d-4203-bca2-8efd747adf94-secret-volume\") pod \"collect-profiles-29496390-skzh8\" (UID: \"855d9157-ea0d-4203-bca2-8efd747adf94\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496390-skzh8" Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.360605 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75f9a551-24a2-4d35-8b7c-9774386d11d7-utilities" (OuterVolumeSpecName: "utilities") pod 
"75f9a551-24a2-4d35-8b7c-9774386d11d7" (UID: "75f9a551-24a2-4d35-8b7c-9774386d11d7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.368330 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75f9a551-24a2-4d35-8b7c-9774386d11d7-kube-api-access-52hr5" (OuterVolumeSpecName: "kube-api-access-52hr5") pod "75f9a551-24a2-4d35-8b7c-9774386d11d7" (UID: "75f9a551-24a2-4d35-8b7c-9774386d11d7"). InnerVolumeSpecName "kube-api-access-52hr5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.390795 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75f9a551-24a2-4d35-8b7c-9774386d11d7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "75f9a551-24a2-4d35-8b7c-9774386d11d7" (UID: "75f9a551-24a2-4d35-8b7c-9774386d11d7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.460853 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dqhc\" (UniqueName: \"kubernetes.io/projected/855d9157-ea0d-4203-bca2-8efd747adf94-kube-api-access-4dqhc\") pod \"collect-profiles-29496390-skzh8\" (UID: \"855d9157-ea0d-4203-bca2-8efd747adf94\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496390-skzh8" Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.460950 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/855d9157-ea0d-4203-bca2-8efd747adf94-config-volume\") pod \"collect-profiles-29496390-skzh8\" (UID: \"855d9157-ea0d-4203-bca2-8efd747adf94\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496390-skzh8" Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.460993 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/855d9157-ea0d-4203-bca2-8efd747adf94-secret-volume\") pod \"collect-profiles-29496390-skzh8\" (UID: \"855d9157-ea0d-4203-bca2-8efd747adf94\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496390-skzh8" Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.461073 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52hr5\" (UniqueName: \"kubernetes.io/projected/75f9a551-24a2-4d35-8b7c-9774386d11d7-kube-api-access-52hr5\") on node \"crc\" DevicePath \"\"" Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.461091 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75f9a551-24a2-4d35-8b7c-9774386d11d7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.461105 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75f9a551-24a2-4d35-8b7c-9774386d11d7-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.461971 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/855d9157-ea0d-4203-bca2-8efd747adf94-config-volume\") pod \"collect-profiles-29496390-skzh8\" (UID: \"855d9157-ea0d-4203-bca2-8efd747adf94\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29496390-skzh8" Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.464893 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/855d9157-ea0d-4203-bca2-8efd747adf94-secret-volume\") pod \"collect-profiles-29496390-skzh8\" (UID: \"855d9157-ea0d-4203-bca2-8efd747adf94\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496390-skzh8" Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.477950 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dqhc\" (UniqueName: \"kubernetes.io/projected/855d9157-ea0d-4203-bca2-8efd747adf94-kube-api-access-4dqhc\") pod \"collect-profiles-29496390-skzh8\" (UID: \"855d9157-ea0d-4203-bca2-8efd747adf94\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496390-skzh8" Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.537642 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496390-skzh8" Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.804405 5039 generic.go:334] "Generic (PLEG): container finished" podID="75f9a551-24a2-4d35-8b7c-9774386d11d7" containerID="53fc7da3bde9c9b993917d8e924400893c1a7662022df210631e51678a06cf25" exitCode=0 Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.804457 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t4xkk" Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.804478 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t4xkk" event={"ID":"75f9a551-24a2-4d35-8b7c-9774386d11d7","Type":"ContainerDied","Data":"53fc7da3bde9c9b993917d8e924400893c1a7662022df210631e51678a06cf25"} Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.806211 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t4xkk" event={"ID":"75f9a551-24a2-4d35-8b7c-9774386d11d7","Type":"ContainerDied","Data":"88bcde481d648d5c5e2199857e3a9e11082446b8b45872408a3697133f32701b"} Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.806239 5039 scope.go:117] "RemoveContainer" containerID="53fc7da3bde9c9b993917d8e924400893c1a7662022df210631e51678a06cf25" Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.840915 5039 scope.go:117] "RemoveContainer" containerID="d2e11bae6bacf511410b9c6ac793519049f41045ab1c73fea1097d864489811f" Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.846573 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t4xkk"] Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.853542 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-t4xkk"] Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.856666 5039 scope.go:117] "RemoveContainer" containerID="466334141f0bec6081d3868f5183fd3a796ee115c53f87bd9ce68baf7b1cac6f" Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.873815 5039 scope.go:117] "RemoveContainer" containerID="53fc7da3bde9c9b993917d8e924400893c1a7662022df210631e51678a06cf25" Jan 30 14:30:00 crc kubenswrapper[5039]: E0130 14:30:00.874383 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53fc7da3bde9c9b993917d8e924400893c1a7662022df210631e51678a06cf25\": container with 
ID starting with 53fc7da3bde9c9b993917d8e924400893c1a7662022df210631e51678a06cf25 not found: ID does not exist" containerID="53fc7da3bde9c9b993917d8e924400893c1a7662022df210631e51678a06cf25" Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.874427 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53fc7da3bde9c9b993917d8e924400893c1a7662022df210631e51678a06cf25"} err="failed to get container status \"53fc7da3bde9c9b993917d8e924400893c1a7662022df210631e51678a06cf25\": rpc error: code = NotFound desc = could not find container \"53fc7da3bde9c9b993917d8e924400893c1a7662022df210631e51678a06cf25\": container with ID starting with 53fc7da3bde9c9b993917d8e924400893c1a7662022df210631e51678a06cf25 not found: ID does not exist" Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.874454 5039 scope.go:117] "RemoveContainer" containerID="d2e11bae6bacf511410b9c6ac793519049f41045ab1c73fea1097d864489811f" Jan 30 14:30:00 crc kubenswrapper[5039]: E0130 14:30:00.874848 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2e11bae6bacf511410b9c6ac793519049f41045ab1c73fea1097d864489811f\": container with ID starting with d2e11bae6bacf511410b9c6ac793519049f41045ab1c73fea1097d864489811f not found: ID does not exist" containerID="d2e11bae6bacf511410b9c6ac793519049f41045ab1c73fea1097d864489811f" Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.874884 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2e11bae6bacf511410b9c6ac793519049f41045ab1c73fea1097d864489811f"} err="failed to get container status \"d2e11bae6bacf511410b9c6ac793519049f41045ab1c73fea1097d864489811f\": rpc error: code = NotFound desc = could not find container \"d2e11bae6bacf511410b9c6ac793519049f41045ab1c73fea1097d864489811f\": container with ID starting with d2e11bae6bacf511410b9c6ac793519049f41045ab1c73fea1097d864489811f not found: ID does not exist" Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.874911 5039 scope.go:117] "RemoveContainer" containerID="466334141f0bec6081d3868f5183fd3a796ee115c53f87bd9ce68baf7b1cac6f" Jan 30 14:30:00 crc kubenswrapper[5039]: E0130 14:30:00.875305 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"466334141f0bec6081d3868f5183fd3a796ee115c53f87bd9ce68baf7b1cac6f\": container with ID starting with 466334141f0bec6081d3868f5183fd3a796ee115c53f87bd9ce68baf7b1cac6f not found: ID does not exist" containerID="466334141f0bec6081d3868f5183fd3a796ee115c53f87bd9ce68baf7b1cac6f" Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.875338 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"466334141f0bec6081d3868f5183fd3a796ee115c53f87bd9ce68baf7b1cac6f"} err="failed to get container status \"466334141f0bec6081d3868f5183fd3a796ee115c53f87bd9ce68baf7b1cac6f\": rpc error: code = NotFound desc = could not find container \"466334141f0bec6081d3868f5183fd3a796ee115c53f87bd9ce68baf7b1cac6f\": container with ID starting with 466334141f0bec6081d3868f5183fd3a796ee115c53f87bd9ce68baf7b1cac6f not found: ID does not exist" Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.948938 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496390-skzh8"] Jan 30 14:30:00 crc kubenswrapper[5039]: W0130 14:30:00.971772 5039 manager.go:1169] Failed to process 
watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod855d9157_ea0d_4203_bca2_8efd747adf94.slice/crio-be8bcf9809f8d85a6d0bc324ccefd965ed5e8e5c5bf23d9738f37a2894d31a97 WatchSource:0}: Error finding container be8bcf9809f8d85a6d0bc324ccefd965ed5e8e5c5bf23d9738f37a2894d31a97: Status 404 returned error can't find the container with id be8bcf9809f8d85a6d0bc324ccefd965ed5e8e5c5bf23d9738f37a2894d31a97 Jan 30 14:30:01 crc kubenswrapper[5039]: I0130 14:30:01.079615 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 30 14:30:01 crc kubenswrapper[5039]: I0130 14:30:01.098609 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_351bef5d-c22e-41e0-9dbc-db3b5c973b93/mariadb-client/0.log" Jan 30 14:30:01 crc kubenswrapper[5039]: I0130 14:30:01.127158 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Jan 30 14:30:01 crc kubenswrapper[5039]: I0130 14:30:01.133707 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Jan 30 14:30:01 crc kubenswrapper[5039]: I0130 14:30:01.275296 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-js9n2\" (UniqueName: \"kubernetes.io/projected/351bef5d-c22e-41e0-9dbc-db3b5c973b93-kube-api-access-js9n2\") pod \"351bef5d-c22e-41e0-9dbc-db3b5c973b93\" (UID: \"351bef5d-c22e-41e0-9dbc-db3b5c973b93\") " Jan 30 14:30:01 crc kubenswrapper[5039]: I0130 14:30:01.294953 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/351bef5d-c22e-41e0-9dbc-db3b5c973b93-kube-api-access-js9n2" (OuterVolumeSpecName: "kube-api-access-js9n2") pod "351bef5d-c22e-41e0-9dbc-db3b5c973b93" (UID: "351bef5d-c22e-41e0-9dbc-db3b5c973b93"). InnerVolumeSpecName "kube-api-access-js9n2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:30:01 crc kubenswrapper[5039]: I0130 14:30:01.377731 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-js9n2\" (UniqueName: \"kubernetes.io/projected/351bef5d-c22e-41e0-9dbc-db3b5c973b93-kube-api-access-js9n2\") on node \"crc\" DevicePath \"\"" Jan 30 14:30:01 crc kubenswrapper[5039]: I0130 14:30:01.812745 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 30 14:30:01 crc kubenswrapper[5039]: I0130 14:30:01.812781 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0bbaada510088078b9fbb11c5948f3e602693fdd32b29cc35ffbb5c2b5feda5c" Jan 30 14:30:01 crc kubenswrapper[5039]: I0130 14:30:01.815900 5039 generic.go:334] "Generic (PLEG): container finished" podID="855d9157-ea0d-4203-bca2-8efd747adf94" containerID="c33683d2b09111e121d14380b429fcb96af7b42a24484bea8cad41a662201ae7" exitCode=0 Jan 30 14:30:01 crc kubenswrapper[5039]: I0130 14:30:01.815970 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496390-skzh8" event={"ID":"855d9157-ea0d-4203-bca2-8efd747adf94","Type":"ContainerDied","Data":"c33683d2b09111e121d14380b429fcb96af7b42a24484bea8cad41a662201ae7"} Jan 30 14:30:01 crc kubenswrapper[5039]: I0130 14:30:01.815996 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496390-skzh8" event={"ID":"855d9157-ea0d-4203-bca2-8efd747adf94","Type":"ContainerStarted","Data":"be8bcf9809f8d85a6d0bc324ccefd965ed5e8e5c5bf23d9738f37a2894d31a97"} Jan 30 14:30:02 crc kubenswrapper[5039]: I0130 14:30:02.125623 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="351bef5d-c22e-41e0-9dbc-db3b5c973b93" path="/var/lib/kubelet/pods/351bef5d-c22e-41e0-9dbc-db3b5c973b93/volumes" Jan 30 14:30:02 crc kubenswrapper[5039]: I0130 14:30:02.127370 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75f9a551-24a2-4d35-8b7c-9774386d11d7" path="/var/lib/kubelet/pods/75f9a551-24a2-4d35-8b7c-9774386d11d7/volumes" Jan 30 14:30:03 crc kubenswrapper[5039]: I0130 14:30:03.129835 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496390-skzh8" Jan 30 14:30:03 crc kubenswrapper[5039]: I0130 14:30:03.217708 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/855d9157-ea0d-4203-bca2-8efd747adf94-config-volume\") pod \"855d9157-ea0d-4203-bca2-8efd747adf94\" (UID: \"855d9157-ea0d-4203-bca2-8efd747adf94\") " Jan 30 14:30:03 crc kubenswrapper[5039]: I0130 14:30:03.217774 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/855d9157-ea0d-4203-bca2-8efd747adf94-secret-volume\") pod \"855d9157-ea0d-4203-bca2-8efd747adf94\" (UID: \"855d9157-ea0d-4203-bca2-8efd747adf94\") " Jan 30 14:30:03 crc kubenswrapper[5039]: I0130 14:30:03.217843 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dqhc\" (UniqueName: \"kubernetes.io/projected/855d9157-ea0d-4203-bca2-8efd747adf94-kube-api-access-4dqhc\") pod \"855d9157-ea0d-4203-bca2-8efd747adf94\" (UID: \"855d9157-ea0d-4203-bca2-8efd747adf94\") " Jan 30 14:30:03 crc kubenswrapper[5039]: I0130 14:30:03.218896 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/855d9157-ea0d-4203-bca2-8efd747adf94-config-volume" (OuterVolumeSpecName: "config-volume") pod "855d9157-ea0d-4203-bca2-8efd747adf94" (UID: "855d9157-ea0d-4203-bca2-8efd747adf94"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:30:03 crc kubenswrapper[5039]: I0130 14:30:03.223458 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/855d9157-ea0d-4203-bca2-8efd747adf94-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "855d9157-ea0d-4203-bca2-8efd747adf94" (UID: "855d9157-ea0d-4203-bca2-8efd747adf94"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:30:03 crc kubenswrapper[5039]: I0130 14:30:03.223676 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/855d9157-ea0d-4203-bca2-8efd747adf94-kube-api-access-4dqhc" (OuterVolumeSpecName: "kube-api-access-4dqhc") pod "855d9157-ea0d-4203-bca2-8efd747adf94" (UID: "855d9157-ea0d-4203-bca2-8efd747adf94"). InnerVolumeSpecName "kube-api-access-4dqhc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:30:03 crc kubenswrapper[5039]: I0130 14:30:03.320025 5039 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/855d9157-ea0d-4203-bca2-8efd747adf94-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 14:30:03 crc kubenswrapper[5039]: I0130 14:30:03.320072 5039 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/855d9157-ea0d-4203-bca2-8efd747adf94-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 14:30:03 crc kubenswrapper[5039]: I0130 14:30:03.320087 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dqhc\" (UniqueName: \"kubernetes.io/projected/855d9157-ea0d-4203-bca2-8efd747adf94-kube-api-access-4dqhc\") on node \"crc\" DevicePath \"\"" Jan 30 14:30:03 crc kubenswrapper[5039]: I0130 14:30:03.832274 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496390-skzh8" event={"ID":"855d9157-ea0d-4203-bca2-8efd747adf94","Type":"ContainerDied","Data":"be8bcf9809f8d85a6d0bc324ccefd965ed5e8e5c5bf23d9738f37a2894d31a97"} Jan 30 14:30:03 crc kubenswrapper[5039]: I0130 14:30:03.832314 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be8bcf9809f8d85a6d0bc324ccefd965ed5e8e5c5bf23d9738f37a2894d31a97" Jan 30 14:30:03 crc kubenswrapper[5039]: I0130 14:30:03.832335 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496390-skzh8" Jan 30 14:30:04 crc kubenswrapper[5039]: I0130 14:30:04.203342 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496345-8ww5h"] Jan 30 14:30:04 crc kubenswrapper[5039]: I0130 14:30:04.210558 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496345-8ww5h"] Jan 30 14:30:06 crc kubenswrapper[5039]: I0130 14:30:06.111065 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e85d509-7158-47c2-a64b-25b0d8964124" path="/var/lib/kubelet/pods/7e85d509-7158-47c2-a64b-25b0d8964124/volumes" Jan 30 14:30:09 crc kubenswrapper[5039]: I0130 14:30:09.163367 5039 scope.go:117] "RemoveContainer" containerID="947122b71d39afefed0205512e71b75628a98b480c939ec29485b07a4bf7e0c9" Jan 30 14:30:09 crc kubenswrapper[5039]: I0130 14:30:09.188825 5039 scope.go:117] "RemoveContainer" containerID="6996c9c1e0cbcbe6b3870693e70dfa42b245000924f7e0c9e4a6804acd8a7e7f" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.075379 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 14:30:34 crc kubenswrapper[5039]: E0130 14:30:34.076142 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75f9a551-24a2-4d35-8b7c-9774386d11d7" containerName="extract-utilities" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.076155 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="75f9a551-24a2-4d35-8b7c-9774386d11d7" containerName="extract-utilities" Jan 30 14:30:34 crc kubenswrapper[5039]: E0130 14:30:34.076176 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="855d9157-ea0d-4203-bca2-8efd747adf94" containerName="collect-profiles" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.076182 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="855d9157-ea0d-4203-bca2-8efd747adf94" containerName="collect-profiles" Jan 30 14:30:34 crc kubenswrapper[5039]: E0130 14:30:34.076196 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75f9a551-24a2-4d35-8b7c-9774386d11d7" containerName="registry-server" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.076202 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="75f9a551-24a2-4d35-8b7c-9774386d11d7" containerName="registry-server" Jan 30 14:30:34 crc kubenswrapper[5039]: E0130 14:30:34.076208 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75f9a551-24a2-4d35-8b7c-9774386d11d7" containerName="extract-content" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.076214 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="75f9a551-24a2-4d35-8b7c-9774386d11d7" containerName="extract-content" Jan 30 14:30:34 crc kubenswrapper[5039]: E0130 14:30:34.076220 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="351bef5d-c22e-41e0-9dbc-db3b5c973b93" containerName="mariadb-client" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.076225 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="351bef5d-c22e-41e0-9dbc-db3b5c973b93" containerName="mariadb-client" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.076354 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="75f9a551-24a2-4d35-8b7c-9774386d11d7" containerName="registry-server" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.076369 5039 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="855d9157-ea0d-4203-bca2-8efd747adf94" containerName="collect-profiles" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.076378 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="351bef5d-c22e-41e0-9dbc-db3b5c973b93" containerName="mariadb-client" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.077174 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.085491 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-nrr6s" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.085629 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.086570 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.092420 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.125994 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-1"] Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.127269 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.135304 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-2"] Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.137035 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.145718 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.177034 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.191240 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8b5493e8-291c-4677-902a-89649a59dc48-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"8b5493e8-291c-4677-902a-89649a59dc48\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.191559 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b5493e8-291c-4677-902a-89649a59dc48-config\") pod \"ovsdbserver-nb-0\" (UID: \"8b5493e8-291c-4677-902a-89649a59dc48\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.191749 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cll25\" (UniqueName: \"kubernetes.io/projected/8b5493e8-291c-4677-902a-89649a59dc48-kube-api-access-cll25\") pod \"ovsdbserver-nb-0\" (UID: \"8b5493e8-291c-4677-902a-89649a59dc48\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.191863 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8b5493e8-291c-4677-902a-89649a59dc48-scripts\") pod \"ovsdbserver-nb-0\" (UID: 
\"8b5493e8-291c-4677-902a-89649a59dc48\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.192033 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-36d7f4da-7718-4928-9b81-a37cae676310\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36d7f4da-7718-4928-9b81-a37cae676310\") pod \"ovsdbserver-nb-0\" (UID: \"8b5493e8-291c-4677-902a-89649a59dc48\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.192165 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b5493e8-291c-4677-902a-89649a59dc48-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"8b5493e8-291c-4677-902a-89649a59dc48\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.268064 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.273584 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.276813 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.277291 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.277394 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-8h52n" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.282981 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.293449 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cll25\" (UniqueName: \"kubernetes.io/projected/8b5493e8-291c-4677-902a-89649a59dc48-kube-api-access-cll25\") pod \"ovsdbserver-nb-0\" (UID: \"8b5493e8-291c-4677-902a-89649a59dc48\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.295692 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5db342ca-88a0-41e4-9cb8-407be8357dd0-config\") pod \"ovsdbserver-nb-1\" (UID: \"5db342ca-88a0-41e4-9cb8-407be8357dd0\") " pod="openstack/ovsdbserver-nb-1" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.295822 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8b5493e8-291c-4677-902a-89649a59dc48-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"8b5493e8-291c-4677-902a-89649a59dc48\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.295923 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-62a24cd7-1475-4763-b0b5-acabd1aa220b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-62a24cd7-1475-4763-b0b5-acabd1aa220b\") pod \"ovsdbserver-nb-2\" (UID: \"1fc46623-afd6-4b9d-bf3d-79700d1ee972\") " pod="openstack/ovsdbserver-nb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.296069 5039 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x475g\" (UniqueName: \"kubernetes.io/projected/1fc46623-afd6-4b9d-bf3d-79700d1ee972-kube-api-access-x475g\") pod \"ovsdbserver-nb-2\" (UID: \"1fc46623-afd6-4b9d-bf3d-79700d1ee972\") " pod="openstack/ovsdbserver-nb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.296189 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5db342ca-88a0-41e4-9cb8-407be8357dd0-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"5db342ca-88a0-41e4-9cb8-407be8357dd0\") " pod="openstack/ovsdbserver-nb-1" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.296288 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1fc46623-afd6-4b9d-bf3d-79700d1ee972-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"1fc46623-afd6-4b9d-bf3d-79700d1ee972\") " pod="openstack/ovsdbserver-nb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.296380 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvdgd\" (UniqueName: \"kubernetes.io/projected/5db342ca-88a0-41e4-9cb8-407be8357dd0-kube-api-access-dvdgd\") pod \"ovsdbserver-nb-1\" (UID: \"5db342ca-88a0-41e4-9cb8-407be8357dd0\") " pod="openstack/ovsdbserver-nb-1" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.296486 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1fc46623-afd6-4b9d-bf3d-79700d1ee972-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"1fc46623-afd6-4b9d-bf3d-79700d1ee972\") " pod="openstack/ovsdbserver-nb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.297073 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fc46623-afd6-4b9d-bf3d-79700d1ee972-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"1fc46623-afd6-4b9d-bf3d-79700d1ee972\") " pod="openstack/ovsdbserver-nb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.297285 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-36d7f4da-7718-4928-9b81-a37cae676310\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36d7f4da-7718-4928-9b81-a37cae676310\") pod \"ovsdbserver-nb-0\" (UID: \"8b5493e8-291c-4677-902a-89649a59dc48\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.297423 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b5493e8-291c-4677-902a-89649a59dc48-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"8b5493e8-291c-4677-902a-89649a59dc48\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.297533 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8b5493e8-291c-4677-902a-89649a59dc48-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"8b5493e8-291c-4677-902a-89649a59dc48\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.297636 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8b5493e8-291c-4677-902a-89649a59dc48-config\") pod \"ovsdbserver-nb-0\" (UID: \"8b5493e8-291c-4677-902a-89649a59dc48\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.297807 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5db342ca-88a0-41e4-9cb8-407be8357dd0-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"5db342ca-88a0-41e4-9cb8-407be8357dd0\") " pod="openstack/ovsdbserver-nb-1" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.297912 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a47884be-d900-416e-8a83-a65ed2014c5c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a47884be-d900-416e-8a83-a65ed2014c5c\") pod \"ovsdbserver-nb-1\" (UID: \"5db342ca-88a0-41e4-9cb8-407be8357dd0\") " pod="openstack/ovsdbserver-nb-1" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.298007 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5db342ca-88a0-41e4-9cb8-407be8357dd0-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"5db342ca-88a0-41e4-9cb8-407be8357dd0\") " pod="openstack/ovsdbserver-nb-1" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.298154 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fc46623-afd6-4b9d-bf3d-79700d1ee972-config\") pod \"ovsdbserver-nb-2\" (UID: \"1fc46623-afd6-4b9d-bf3d-79700d1ee972\") " pod="openstack/ovsdbserver-nb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.300338 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8b5493e8-291c-4677-902a-89649a59dc48-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"8b5493e8-291c-4677-902a-89649a59dc48\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.298051 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8b5493e8-291c-4677-902a-89649a59dc48-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"8b5493e8-291c-4677-902a-89649a59dc48\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.301048 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b5493e8-291c-4677-902a-89649a59dc48-config\") pod \"ovsdbserver-nb-0\" (UID: \"8b5493e8-291c-4677-902a-89649a59dc48\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.301636 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-2"] Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.303172 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.321708 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-1"] Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.323765 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-1" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.325423 5039 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.325466 5039 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-36d7f4da-7718-4928-9b81-a37cae676310\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36d7f4da-7718-4928-9b81-a37cae676310\") pod \"ovsdbserver-nb-0\" (UID: \"8b5493e8-291c-4677-902a-89649a59dc48\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/208d4633397fe3e66455ad9f8f1eeb40ff368802db72783ff51e6b651069e8a6/globalmount\"" pod="openstack/ovsdbserver-nb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.327083 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b5493e8-291c-4677-902a-89649a59dc48-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"8b5493e8-291c-4677-902a-89649a59dc48\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.330452 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.330722 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cll25\" (UniqueName: \"kubernetes.io/projected/8b5493e8-291c-4677-902a-89649a59dc48-kube-api-access-cll25\") pod \"ovsdbserver-nb-0\" (UID: \"8b5493e8-291c-4677-902a-89649a59dc48\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.350425 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.359792 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-36d7f4da-7718-4928-9b81-a37cae676310\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36d7f4da-7718-4928-9b81-a37cae676310\") pod \"ovsdbserver-nb-0\" (UID: \"8b5493e8-291c-4677-902a-89649a59dc48\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.399328 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d163aa91-5efd-4b7a-94eb-c9b4f26fba7b-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"d163aa91-5efd-4b7a-94eb-c9b4f26fba7b\") " pod="openstack/ovsdbserver-sb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.399437 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d163aa91-5efd-4b7a-94eb-c9b4f26fba7b-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"d163aa91-5efd-4b7a-94eb-c9b4f26fba7b\") " pod="openstack/ovsdbserver-sb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.399562 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7065704-60d1-44b1-a6a6-f23a25d20a3f-config\") pod \"ovsdbserver-sb-0\" (UID: \"e7065704-60d1-44b1-a6a6-f23a25d20a3f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.399650 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-62a24cd7-1475-4763-b0b5-acabd1aa220b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-62a24cd7-1475-4763-b0b5-acabd1aa220b\") pod \"ovsdbserver-nb-2\" (UID: \"1fc46623-afd6-4b9d-bf3d-79700d1ee972\") " pod="openstack/ovsdbserver-nb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.399700 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x475g\" (UniqueName: \"kubernetes.io/projected/1fc46623-afd6-4b9d-bf3d-79700d1ee972-kube-api-access-x475g\") pod \"ovsdbserver-nb-2\" (UID: \"1fc46623-afd6-4b9d-bf3d-79700d1ee972\") " pod="openstack/ovsdbserver-nb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.399742 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e7065704-60d1-44b1-a6a6-f23a25d20a3f-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"e7065704-60d1-44b1-a6a6-f23a25d20a3f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.399783 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5db342ca-88a0-41e4-9cb8-407be8357dd0-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"5db342ca-88a0-41e4-9cb8-407be8357dd0\") " pod="openstack/ovsdbserver-nb-1" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.399807 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1fc46623-afd6-4b9d-bf3d-79700d1ee972-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"1fc46623-afd6-4b9d-bf3d-79700d1ee972\") " pod="openstack/ovsdbserver-nb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.399828 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvdgd\" (UniqueName: \"kubernetes.io/projected/5db342ca-88a0-41e4-9cb8-407be8357dd0-kube-api-access-dvdgd\") pod \"ovsdbserver-nb-1\" (UID: \"5db342ca-88a0-41e4-9cb8-407be8357dd0\") " pod="openstack/ovsdbserver-nb-1" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.401365 5039 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.401393 5039 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-62a24cd7-1475-4763-b0b5-acabd1aa220b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-62a24cd7-1475-4763-b0b5-acabd1aa220b\") pod \"ovsdbserver-nb-2\" (UID: \"1fc46623-afd6-4b9d-bf3d-79700d1ee972\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1c3d143cdafd53b931a058e7ff13993f18d66b53af30d95a9640532afac14081/globalmount\"" pod="openstack/ovsdbserver-nb-2"
Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.401729 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1fc46623-afd6-4b9d-bf3d-79700d1ee972-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"1fc46623-afd6-4b9d-bf3d-79700d1ee972\") " pod="openstack/ovsdbserver-nb-2"
Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.401744 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5db342ca-88a0-41e4-9cb8-407be8357dd0-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"5db342ca-88a0-41e4-9cb8-407be8357dd0\") " pod="openstack/ovsdbserver-nb-1"
Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.401798 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/286a05d9-3f8e-4942-ad66-0a674aa88114-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"286a05d9-3f8e-4942-ad66-0a674aa88114\") " pod="openstack/ovsdbserver-sb-1"
Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.401828 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1fc46623-afd6-4b9d-bf3d-79700d1ee972-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"1fc46623-afd6-4b9d-bf3d-79700d1ee972\") " pod="openstack/ovsdbserver-nb-2"
Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.401879 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fc46623-afd6-4b9d-bf3d-79700d1ee972-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"1fc46623-afd6-4b9d-bf3d-79700d1ee972\") " pod="openstack/ovsdbserver-nb-2"
Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.401933 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8lz4\" (UniqueName: \"kubernetes.io/projected/286a05d9-3f8e-4942-ad66-0a674aa88114-kube-api-access-n8lz4\") pod \"ovsdbserver-sb-1\" (UID: \"286a05d9-3f8e-4942-ad66-0a674aa88114\") " pod="openstack/ovsdbserver-sb-1"
Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.402855 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ea75e230-83ad-45da-bdc6-728a3a2805df\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ea75e230-83ad-45da-bdc6-728a3a2805df\") pod \"ovsdbserver-sb-0\" (UID: \"e7065704-60d1-44b1-a6a6-f23a25d20a3f\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.402892 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e7065704-60d1-44b1-a6a6-f23a25d20a3f-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"e7065704-60d1-44b1-a6a6-f23a25d20a3f\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.402914 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d163aa91-5efd-4b7a-94eb-c9b4f26fba7b-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"d163aa91-5efd-4b7a-94eb-c9b4f26fba7b\") " pod="openstack/ovsdbserver-sb-2"
Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.402941 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1e09c855-6167-47e7-9a02-fe5ce0e6f072\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1e09c855-6167-47e7-9a02-fe5ce0e6f072\") pod \"ovsdbserver-sb-1\" (UID: \"286a05d9-3f8e-4942-ad66-0a674aa88114\") " pod="openstack/ovsdbserver-sb-1"
Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.402956 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/286a05d9-3f8e-4942-ad66-0a674aa88114-config\") pod \"ovsdbserver-sb-1\" (UID: \"286a05d9-3f8e-4942-ad66-0a674aa88114\") " pod="openstack/ovsdbserver-sb-1"
Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.402976 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d163aa91-5efd-4b7a-94eb-c9b4f26fba7b-config\") pod \"ovsdbserver-sb-2\" (UID: \"d163aa91-5efd-4b7a-94eb-c9b4f26fba7b\") " pod="openstack/ovsdbserver-sb-2"
Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.402996 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm7n8\" (UniqueName: \"kubernetes.io/projected/d163aa91-5efd-4b7a-94eb-c9b4f26fba7b-kube-api-access-gm7n8\") pod \"ovsdbserver-sb-2\" (UID: \"d163aa91-5efd-4b7a-94eb-c9b4f26fba7b\") " pod="openstack/ovsdbserver-sb-2"
Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.403106 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl7g2\" (UniqueName: \"kubernetes.io/projected/e7065704-60d1-44b1-a6a6-f23a25d20a3f-kube-api-access-fl7g2\") pod \"ovsdbserver-sb-0\" (UID: \"e7065704-60d1-44b1-a6a6-f23a25d20a3f\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.403157 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/286a05d9-3f8e-4942-ad66-0a674aa88114-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"286a05d9-3f8e-4942-ad66-0a674aa88114\") " pod="openstack/ovsdbserver-sb-1"
Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.403231 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5db342ca-88a0-41e4-9cb8-407be8357dd0-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"5db342ca-88a0-41e4-9cb8-407be8357dd0\") " pod="openstack/ovsdbserver-nb-1"
Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.403273 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1fc46623-afd6-4b9d-bf3d-79700d1ee972-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"1fc46623-afd6-4b9d-bf3d-79700d1ee972\") " pod="openstack/ovsdbserver-nb-2"
Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.403295 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-40b0011b-1d7a-481e-b9e4-3be0d9a8caae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-40b0011b-1d7a-481e-b9e4-3be0d9a8caae\") pod \"ovsdbserver-sb-2\" (UID: \"d163aa91-5efd-4b7a-94eb-c9b4f26fba7b\") " pod="openstack/ovsdbserver-sb-2"
Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.403333 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a47884be-d900-416e-8a83-a65ed2014c5c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a47884be-d900-416e-8a83-a65ed2014c5c\") pod \"ovsdbserver-nb-1\" (UID: \"5db342ca-88a0-41e4-9cb8-407be8357dd0\") " pod="openstack/ovsdbserver-nb-1"
Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.403352 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5db342ca-88a0-41e4-9cb8-407be8357dd0-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"5db342ca-88a0-41e4-9cb8-407be8357dd0\") " pod="openstack/ovsdbserver-nb-1"
Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.403378 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fc46623-afd6-4b9d-bf3d-79700d1ee972-config\") pod \"ovsdbserver-nb-2\" (UID: \"1fc46623-afd6-4b9d-bf3d-79700d1ee972\") " pod="openstack/ovsdbserver-nb-2"
Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.403613 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7065704-60d1-44b1-a6a6-f23a25d20a3f-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"e7065704-60d1-44b1-a6a6-f23a25d20a3f\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.403646 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/286a05d9-3f8e-4942-ad66-0a674aa88114-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"286a05d9-3f8e-4942-ad66-0a674aa88114\") " pod="openstack/ovsdbserver-sb-1"
Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.403668 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5db342ca-88a0-41e4-9cb8-407be8357dd0-config\") pod \"ovsdbserver-nb-1\" (UID: \"5db342ca-88a0-41e4-9cb8-407be8357dd0\") " pod="openstack/ovsdbserver-nb-1"
Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.405064 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fc46623-afd6-4b9d-bf3d-79700d1ee972-config\") pod \"ovsdbserver-nb-2\" (UID: \"1fc46623-afd6-4b9d-bf3d-79700d1ee972\") " pod="openstack/ovsdbserver-nb-2"
Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.405281 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5db342ca-88a0-41e4-9cb8-407be8357dd0-config\") pod \"ovsdbserver-nb-1\" (UID: \"5db342ca-88a0-41e4-9cb8-407be8357dd0\") " pod="openstack/ovsdbserver-nb-1"
Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.406471 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5db342ca-88a0-41e4-9cb8-407be8357dd0-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"5db342ca-88a0-41e4-9cb8-407be8357dd0\") " pod="openstack/ovsdbserver-nb-1"
Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.406825 5039 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.406858 5039 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a47884be-d900-416e-8a83-a65ed2014c5c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a47884be-d900-416e-8a83-a65ed2014c5c\") pod \"ovsdbserver-nb-1\" (UID: \"5db342ca-88a0-41e4-9cb8-407be8357dd0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ea88fa36f65075a7ce1afa055f9c90c2c8965a7c0819bcab2f9231d84fff77b4/globalmount\"" pod="openstack/ovsdbserver-nb-1"
Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.407898 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fc46623-afd6-4b9d-bf3d-79700d1ee972-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"1fc46623-afd6-4b9d-bf3d-79700d1ee972\") " pod="openstack/ovsdbserver-nb-2"
Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.412630 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.413956 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5db342ca-88a0-41e4-9cb8-407be8357dd0-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"5db342ca-88a0-41e4-9cb8-407be8357dd0\") " pod="openstack/ovsdbserver-nb-1"
Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.416775 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x475g\" (UniqueName: \"kubernetes.io/projected/1fc46623-afd6-4b9d-bf3d-79700d1ee972-kube-api-access-x475g\") pod \"ovsdbserver-nb-2\" (UID: \"1fc46623-afd6-4b9d-bf3d-79700d1ee972\") " pod="openstack/ovsdbserver-nb-2"
Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.419170 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvdgd\" (UniqueName: \"kubernetes.io/projected/5db342ca-88a0-41e4-9cb8-407be8357dd0-kube-api-access-dvdgd\") pod \"ovsdbserver-nb-1\" (UID: \"5db342ca-88a0-41e4-9cb8-407be8357dd0\") " pod="openstack/ovsdbserver-nb-1"
Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.431840 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-62a24cd7-1475-4763-b0b5-acabd1aa220b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-62a24cd7-1475-4763-b0b5-acabd1aa220b\") pod \"ovsdbserver-nb-2\" (UID: \"1fc46623-afd6-4b9d-bf3d-79700d1ee972\") " pod="openstack/ovsdbserver-nb-2"
Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.444243 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a47884be-d900-416e-8a83-a65ed2014c5c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a47884be-d900-416e-8a83-a65ed2014c5c\") pod \"ovsdbserver-nb-1\" (UID: \"5db342ca-88a0-41e4-9cb8-407be8357dd0\") " pod="openstack/ovsdbserver-nb-1"
Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.458089 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.505593 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e7065704-60d1-44b1-a6a6-f23a25d20a3f-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"e7065704-60d1-44b1-a6a6-f23a25d20a3f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.505644 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d163aa91-5efd-4b7a-94eb-c9b4f26fba7b-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"d163aa91-5efd-4b7a-94eb-c9b4f26fba7b\") " pod="openstack/ovsdbserver-sb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.505681 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-1e09c855-6167-47e7-9a02-fe5ce0e6f072\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1e09c855-6167-47e7-9a02-fe5ce0e6f072\") pod \"ovsdbserver-sb-1\" (UID: \"286a05d9-3f8e-4942-ad66-0a674aa88114\") " pod="openstack/ovsdbserver-sb-1" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.506880 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/286a05d9-3f8e-4942-ad66-0a674aa88114-config\") pod \"ovsdbserver-sb-1\" (UID: \"286a05d9-3f8e-4942-ad66-0a674aa88114\") " pod="openstack/ovsdbserver-sb-1" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.506911 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d163aa91-5efd-4b7a-94eb-c9b4f26fba7b-config\") pod \"ovsdbserver-sb-2\" (UID: \"d163aa91-5efd-4b7a-94eb-c9b4f26fba7b\") " pod="openstack/ovsdbserver-sb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.507645 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e7065704-60d1-44b1-a6a6-f23a25d20a3f-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"e7065704-60d1-44b1-a6a6-f23a25d20a3f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.507851 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/286a05d9-3f8e-4942-ad66-0a674aa88114-config\") pod \"ovsdbserver-sb-1\" (UID: \"286a05d9-3f8e-4942-ad66-0a674aa88114\") " pod="openstack/ovsdbserver-sb-1" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.507912 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gm7n8\" (UniqueName: \"kubernetes.io/projected/d163aa91-5efd-4b7a-94eb-c9b4f26fba7b-kube-api-access-gm7n8\") pod \"ovsdbserver-sb-2\" (UID: \"d163aa91-5efd-4b7a-94eb-c9b4f26fba7b\") " pod="openstack/ovsdbserver-sb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.507962 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fl7g2\" (UniqueName: \"kubernetes.io/projected/e7065704-60d1-44b1-a6a6-f23a25d20a3f-kube-api-access-fl7g2\") pod \"ovsdbserver-sb-0\" (UID: \"e7065704-60d1-44b1-a6a6-f23a25d20a3f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.507984 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/286a05d9-3f8e-4942-ad66-0a674aa88114-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"286a05d9-3f8e-4942-ad66-0a674aa88114\") " pod="openstack/ovsdbserver-sb-1" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.508082 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-40b0011b-1d7a-481e-b9e4-3be0d9a8caae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-40b0011b-1d7a-481e-b9e4-3be0d9a8caae\") pod \"ovsdbserver-sb-2\" (UID: \"d163aa91-5efd-4b7a-94eb-c9b4f26fba7b\") " pod="openstack/ovsdbserver-sb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.508156 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7065704-60d1-44b1-a6a6-f23a25d20a3f-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"e7065704-60d1-44b1-a6a6-f23a25d20a3f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.508200 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/286a05d9-3f8e-4942-ad66-0a674aa88114-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"286a05d9-3f8e-4942-ad66-0a674aa88114\") " pod="openstack/ovsdbserver-sb-1" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.508233 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d163aa91-5efd-4b7a-94eb-c9b4f26fba7b-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"d163aa91-5efd-4b7a-94eb-c9b4f26fba7b\") " pod="openstack/ovsdbserver-sb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.508260 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7065704-60d1-44b1-a6a6-f23a25d20a3f-config\") pod \"ovsdbserver-sb-0\" (UID: \"e7065704-60d1-44b1-a6a6-f23a25d20a3f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.508288 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d163aa91-5efd-4b7a-94eb-c9b4f26fba7b-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"d163aa91-5efd-4b7a-94eb-c9b4f26fba7b\") " pod="openstack/ovsdbserver-sb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.508348 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e7065704-60d1-44b1-a6a6-f23a25d20a3f-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"e7065704-60d1-44b1-a6a6-f23a25d20a3f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.508400 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/286a05d9-3f8e-4942-ad66-0a674aa88114-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"286a05d9-3f8e-4942-ad66-0a674aa88114\") " pod="openstack/ovsdbserver-sb-1" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.508449 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8lz4\" (UniqueName: \"kubernetes.io/projected/286a05d9-3f8e-4942-ad66-0a674aa88114-kube-api-access-n8lz4\") pod \"ovsdbserver-sb-1\" (UID: \"286a05d9-3f8e-4942-ad66-0a674aa88114\") " pod="openstack/ovsdbserver-sb-1" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.508503 5039 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ea75e230-83ad-45da-bdc6-728a3a2805df\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ea75e230-83ad-45da-bdc6-728a3a2805df\") pod \"ovsdbserver-sb-0\" (UID: \"e7065704-60d1-44b1-a6a6-f23a25d20a3f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.509472 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d163aa91-5efd-4b7a-94eb-c9b4f26fba7b-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"d163aa91-5efd-4b7a-94eb-c9b4f26fba7b\") " pod="openstack/ovsdbserver-sb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.510174 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d163aa91-5efd-4b7a-94eb-c9b4f26fba7b-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"d163aa91-5efd-4b7a-94eb-c9b4f26fba7b\") " pod="openstack/ovsdbserver-sb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.510550 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e7065704-60d1-44b1-a6a6-f23a25d20a3f-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"e7065704-60d1-44b1-a6a6-f23a25d20a3f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.510783 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7065704-60d1-44b1-a6a6-f23a25d20a3f-config\") pod \"ovsdbserver-sb-0\" (UID: \"e7065704-60d1-44b1-a6a6-f23a25d20a3f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.510853 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/286a05d9-3f8e-4942-ad66-0a674aa88114-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"286a05d9-3f8e-4942-ad66-0a674aa88114\") " pod="openstack/ovsdbserver-sb-1" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.513428 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d163aa91-5efd-4b7a-94eb-c9b4f26fba7b-config\") pod \"ovsdbserver-sb-2\" (UID: \"d163aa91-5efd-4b7a-94eb-c9b4f26fba7b\") " pod="openstack/ovsdbserver-sb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.517503 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/286a05d9-3f8e-4942-ad66-0a674aa88114-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"286a05d9-3f8e-4942-ad66-0a674aa88114\") " pod="openstack/ovsdbserver-sb-1" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.519492 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7065704-60d1-44b1-a6a6-f23a25d20a3f-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"e7065704-60d1-44b1-a6a6-f23a25d20a3f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.519770 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d163aa91-5efd-4b7a-94eb-c9b4f26fba7b-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"d163aa91-5efd-4b7a-94eb-c9b4f26fba7b\") " pod="openstack/ovsdbserver-sb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.521417 
5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/286a05d9-3f8e-4942-ad66-0a674aa88114-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"286a05d9-3f8e-4942-ad66-0a674aa88114\") " pod="openstack/ovsdbserver-sb-1" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.525490 5039 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.525540 5039 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-40b0011b-1d7a-481e-b9e4-3be0d9a8caae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-40b0011b-1d7a-481e-b9e4-3be0d9a8caae\") pod \"ovsdbserver-sb-2\" (UID: \"d163aa91-5efd-4b7a-94eb-c9b4f26fba7b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c909b3dc57ce5eb9c1e97a44bff33d0ccba42a8e5e0a8a835f7ca78a0361b9f2/globalmount\"" pod="openstack/ovsdbserver-sb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.525628 5039 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.525674 5039 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-1e09c855-6167-47e7-9a02-fe5ce0e6f072\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1e09c855-6167-47e7-9a02-fe5ce0e6f072\") pod \"ovsdbserver-sb-1\" (UID: \"286a05d9-3f8e-4942-ad66-0a674aa88114\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f57880c09109062ba88f4e92379a6769e185812f4f32bc6acd972f8836e0f114/globalmount\"" pod="openstack/ovsdbserver-sb-1" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.528463 5039 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.528511 5039 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ea75e230-83ad-45da-bdc6-728a3a2805df\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ea75e230-83ad-45da-bdc6-728a3a2805df\") pod \"ovsdbserver-sb-0\" (UID: \"e7065704-60d1-44b1-a6a6-f23a25d20a3f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/18bba15c87cc61c56c08d6183b1edbebc1d9b755612eb66c0f71a38763ada7a8/globalmount\"" pod="openstack/ovsdbserver-sb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.530750 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gm7n8\" (UniqueName: \"kubernetes.io/projected/d163aa91-5efd-4b7a-94eb-c9b4f26fba7b-kube-api-access-gm7n8\") pod \"ovsdbserver-sb-2\" (UID: \"d163aa91-5efd-4b7a-94eb-c9b4f26fba7b\") " pod="openstack/ovsdbserver-sb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.537740 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8lz4\" (UniqueName: \"kubernetes.io/projected/286a05d9-3f8e-4942-ad66-0a674aa88114-kube-api-access-n8lz4\") pod \"ovsdbserver-sb-1\" (UID: \"286a05d9-3f8e-4942-ad66-0a674aa88114\") " pod="openstack/ovsdbserver-sb-1" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.541421 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fl7g2\" (UniqueName: \"kubernetes.io/projected/e7065704-60d1-44b1-a6a6-f23a25d20a3f-kube-api-access-fl7g2\") pod \"ovsdbserver-sb-0\" (UID: \"e7065704-60d1-44b1-a6a6-f23a25d20a3f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.567910 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-1e09c855-6167-47e7-9a02-fe5ce0e6f072\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1e09c855-6167-47e7-9a02-fe5ce0e6f072\") pod \"ovsdbserver-sb-1\" (UID: \"286a05d9-3f8e-4942-ad66-0a674aa88114\") " pod="openstack/ovsdbserver-sb-1" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.574591 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ea75e230-83ad-45da-bdc6-728a3a2805df\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ea75e230-83ad-45da-bdc6-728a3a2805df\") pod \"ovsdbserver-sb-0\" (UID: \"e7065704-60d1-44b1-a6a6-f23a25d20a3f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.587040 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-40b0011b-1d7a-481e-b9e4-3be0d9a8caae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-40b0011b-1d7a-481e-b9e4-3be0d9a8caae\") pod \"ovsdbserver-sb-2\" (UID: \"d163aa91-5efd-4b7a-94eb-c9b4f26fba7b\") " pod="openstack/ovsdbserver-sb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.603440 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.686504 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.697351 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.747458 5039 util.go:30] "No sandbox for pod can be found. 
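The sequence above shows kubelet's two-phase CSI mount flow per volume: MountVolume.MountDevice (the device/global-mount step, whose staging call is skipped here because the kubevirt.io.hostpath-provisioner driver does not advertise the STAGE_UNSTAGE_VOLUME capability, per the csi_attacher.go:380 lines) followed by MountVolume.SetUp (the per-pod publish step). A minimal sketch for pairing the two phases per pod when reading a log like this one; the regex is derived only from the line format visible above and the script itself is illustrative, not part of kubelet:

#!/usr/bin/env python3
# Group kubelet volume-mount events per pod from raw kubelet log lines.
# Assumes only the klog line format seen above (escaped \" quotes included).
import re
import sys
from collections import defaultdict

# e.g.: "MountVolume.SetUp succeeded for volume \"scripts\" ... pod="openstack/ovsdbserver-nb-2"
EVENT = re.compile(
    r'(MountVolume\.(?:MountDevice|SetUp) succeeded) for volume \\"([^\\"]+)\\".*?pod="([^"]+)"'
)

def summarize(stream):
    events = defaultdict(list)  # pod -> [(phase, volume), ...] in log order
    for line in stream:
        m = EVENT.search(line)
        if m:
            phase, volume, pod = m.groups()
            events[pod].append((phase, volume))
    return events

if __name__ == "__main__":
    for pod, evs in sorted(summarize(sys.stdin).items()):
        print(pod)
        for phase, volume in evs:
            print(f"  {phase}: {volume}")

Fed this section on stdin, it would list each ovsdbserver pod with its MountDevice and SetUp successes in order, making it easy to spot a volume that staged but never published.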
Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.817889 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Jan 30 14:30:34 crc kubenswrapper[5039]: W0130 14:30:34.830366 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b5493e8_291c_4677_902a_89649a59dc48.slice/crio-a792b91fd7b34c10441f83965c34d99f3534feee2ef2f8f86e3dca2592ffc276 WatchSource:0}: Error finding container a792b91fd7b34c10441f83965c34d99f3534feee2ef2f8f86e3dca2592ffc276: Status 404 returned error can't find the container with id a792b91fd7b34c10441f83965c34d99f3534feee2ef2f8f86e3dca2592ffc276
Jan 30 14:30:35 crc kubenswrapper[5039]: I0130 14:30:35.066718 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"8b5493e8-291c-4677-902a-89649a59dc48","Type":"ContainerStarted","Data":"8bfbc815f6c0c0d468a90921e66114edbb020348afa53735c2587123d69142b3"}
Jan 30 14:30:35 crc kubenswrapper[5039]: I0130 14:30:35.067003 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"8b5493e8-291c-4677-902a-89649a59dc48","Type":"ContainerStarted","Data":"a792b91fd7b34c10441f83965c34d99f3534feee2ef2f8f86e3dca2592ffc276"}
Jan 30 14:30:35 crc kubenswrapper[5039]: I0130 14:30:35.105266 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"]
Jan 30 14:30:35 crc kubenswrapper[5039]: W0130 14:30:35.108152 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fc46623_afd6_4b9d_bf3d_79700d1ee972.slice/crio-a98013e951ade6d55cc67a180ec403ab58c7db2448d2f80b6f36fab7f38d251e WatchSource:0}: Error finding container a98013e951ade6d55cc67a180ec403ab58c7db2448d2f80b6f36fab7f38d251e: Status 404 returned error can't find the container with id a98013e951ade6d55cc67a180ec403ab58c7db2448d2f80b6f36fab7f38d251e
Jan 30 14:30:35 crc kubenswrapper[5039]: I0130 14:30:35.223534 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 30 14:30:35 crc kubenswrapper[5039]: W0130 14:30:35.236970 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7065704_60d1_44b1_a6a6_f23a25d20a3f.slice/crio-4554157c3e78313e113ecb5f040101f4dfecaa539e15dd11cbc4205db43a43c1 WatchSource:0}: Error finding container 4554157c3e78313e113ecb5f040101f4dfecaa539e15dd11cbc4205db43a43c1: Status 404 returned error can't find the container with id 4554157c3e78313e113ecb5f040101f4dfecaa539e15dd11cbc4205db43a43c1
Jan 30 14:30:35 crc kubenswrapper[5039]: I0130 14:30:35.328105 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"]
Jan 30 14:30:35 crc kubenswrapper[5039]: W0130 14:30:35.338239 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd163aa91_5efd_4b7a_94eb_c9b4f26fba7b.slice/crio-640549680ed2193ea098b4132cc1fa3a670c847765f0ac6aac701bd678ad1057 WatchSource:0}: Error finding container 640549680ed2193ea098b4132cc1fa3a670c847765f0ac6aac701bd678ad1057: Status 404 returned error can't find the container with id 640549680ed2193ea098b4132cc1fa3a670c847765f0ac6aac701bd678ad1057
Jan 30 14:30:35 crc kubenswrapper[5039]: I0130 14:30:35.430360 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"]
Jan 30 14:30:36 crc kubenswrapper[5039]: I0130 14:30:36.075683 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"e7065704-60d1-44b1-a6a6-f23a25d20a3f","Type":"ContainerStarted","Data":"3d1eeebac752a58db16fc68c43a3c48f23c67308cb52b28d71e2608dbe7a99cc"}
Jan 30 14:30:36 crc kubenswrapper[5039]: I0130 14:30:36.075727 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"e7065704-60d1-44b1-a6a6-f23a25d20a3f","Type":"ContainerStarted","Data":"0049ce5fbe127ae8f1d2737d33fb7242b23a4cd220db768ca44f92fb8f0a971c"}
Jan 30 14:30:36 crc kubenswrapper[5039]: I0130 14:30:36.075737 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"e7065704-60d1-44b1-a6a6-f23a25d20a3f","Type":"ContainerStarted","Data":"4554157c3e78313e113ecb5f040101f4dfecaa539e15dd11cbc4205db43a43c1"}
Jan 30 14:30:36 crc kubenswrapper[5039]: I0130 14:30:36.078336 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"286a05d9-3f8e-4942-ad66-0a674aa88114","Type":"ContainerStarted","Data":"ddab2956067f421cff1e9f3a822956b229f23317b9c7af362beebefe7415e5b2"}
Jan 30 14:30:36 crc kubenswrapper[5039]: I0130 14:30:36.078387 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"286a05d9-3f8e-4942-ad66-0a674aa88114","Type":"ContainerStarted","Data":"db48769a498c37609f4cb18376e43480953038a78ec282735fea77e725417a15"}
Jan 30 14:30:36 crc kubenswrapper[5039]: I0130 14:30:36.078402 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"286a05d9-3f8e-4942-ad66-0a674aa88114","Type":"ContainerStarted","Data":"a157eb405ba716d4574a8ee991b87afe99b0b8834964c8f96dbf2aa30a36ccc9"}
Jan 30 14:30:36 crc kubenswrapper[5039]: I0130 14:30:36.080646 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"d163aa91-5efd-4b7a-94eb-c9b4f26fba7b","Type":"ContainerStarted","Data":"d9e1fc25d2e6f68cd300d720107f1872712ff6844eba0094fdf1da6381ed50d6"}
Jan 30 14:30:36 crc kubenswrapper[5039]: I0130 14:30:36.080679 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"d163aa91-5efd-4b7a-94eb-c9b4f26fba7b","Type":"ContainerStarted","Data":"04ca0d45b53d015091dfb4c29dcc507a044e36353210bbdd2c1d0ffc55f79c97"}
Jan 30 14:30:36 crc kubenswrapper[5039]: I0130 14:30:36.080694 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"d163aa91-5efd-4b7a-94eb-c9b4f26fba7b","Type":"ContainerStarted","Data":"640549680ed2193ea098b4132cc1fa3a670c847765f0ac6aac701bd678ad1057"}
Jan 30 14:30:36 crc kubenswrapper[5039]: I0130 14:30:36.083584 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"1fc46623-afd6-4b9d-bf3d-79700d1ee972","Type":"ContainerStarted","Data":"9587287751c0018562035789d50d8fb334e5e5db6fde3d501be982e9bf7a7db9"}
Jan 30 14:30:36 crc kubenswrapper[5039]: I0130 14:30:36.083619 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"1fc46623-afd6-4b9d-bf3d-79700d1ee972","Type":"ContainerStarted","Data":"bce0a0396fb40f088671f9a4b5562f12bac5dcb9ca8356801bc00ca830685b2a"}
Jan 30 14:30:36 crc kubenswrapper[5039]: I0130 14:30:36.083632 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"1fc46623-afd6-4b9d-bf3d-79700d1ee972","Type":"ContainerStarted","Data":"a98013e951ade6d55cc67a180ec403ab58c7db2448d2f80b6f36fab7f38d251e"}
Jan 30 14:30:36 crc kubenswrapper[5039]: I0130 14:30:36.085840 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"8b5493e8-291c-4677-902a-89649a59dc48","Type":"ContainerStarted","Data":"2f8fd9d0a65e6a551029e62bf016d1b80f716552fae027b42e3da6c16df032f2"}
Jan 30 14:30:36 crc kubenswrapper[5039]: I0130 14:30:36.102769 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=3.102737354 podStartE2EDuration="3.102737354s" podCreationTimestamp="2026-01-30 14:30:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:30:36.102445556 +0000 UTC m=+5200.763126793" watchObservedRunningTime="2026-01-30 14:30:36.102737354 +0000 UTC m=+5200.763418651"
Jan 30 14:30:36 crc kubenswrapper[5039]: I0130 14:30:36.160249 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-2" podStartSLOduration=3.160228574 podStartE2EDuration="3.160228574s" podCreationTimestamp="2026-01-30 14:30:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:30:36.153261365 +0000 UTC m=+5200.813942602" watchObservedRunningTime="2026-01-30 14:30:36.160228574 +0000 UTC m=+5200.820909801"
Jan 30 14:30:36 crc kubenswrapper[5039]: I0130 14:30:36.160382 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-1" podStartSLOduration=3.160374778 podStartE2EDuration="3.160374778s" podCreationTimestamp="2026-01-30 14:30:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:30:36.126225931 +0000 UTC m=+5200.786907238" watchObservedRunningTime="2026-01-30 14:30:36.160374778 +0000 UTC m=+5200.821056015"
Jan 30 14:30:36 crc kubenswrapper[5039]: I0130 14:30:36.173807 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=3.1737881310000002 podStartE2EDuration="3.173788131s" podCreationTimestamp="2026-01-30 14:30:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:30:36.170828121 +0000 UTC m=+5200.831509368" watchObservedRunningTime="2026-01-30 14:30:36.173788131 +0000 UTC m=+5200.834469358"
Jan 30 14:30:36 crc kubenswrapper[5039]: I0130 14:30:36.190346 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-2" podStartSLOduration=3.19032774 podStartE2EDuration="3.19032774s" podCreationTimestamp="2026-01-30 14:30:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:30:36.187115963 +0000 UTC m=+5200.847797210" watchObservedRunningTime="2026-01-30 14:30:36.19032774 +0000 UTC m=+5200.851008967"
Jan 30 14:30:36 crc kubenswrapper[5039]: I0130 14:30:36.256403 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"]
Jan 30 14:30:37 crc kubenswrapper[5039]: I0130 14:30:37.095720 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"5db342ca-88a0-41e4-9cb8-407be8357dd0","Type":"ContainerStarted","Data":"d0efc8c27e39c5a19feb4aa4ce4bebf38bc8d8257a2f035c5c03256c532f03f9"}
Jan 30 14:30:37 crc kubenswrapper[5039]: I0130 14:30:37.096381 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"5db342ca-88a0-41e4-9cb8-407be8357dd0","Type":"ContainerStarted","Data":"0bc35af09ceec0e41d741e6fc5badf8cf552c6687e994a9aadec28c63551f984"}
Jan 30 14:30:37 crc kubenswrapper[5039]: I0130 14:30:37.096511 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"5db342ca-88a0-41e4-9cb8-407be8357dd0","Type":"ContainerStarted","Data":"2e15ef3f132318593de804c163c0250c7504b9dd36987293aadf96cd83f710d6"}
Jan 30 14:30:37 crc kubenswrapper[5039]: I0130 14:30:37.121517 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-1" podStartSLOduration=4.121495987 podStartE2EDuration="4.121495987s" podCreationTimestamp="2026-01-30 14:30:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:30:37.114655021 +0000 UTC m=+5201.775336268" watchObservedRunningTime="2026-01-30 14:30:37.121495987 +0000 UTC m=+5201.782177214"
Jan 30 14:30:37 crc kubenswrapper[5039]: I0130 14:30:37.413280 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Jan 30 14:30:37 crc kubenswrapper[5039]: I0130 14:30:37.458813 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-2"
Jan 30 14:30:37 crc kubenswrapper[5039]: I0130 14:30:37.604568 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Jan 30 14:30:37 crc kubenswrapper[5039]: I0130 14:30:37.687102 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-2"
Jan 30 14:30:37 crc kubenswrapper[5039]: I0130 14:30:37.697902 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-1"
Jan 30 14:30:37 crc kubenswrapper[5039]: I0130 14:30:37.742793 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 14:30:37 crc kubenswrapper[5039]: I0130 14:30:37.742850 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 14:30:37 crc kubenswrapper[5039]: I0130 14:30:37.747800 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-1"
Jan 30 14:30:39 crc kubenswrapper[5039]: I0130 14:30:39.413943 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Jan 30 14:30:39 crc kubenswrapper[5039]: I0130 14:30:39.458439 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-2"
Jan 30 14:30:39 crc kubenswrapper[5039]: I0130 14:30:39.604525 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
status="" pod="openstack/ovsdbserver-sb-0" Jan 30 14:30:39 crc kubenswrapper[5039]: I0130 14:30:39.687491 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-2" Jan 30 14:30:39 crc kubenswrapper[5039]: I0130 14:30:39.698427 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-1" Jan 30 14:30:39 crc kubenswrapper[5039]: I0130 14:30:39.748311 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-1" Jan 30 14:30:40 crc kubenswrapper[5039]: I0130 14:30:40.449108 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 30 14:30:40 crc kubenswrapper[5039]: I0130 14:30:40.489417 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 30 14:30:40 crc kubenswrapper[5039]: I0130 14:30:40.507680 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-2" Jan 30 14:30:40 crc kubenswrapper[5039]: I0130 14:30:40.554704 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-2" Jan 30 14:30:40 crc kubenswrapper[5039]: I0130 14:30:40.644681 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 30 14:30:40 crc kubenswrapper[5039]: I0130 14:30:40.698142 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-dff659fb9-w2q6l"] Jan 30 14:30:40 crc kubenswrapper[5039]: I0130 14:30:40.699440 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dff659fb9-w2q6l" Jan 30 14:30:40 crc kubenswrapper[5039]: I0130 14:30:40.701968 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 30 14:30:40 crc kubenswrapper[5039]: I0130 14:30:40.734188 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 30 14:30:40 crc kubenswrapper[5039]: I0130 14:30:40.735919 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dff659fb9-w2q6l"] Jan 30 14:30:40 crc kubenswrapper[5039]: I0130 14:30:40.755229 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-1" Jan 30 14:30:40 crc kubenswrapper[5039]: I0130 14:30:40.764178 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-2" Jan 30 14:30:40 crc kubenswrapper[5039]: I0130 14:30:40.803772 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-1" Jan 30 14:30:40 crc kubenswrapper[5039]: I0130 14:30:40.805483 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-2" Jan 30 14:30:40 crc kubenswrapper[5039]: I0130 14:30:40.813509 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm8gp\" (UniqueName: \"kubernetes.io/projected/87bbd1d7-6e9f-47e1-ae09-504b930831f9-kube-api-access-rm8gp\") pod \"dnsmasq-dns-dff659fb9-w2q6l\" (UID: \"87bbd1d7-6e9f-47e1-ae09-504b930831f9\") " pod="openstack/dnsmasq-dns-dff659fb9-w2q6l" Jan 30 14:30:40 crc kubenswrapper[5039]: I0130 14:30:40.813584 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/87bbd1d7-6e9f-47e1-ae09-504b930831f9-config\") pod \"dnsmasq-dns-dff659fb9-w2q6l\" (UID: \"87bbd1d7-6e9f-47e1-ae09-504b930831f9\") " pod="openstack/dnsmasq-dns-dff659fb9-w2q6l" Jan 30 14:30:40 crc kubenswrapper[5039]: I0130 14:30:40.813621 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/87bbd1d7-6e9f-47e1-ae09-504b930831f9-ovsdbserver-nb\") pod \"dnsmasq-dns-dff659fb9-w2q6l\" (UID: \"87bbd1d7-6e9f-47e1-ae09-504b930831f9\") " pod="openstack/dnsmasq-dns-dff659fb9-w2q6l" Jan 30 14:30:40 crc kubenswrapper[5039]: I0130 14:30:40.813653 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87bbd1d7-6e9f-47e1-ae09-504b930831f9-dns-svc\") pod \"dnsmasq-dns-dff659fb9-w2q6l\" (UID: \"87bbd1d7-6e9f-47e1-ae09-504b930831f9\") " pod="openstack/dnsmasq-dns-dff659fb9-w2q6l" Jan 30 14:30:40 crc kubenswrapper[5039]: I0130 14:30:40.814133 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-1" Jan 30 14:30:40 crc kubenswrapper[5039]: I0130 14:30:40.915403 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rm8gp\" (UniqueName: \"kubernetes.io/projected/87bbd1d7-6e9f-47e1-ae09-504b930831f9-kube-api-access-rm8gp\") pod \"dnsmasq-dns-dff659fb9-w2q6l\" (UID: \"87bbd1d7-6e9f-47e1-ae09-504b930831f9\") " pod="openstack/dnsmasq-dns-dff659fb9-w2q6l" Jan 30 14:30:40 crc kubenswrapper[5039]: I0130 14:30:40.915475 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87bbd1d7-6e9f-47e1-ae09-504b930831f9-config\") pod \"dnsmasq-dns-dff659fb9-w2q6l\" (UID: \"87bbd1d7-6e9f-47e1-ae09-504b930831f9\") " pod="openstack/dnsmasq-dns-dff659fb9-w2q6l" Jan 30 14:30:40 crc kubenswrapper[5039]: I0130 14:30:40.915509 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/87bbd1d7-6e9f-47e1-ae09-504b930831f9-ovsdbserver-nb\") pod \"dnsmasq-dns-dff659fb9-w2q6l\" (UID: \"87bbd1d7-6e9f-47e1-ae09-504b930831f9\") " pod="openstack/dnsmasq-dns-dff659fb9-w2q6l" Jan 30 14:30:40 crc kubenswrapper[5039]: I0130 14:30:40.915539 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87bbd1d7-6e9f-47e1-ae09-504b930831f9-dns-svc\") pod \"dnsmasq-dns-dff659fb9-w2q6l\" (UID: \"87bbd1d7-6e9f-47e1-ae09-504b930831f9\") " pod="openstack/dnsmasq-dns-dff659fb9-w2q6l" Jan 30 14:30:40 crc kubenswrapper[5039]: I0130 14:30:40.916593 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87bbd1d7-6e9f-47e1-ae09-504b930831f9-config\") pod \"dnsmasq-dns-dff659fb9-w2q6l\" (UID: \"87bbd1d7-6e9f-47e1-ae09-504b930831f9\") " pod="openstack/dnsmasq-dns-dff659fb9-w2q6l" Jan 30 14:30:40 crc kubenswrapper[5039]: I0130 14:30:40.916591 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/87bbd1d7-6e9f-47e1-ae09-504b930831f9-ovsdbserver-nb\") pod \"dnsmasq-dns-dff659fb9-w2q6l\" (UID: \"87bbd1d7-6e9f-47e1-ae09-504b930831f9\") " pod="openstack/dnsmasq-dns-dff659fb9-w2q6l" Jan 30 14:30:40 crc kubenswrapper[5039]: I0130 14:30:40.916920 5039 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87bbd1d7-6e9f-47e1-ae09-504b930831f9-dns-svc\") pod \"dnsmasq-dns-dff659fb9-w2q6l\" (UID: \"87bbd1d7-6e9f-47e1-ae09-504b930831f9\") " pod="openstack/dnsmasq-dns-dff659fb9-w2q6l" Jan 30 14:30:40 crc kubenswrapper[5039]: I0130 14:30:40.942930 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rm8gp\" (UniqueName: \"kubernetes.io/projected/87bbd1d7-6e9f-47e1-ae09-504b930831f9-kube-api-access-rm8gp\") pod \"dnsmasq-dns-dff659fb9-w2q6l\" (UID: \"87bbd1d7-6e9f-47e1-ae09-504b930831f9\") " pod="openstack/dnsmasq-dns-dff659fb9-w2q6l" Jan 30 14:30:41 crc kubenswrapper[5039]: I0130 14:30:41.042782 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dff659fb9-w2q6l" Jan 30 14:30:41 crc kubenswrapper[5039]: I0130 14:30:41.159037 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dff659fb9-w2q6l"] Jan 30 14:30:41 crc kubenswrapper[5039]: I0130 14:30:41.185238 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79d45df9fc-dz5zf"] Jan 30 14:30:41 crc kubenswrapper[5039]: I0130 14:30:41.186425 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79d45df9fc-dz5zf" Jan 30 14:30:41 crc kubenswrapper[5039]: I0130 14:30:41.195198 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 30 14:30:41 crc kubenswrapper[5039]: I0130 14:30:41.216763 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-1" Jan 30 14:30:41 crc kubenswrapper[5039]: I0130 14:30:41.219179 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79d45df9fc-dz5zf"] Jan 30 14:30:41 crc kubenswrapper[5039]: I0130 14:30:41.323230 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/16c7b5ae-068f-4c5b-a918-b89b62def454-ovsdbserver-sb\") pod \"dnsmasq-dns-79d45df9fc-dz5zf\" (UID: \"16c7b5ae-068f-4c5b-a918-b89b62def454\") " pod="openstack/dnsmasq-dns-79d45df9fc-dz5zf" Jan 30 14:30:41 crc kubenswrapper[5039]: I0130 14:30:41.323636 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16c7b5ae-068f-4c5b-a918-b89b62def454-dns-svc\") pod \"dnsmasq-dns-79d45df9fc-dz5zf\" (UID: \"16c7b5ae-068f-4c5b-a918-b89b62def454\") " pod="openstack/dnsmasq-dns-79d45df9fc-dz5zf" Jan 30 14:30:41 crc kubenswrapper[5039]: I0130 14:30:41.324051 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mllrf\" (UniqueName: \"kubernetes.io/projected/16c7b5ae-068f-4c5b-a918-b89b62def454-kube-api-access-mllrf\") pod \"dnsmasq-dns-79d45df9fc-dz5zf\" (UID: \"16c7b5ae-068f-4c5b-a918-b89b62def454\") " pod="openstack/dnsmasq-dns-79d45df9fc-dz5zf" Jan 30 14:30:41 crc kubenswrapper[5039]: I0130 14:30:41.324346 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/16c7b5ae-068f-4c5b-a918-b89b62def454-ovsdbserver-nb\") pod \"dnsmasq-dns-79d45df9fc-dz5zf\" (UID: \"16c7b5ae-068f-4c5b-a918-b89b62def454\") " pod="openstack/dnsmasq-dns-79d45df9fc-dz5zf" Jan 30 14:30:41 crc kubenswrapper[5039]: I0130 14:30:41.324404 5039 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16c7b5ae-068f-4c5b-a918-b89b62def454-config\") pod \"dnsmasq-dns-79d45df9fc-dz5zf\" (UID: \"16c7b5ae-068f-4c5b-a918-b89b62def454\") " pod="openstack/dnsmasq-dns-79d45df9fc-dz5zf" Jan 30 14:30:41 crc kubenswrapper[5039]: I0130 14:30:41.425847 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/16c7b5ae-068f-4c5b-a918-b89b62def454-ovsdbserver-nb\") pod \"dnsmasq-dns-79d45df9fc-dz5zf\" (UID: \"16c7b5ae-068f-4c5b-a918-b89b62def454\") " pod="openstack/dnsmasq-dns-79d45df9fc-dz5zf" Jan 30 14:30:41 crc kubenswrapper[5039]: I0130 14:30:41.425902 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16c7b5ae-068f-4c5b-a918-b89b62def454-config\") pod \"dnsmasq-dns-79d45df9fc-dz5zf\" (UID: \"16c7b5ae-068f-4c5b-a918-b89b62def454\") " pod="openstack/dnsmasq-dns-79d45df9fc-dz5zf" Jan 30 14:30:41 crc kubenswrapper[5039]: I0130 14:30:41.425951 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/16c7b5ae-068f-4c5b-a918-b89b62def454-ovsdbserver-sb\") pod \"dnsmasq-dns-79d45df9fc-dz5zf\" (UID: \"16c7b5ae-068f-4c5b-a918-b89b62def454\") " pod="openstack/dnsmasq-dns-79d45df9fc-dz5zf" Jan 30 14:30:41 crc kubenswrapper[5039]: I0130 14:30:41.425976 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16c7b5ae-068f-4c5b-a918-b89b62def454-dns-svc\") pod \"dnsmasq-dns-79d45df9fc-dz5zf\" (UID: \"16c7b5ae-068f-4c5b-a918-b89b62def454\") " pod="openstack/dnsmasq-dns-79d45df9fc-dz5zf" Jan 30 14:30:41 crc kubenswrapper[5039]: I0130 14:30:41.426039 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mllrf\" (UniqueName: \"kubernetes.io/projected/16c7b5ae-068f-4c5b-a918-b89b62def454-kube-api-access-mllrf\") pod \"dnsmasq-dns-79d45df9fc-dz5zf\" (UID: \"16c7b5ae-068f-4c5b-a918-b89b62def454\") " pod="openstack/dnsmasq-dns-79d45df9fc-dz5zf" Jan 30 14:30:41 crc kubenswrapper[5039]: I0130 14:30:41.427244 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/16c7b5ae-068f-4c5b-a918-b89b62def454-ovsdbserver-nb\") pod \"dnsmasq-dns-79d45df9fc-dz5zf\" (UID: \"16c7b5ae-068f-4c5b-a918-b89b62def454\") " pod="openstack/dnsmasq-dns-79d45df9fc-dz5zf" Jan 30 14:30:41 crc kubenswrapper[5039]: I0130 14:30:41.427799 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16c7b5ae-068f-4c5b-a918-b89b62def454-config\") pod \"dnsmasq-dns-79d45df9fc-dz5zf\" (UID: \"16c7b5ae-068f-4c5b-a918-b89b62def454\") " pod="openstack/dnsmasq-dns-79d45df9fc-dz5zf" Jan 30 14:30:41 crc kubenswrapper[5039]: I0130 14:30:41.428550 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16c7b5ae-068f-4c5b-a918-b89b62def454-dns-svc\") pod \"dnsmasq-dns-79d45df9fc-dz5zf\" (UID: \"16c7b5ae-068f-4c5b-a918-b89b62def454\") " pod="openstack/dnsmasq-dns-79d45df9fc-dz5zf" Jan 30 14:30:41 crc kubenswrapper[5039]: I0130 14:30:41.428919 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/16c7b5ae-068f-4c5b-a918-b89b62def454-ovsdbserver-sb\") pod \"dnsmasq-dns-79d45df9fc-dz5zf\" (UID: \"16c7b5ae-068f-4c5b-a918-b89b62def454\") " pod="openstack/dnsmasq-dns-79d45df9fc-dz5zf" Jan 30 14:30:41 crc kubenswrapper[5039]: I0130 14:30:41.445990 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mllrf\" (UniqueName: \"kubernetes.io/projected/16c7b5ae-068f-4c5b-a918-b89b62def454-kube-api-access-mllrf\") pod \"dnsmasq-dns-79d45df9fc-dz5zf\" (UID: \"16c7b5ae-068f-4c5b-a918-b89b62def454\") " pod="openstack/dnsmasq-dns-79d45df9fc-dz5zf" Jan 30 14:30:41 crc kubenswrapper[5039]: I0130 14:30:41.513448 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79d45df9fc-dz5zf" Jan 30 14:30:41 crc kubenswrapper[5039]: I0130 14:30:41.594148 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dff659fb9-w2q6l"] Jan 30 14:30:41 crc kubenswrapper[5039]: I0130 14:30:41.951779 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79d45df9fc-dz5zf"] Jan 30 14:30:41 crc kubenswrapper[5039]: W0130 14:30:41.955249 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16c7b5ae_068f_4c5b_a918_b89b62def454.slice/crio-90d5f8a80da114a7275c833312588d237a1d89b9c9a1fb8f99fe15cccf89412b WatchSource:0}: Error finding container 90d5f8a80da114a7275c833312588d237a1d89b9c9a1fb8f99fe15cccf89412b: Status 404 returned error can't find the container with id 90d5f8a80da114a7275c833312588d237a1d89b9c9a1fb8f99fe15cccf89412b Jan 30 14:30:42 crc kubenswrapper[5039]: I0130 14:30:42.152215 5039 generic.go:334] "Generic (PLEG): container finished" podID="87bbd1d7-6e9f-47e1-ae09-504b930831f9" containerID="59e1dfdd2276c3f7a9369196ea27bb5fc2752450531fc00bf4c938f787c825db" exitCode=0 Jan 30 14:30:42 crc kubenswrapper[5039]: I0130 14:30:42.152446 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dff659fb9-w2q6l" event={"ID":"87bbd1d7-6e9f-47e1-ae09-504b930831f9","Type":"ContainerDied","Data":"59e1dfdd2276c3f7a9369196ea27bb5fc2752450531fc00bf4c938f787c825db"} Jan 30 14:30:42 crc kubenswrapper[5039]: I0130 14:30:42.152608 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dff659fb9-w2q6l" event={"ID":"87bbd1d7-6e9f-47e1-ae09-504b930831f9","Type":"ContainerStarted","Data":"f450c5c4f90b9e698b8695bd402280aa814a229629d9e3b43346684e1cc7f9df"} Jan 30 14:30:42 crc kubenswrapper[5039]: I0130 14:30:42.156554 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79d45df9fc-dz5zf" event={"ID":"16c7b5ae-068f-4c5b-a918-b89b62def454","Type":"ContainerStarted","Data":"d38797f1d307cc093d61172b2adda7044ead616969318d59da9fcd27805c535b"} Jan 30 14:30:42 crc kubenswrapper[5039]: I0130 14:30:42.156599 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79d45df9fc-dz5zf" event={"ID":"16c7b5ae-068f-4c5b-a918-b89b62def454","Type":"ContainerStarted","Data":"90d5f8a80da114a7275c833312588d237a1d89b9c9a1fb8f99fe15cccf89412b"} Jan 30 14:30:42 crc kubenswrapper[5039]: I0130 14:30:42.429233 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-dff659fb9-w2q6l" Jan 30 14:30:42 crc kubenswrapper[5039]: I0130 14:30:42.548049 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/87bbd1d7-6e9f-47e1-ae09-504b930831f9-ovsdbserver-nb\") pod \"87bbd1d7-6e9f-47e1-ae09-504b930831f9\" (UID: \"87bbd1d7-6e9f-47e1-ae09-504b930831f9\") " Jan 30 14:30:42 crc kubenswrapper[5039]: I0130 14:30:42.548114 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rm8gp\" (UniqueName: \"kubernetes.io/projected/87bbd1d7-6e9f-47e1-ae09-504b930831f9-kube-api-access-rm8gp\") pod \"87bbd1d7-6e9f-47e1-ae09-504b930831f9\" (UID: \"87bbd1d7-6e9f-47e1-ae09-504b930831f9\") " Jan 30 14:30:42 crc kubenswrapper[5039]: I0130 14:30:42.548205 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87bbd1d7-6e9f-47e1-ae09-504b930831f9-dns-svc\") pod \"87bbd1d7-6e9f-47e1-ae09-504b930831f9\" (UID: \"87bbd1d7-6e9f-47e1-ae09-504b930831f9\") " Jan 30 14:30:42 crc kubenswrapper[5039]: I0130 14:30:42.548230 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87bbd1d7-6e9f-47e1-ae09-504b930831f9-config\") pod \"87bbd1d7-6e9f-47e1-ae09-504b930831f9\" (UID: \"87bbd1d7-6e9f-47e1-ae09-504b930831f9\") " Jan 30 14:30:42 crc kubenswrapper[5039]: I0130 14:30:42.552778 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87bbd1d7-6e9f-47e1-ae09-504b930831f9-kube-api-access-rm8gp" (OuterVolumeSpecName: "kube-api-access-rm8gp") pod "87bbd1d7-6e9f-47e1-ae09-504b930831f9" (UID: "87bbd1d7-6e9f-47e1-ae09-504b930831f9"). InnerVolumeSpecName "kube-api-access-rm8gp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:30:42 crc kubenswrapper[5039]: I0130 14:30:42.569437 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87bbd1d7-6e9f-47e1-ae09-504b930831f9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "87bbd1d7-6e9f-47e1-ae09-504b930831f9" (UID: "87bbd1d7-6e9f-47e1-ae09-504b930831f9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:30:42 crc kubenswrapper[5039]: I0130 14:30:42.570754 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87bbd1d7-6e9f-47e1-ae09-504b930831f9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "87bbd1d7-6e9f-47e1-ae09-504b930831f9" (UID: "87bbd1d7-6e9f-47e1-ae09-504b930831f9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:30:42 crc kubenswrapper[5039]: I0130 14:30:42.572612 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87bbd1d7-6e9f-47e1-ae09-504b930831f9-config" (OuterVolumeSpecName: "config") pod "87bbd1d7-6e9f-47e1-ae09-504b930831f9" (UID: "87bbd1d7-6e9f-47e1-ae09-504b930831f9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:30:42 crc kubenswrapper[5039]: I0130 14:30:42.650096 5039 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87bbd1d7-6e9f-47e1-ae09-504b930831f9-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 14:30:42 crc kubenswrapper[5039]: I0130 14:30:42.650314 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87bbd1d7-6e9f-47e1-ae09-504b930831f9-config\") on node \"crc\" DevicePath \"\"" Jan 30 14:30:42 crc kubenswrapper[5039]: I0130 14:30:42.650323 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/87bbd1d7-6e9f-47e1-ae09-504b930831f9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 14:30:42 crc kubenswrapper[5039]: I0130 14:30:42.650334 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rm8gp\" (UniqueName: \"kubernetes.io/projected/87bbd1d7-6e9f-47e1-ae09-504b930831f9-kube-api-access-rm8gp\") on node \"crc\" DevicePath \"\"" Jan 30 14:30:43 crc kubenswrapper[5039]: I0130 14:30:43.165256 5039 generic.go:334] "Generic (PLEG): container finished" podID="16c7b5ae-068f-4c5b-a918-b89b62def454" containerID="d38797f1d307cc093d61172b2adda7044ead616969318d59da9fcd27805c535b" exitCode=0 Jan 30 14:30:43 crc kubenswrapper[5039]: I0130 14:30:43.165306 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79d45df9fc-dz5zf" event={"ID":"16c7b5ae-068f-4c5b-a918-b89b62def454","Type":"ContainerDied","Data":"d38797f1d307cc093d61172b2adda7044ead616969318d59da9fcd27805c535b"} Jan 30 14:30:43 crc kubenswrapper[5039]: I0130 14:30:43.166899 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dff659fb9-w2q6l" event={"ID":"87bbd1d7-6e9f-47e1-ae09-504b930831f9","Type":"ContainerDied","Data":"f450c5c4f90b9e698b8695bd402280aa814a229629d9e3b43346684e1cc7f9df"} Jan 30 14:30:43 crc kubenswrapper[5039]: I0130 14:30:43.166936 5039 scope.go:117] "RemoveContainer" containerID="59e1dfdd2276c3f7a9369196ea27bb5fc2752450531fc00bf4c938f787c825db" Jan 30 14:30:43 crc kubenswrapper[5039]: I0130 14:30:43.167002 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dff659fb9-w2q6l" Jan 30 14:30:43 crc kubenswrapper[5039]: I0130 14:30:43.386516 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dff659fb9-w2q6l"] Jan 30 14:30:43 crc kubenswrapper[5039]: I0130 14:30:43.394613 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-dff659fb9-w2q6l"] Jan 30 14:30:44 crc kubenswrapper[5039]: I0130 14:30:44.065598 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-copy-data"] Jan 30 14:30:44 crc kubenswrapper[5039]: E0130 14:30:44.066401 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87bbd1d7-6e9f-47e1-ae09-504b930831f9" containerName="init" Jan 30 14:30:44 crc kubenswrapper[5039]: I0130 14:30:44.066417 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="87bbd1d7-6e9f-47e1-ae09-504b930831f9" containerName="init" Jan 30 14:30:44 crc kubenswrapper[5039]: I0130 14:30:44.066621 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="87bbd1d7-6e9f-47e1-ae09-504b930831f9" containerName="init" Jan 30 14:30:44 crc kubenswrapper[5039]: I0130 14:30:44.067283 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Jan 30 14:30:44 crc kubenswrapper[5039]: I0130 14:30:44.070448 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovn-data-cert" Jan 30 14:30:44 crc kubenswrapper[5039]: I0130 14:30:44.080078 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Jan 30 14:30:44 crc kubenswrapper[5039]: I0130 14:30:44.102566 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87bbd1d7-6e9f-47e1-ae09-504b930831f9" path="/var/lib/kubelet/pods/87bbd1d7-6e9f-47e1-ae09-504b930831f9/volumes" Jan 30 14:30:44 crc kubenswrapper[5039]: I0130 14:30:44.177953 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79d45df9fc-dz5zf" event={"ID":"16c7b5ae-068f-4c5b-a918-b89b62def454","Type":"ContainerStarted","Data":"5807bf779b3fc5b31899937700f3cee444f3c6ddd58f551d06326e6afd6a8626"} Jan 30 14:30:44 crc kubenswrapper[5039]: I0130 14:30:44.178258 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79d45df9fc-dz5zf" Jan 30 14:30:44 crc kubenswrapper[5039]: I0130 14:30:44.189973 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-fb6e0a65-399c-42b5-86ff-9d74a3fae1e7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fb6e0a65-399c-42b5-86ff-9d74a3fae1e7\") pod \"ovn-copy-data\" (UID: \"2fa144db-c324-4fc0-9076-a6704fc1b00b\") " pod="openstack/ovn-copy-data" Jan 30 14:30:44 crc kubenswrapper[5039]: I0130 14:30:44.190040 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnz5x\" (UniqueName: \"kubernetes.io/projected/2fa144db-c324-4fc0-9076-a6704fc1b00b-kube-api-access-mnz5x\") pod \"ovn-copy-data\" (UID: \"2fa144db-c324-4fc0-9076-a6704fc1b00b\") " pod="openstack/ovn-copy-data" Jan 30 14:30:44 crc kubenswrapper[5039]: I0130 14:30:44.190206 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/2fa144db-c324-4fc0-9076-a6704fc1b00b-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"2fa144db-c324-4fc0-9076-a6704fc1b00b\") " pod="openstack/ovn-copy-data" Jan 30 14:30:44 crc kubenswrapper[5039]: I0130 14:30:44.196062 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79d45df9fc-dz5zf" podStartSLOduration=3.196042563 podStartE2EDuration="3.196042563s" podCreationTimestamp="2026-01-30 14:30:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:30:44.193309659 +0000 UTC m=+5208.853990896" watchObservedRunningTime="2026-01-30 14:30:44.196042563 +0000 UTC m=+5208.856723800" Jan 30 14:30:44 crc kubenswrapper[5039]: I0130 14:30:44.291853 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/2fa144db-c324-4fc0-9076-a6704fc1b00b-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"2fa144db-c324-4fc0-9076-a6704fc1b00b\") " pod="openstack/ovn-copy-data" Jan 30 14:30:44 crc kubenswrapper[5039]: I0130 14:30:44.291917 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-fb6e0a65-399c-42b5-86ff-9d74a3fae1e7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fb6e0a65-399c-42b5-86ff-9d74a3fae1e7\") pod \"ovn-copy-data\" (UID: 
\"2fa144db-c324-4fc0-9076-a6704fc1b00b\") " pod="openstack/ovn-copy-data" Jan 30 14:30:44 crc kubenswrapper[5039]: I0130 14:30:44.291954 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnz5x\" (UniqueName: \"kubernetes.io/projected/2fa144db-c324-4fc0-9076-a6704fc1b00b-kube-api-access-mnz5x\") pod \"ovn-copy-data\" (UID: \"2fa144db-c324-4fc0-9076-a6704fc1b00b\") " pod="openstack/ovn-copy-data" Jan 30 14:30:44 crc kubenswrapper[5039]: I0130 14:30:44.294750 5039 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 14:30:44 crc kubenswrapper[5039]: I0130 14:30:44.294780 5039 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-fb6e0a65-399c-42b5-86ff-9d74a3fae1e7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fb6e0a65-399c-42b5-86ff-9d74a3fae1e7\") pod \"ovn-copy-data\" (UID: \"2fa144db-c324-4fc0-9076-a6704fc1b00b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2b69c381c660e6ef9c1324ff887dab0dd51afc76a8b1af30d4ca42ed269d880c/globalmount\"" pod="openstack/ovn-copy-data" Jan 30 14:30:44 crc kubenswrapper[5039]: I0130 14:30:44.297622 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/2fa144db-c324-4fc0-9076-a6704fc1b00b-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"2fa144db-c324-4fc0-9076-a6704fc1b00b\") " pod="openstack/ovn-copy-data" Jan 30 14:30:44 crc kubenswrapper[5039]: I0130 14:30:44.312786 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnz5x\" (UniqueName: \"kubernetes.io/projected/2fa144db-c324-4fc0-9076-a6704fc1b00b-kube-api-access-mnz5x\") pod \"ovn-copy-data\" (UID: \"2fa144db-c324-4fc0-9076-a6704fc1b00b\") " pod="openstack/ovn-copy-data" Jan 30 14:30:44 crc kubenswrapper[5039]: I0130 14:30:44.324650 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-fb6e0a65-399c-42b5-86ff-9d74a3fae1e7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fb6e0a65-399c-42b5-86ff-9d74a3fae1e7\") pod \"ovn-copy-data\" (UID: \"2fa144db-c324-4fc0-9076-a6704fc1b00b\") " pod="openstack/ovn-copy-data" Jan 30 14:30:44 crc kubenswrapper[5039]: I0130 14:30:44.392574 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Jan 30 14:30:44 crc kubenswrapper[5039]: I0130 14:30:44.976269 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Jan 30 14:30:45 crc kubenswrapper[5039]: I0130 14:30:45.189279 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"2fa144db-c324-4fc0-9076-a6704fc1b00b","Type":"ContainerStarted","Data":"c98d9a51c97013766dc8676124266ae5989635002c0d2bedd21ff38e6c98bd11"} Jan 30 14:30:45 crc kubenswrapper[5039]: I0130 14:30:45.189623 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"2fa144db-c324-4fc0-9076-a6704fc1b00b","Type":"ContainerStarted","Data":"da7c2c3ad7681c46ce37b8d2b9cb3f20c87fefd4844b2d3b5be8acaabc094ff0"} Jan 30 14:30:49 crc kubenswrapper[5039]: I0130 14:30:49.872089 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-copy-data" podStartSLOduration=6.872071626 podStartE2EDuration="6.872071626s" podCreationTimestamp="2026-01-30 14:30:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:30:45.203788337 +0000 UTC m=+5209.864469564" watchObservedRunningTime="2026-01-30 14:30:49.872071626 +0000 UTC m=+5214.532752843" Jan 30 14:30:49 crc kubenswrapper[5039]: I0130 14:30:49.876721 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 30 14:30:49 crc kubenswrapper[5039]: I0130 14:30:49.878217 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 30 14:30:49 crc kubenswrapper[5039]: I0130 14:30:49.880726 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 30 14:30:49 crc kubenswrapper[5039]: I0130 14:30:49.881093 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-7mm4v" Jan 30 14:30:49 crc kubenswrapper[5039]: I0130 14:30:49.890425 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 30 14:30:49 crc kubenswrapper[5039]: I0130 14:30:49.905571 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 30 14:30:50 crc kubenswrapper[5039]: I0130 14:30:50.012739 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b2601f1-8fcd-4cf8-8e60-9c95785f395b-scripts\") pod \"ovn-northd-0\" (UID: \"3b2601f1-8fcd-4cf8-8e60-9c95785f395b\") " pod="openstack/ovn-northd-0" Jan 30 14:30:50 crc kubenswrapper[5039]: I0130 14:30:50.012793 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b2601f1-8fcd-4cf8-8e60-9c95785f395b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"3b2601f1-8fcd-4cf8-8e60-9c95785f395b\") " pod="openstack/ovn-northd-0" Jan 30 14:30:50 crc kubenswrapper[5039]: I0130 14:30:50.012839 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b2601f1-8fcd-4cf8-8e60-9c95785f395b-config\") pod \"ovn-northd-0\" (UID: \"3b2601f1-8fcd-4cf8-8e60-9c95785f395b\") " pod="openstack/ovn-northd-0" Jan 30 14:30:50 crc kubenswrapper[5039]: I0130 14:30:50.012914 5039 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c88k2\" (UniqueName: \"kubernetes.io/projected/3b2601f1-8fcd-4cf8-8e60-9c95785f395b-kube-api-access-c88k2\") pod \"ovn-northd-0\" (UID: \"3b2601f1-8fcd-4cf8-8e60-9c95785f395b\") " pod="openstack/ovn-northd-0" Jan 30 14:30:50 crc kubenswrapper[5039]: I0130 14:30:50.012954 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3b2601f1-8fcd-4cf8-8e60-9c95785f395b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"3b2601f1-8fcd-4cf8-8e60-9c95785f395b\") " pod="openstack/ovn-northd-0" Jan 30 14:30:50 crc kubenswrapper[5039]: I0130 14:30:50.114096 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c88k2\" (UniqueName: \"kubernetes.io/projected/3b2601f1-8fcd-4cf8-8e60-9c95785f395b-kube-api-access-c88k2\") pod \"ovn-northd-0\" (UID: \"3b2601f1-8fcd-4cf8-8e60-9c95785f395b\") " pod="openstack/ovn-northd-0" Jan 30 14:30:50 crc kubenswrapper[5039]: I0130 14:30:50.114176 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3b2601f1-8fcd-4cf8-8e60-9c95785f395b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"3b2601f1-8fcd-4cf8-8e60-9c95785f395b\") " pod="openstack/ovn-northd-0" Jan 30 14:30:50 crc kubenswrapper[5039]: I0130 14:30:50.114238 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b2601f1-8fcd-4cf8-8e60-9c95785f395b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"3b2601f1-8fcd-4cf8-8e60-9c95785f395b\") " pod="openstack/ovn-northd-0" Jan 30 14:30:50 crc kubenswrapper[5039]: I0130 14:30:50.114256 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b2601f1-8fcd-4cf8-8e60-9c95785f395b-scripts\") pod \"ovn-northd-0\" (UID: \"3b2601f1-8fcd-4cf8-8e60-9c95785f395b\") " pod="openstack/ovn-northd-0" Jan 30 14:30:50 crc kubenswrapper[5039]: I0130 14:30:50.114287 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b2601f1-8fcd-4cf8-8e60-9c95785f395b-config\") pod \"ovn-northd-0\" (UID: \"3b2601f1-8fcd-4cf8-8e60-9c95785f395b\") " pod="openstack/ovn-northd-0" Jan 30 14:30:50 crc kubenswrapper[5039]: I0130 14:30:50.114709 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3b2601f1-8fcd-4cf8-8e60-9c95785f395b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"3b2601f1-8fcd-4cf8-8e60-9c95785f395b\") " pod="openstack/ovn-northd-0" Jan 30 14:30:50 crc kubenswrapper[5039]: I0130 14:30:50.115339 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b2601f1-8fcd-4cf8-8e60-9c95785f395b-config\") pod \"ovn-northd-0\" (UID: \"3b2601f1-8fcd-4cf8-8e60-9c95785f395b\") " pod="openstack/ovn-northd-0" Jan 30 14:30:50 crc kubenswrapper[5039]: I0130 14:30:50.115339 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b2601f1-8fcd-4cf8-8e60-9c95785f395b-scripts\") pod \"ovn-northd-0\" (UID: \"3b2601f1-8fcd-4cf8-8e60-9c95785f395b\") " pod="openstack/ovn-northd-0" Jan 30 14:30:50 crc kubenswrapper[5039]: I0130 14:30:50.120960 5039 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b2601f1-8fcd-4cf8-8e60-9c95785f395b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"3b2601f1-8fcd-4cf8-8e60-9c95785f395b\") " pod="openstack/ovn-northd-0" Jan 30 14:30:50 crc kubenswrapper[5039]: I0130 14:30:50.138474 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c88k2\" (UniqueName: \"kubernetes.io/projected/3b2601f1-8fcd-4cf8-8e60-9c95785f395b-kube-api-access-c88k2\") pod \"ovn-northd-0\" (UID: \"3b2601f1-8fcd-4cf8-8e60-9c95785f395b\") " pod="openstack/ovn-northd-0" Jan 30 14:30:50 crc kubenswrapper[5039]: I0130 14:30:50.208069 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 30 14:30:50 crc kubenswrapper[5039]: W0130 14:30:50.662925 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b2601f1_8fcd_4cf8_8e60_9c95785f395b.slice/crio-136ddf05317ec79d31ad505ddb38e936e68173225f60de8f81addbbc86c3bd1d WatchSource:0}: Error finding container 136ddf05317ec79d31ad505ddb38e936e68173225f60de8f81addbbc86c3bd1d: Status 404 returned error can't find the container with id 136ddf05317ec79d31ad505ddb38e936e68173225f60de8f81addbbc86c3bd1d Jan 30 14:30:50 crc kubenswrapper[5039]: I0130 14:30:50.663268 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 30 14:30:51 crc kubenswrapper[5039]: I0130 14:30:51.239571 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"3b2601f1-8fcd-4cf8-8e60-9c95785f395b","Type":"ContainerStarted","Data":"e315a6cedca569193a89b5705edd471f6d4ae0e471139cac304bef1e50860880"} Jan 30 14:30:51 crc kubenswrapper[5039]: I0130 14:30:51.239979 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"3b2601f1-8fcd-4cf8-8e60-9c95785f395b","Type":"ContainerStarted","Data":"c2fcc2dbbb180157cb3d5fe294940e627a789b0253438be90b859fe90562ab01"} Jan 30 14:30:51 crc kubenswrapper[5039]: I0130 14:30:51.240001 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 30 14:30:51 crc kubenswrapper[5039]: I0130 14:30:51.240025 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"3b2601f1-8fcd-4cf8-8e60-9c95785f395b","Type":"ContainerStarted","Data":"136ddf05317ec79d31ad505ddb38e936e68173225f60de8f81addbbc86c3bd1d"} Jan 30 14:30:51 crc kubenswrapper[5039]: I0130 14:30:51.267063 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.267042533 podStartE2EDuration="2.267042533s" podCreationTimestamp="2026-01-30 14:30:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:30:51.26104156 +0000 UTC m=+5215.921722787" watchObservedRunningTime="2026-01-30 14:30:51.267042533 +0000 UTC m=+5215.927723770" Jan 30 14:30:51 crc kubenswrapper[5039]: I0130 14:30:51.515079 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79d45df9fc-dz5zf" Jan 30 14:30:51 crc kubenswrapper[5039]: I0130 14:30:51.615224 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-psfj6"] Jan 30 14:30:51 crc kubenswrapper[5039]: I0130 14:30:51.615576 5039 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/dnsmasq-dns-5b7946d7b9-psfj6" podUID="3e4c5897-aa67-4e1d-bd75-2431b346e43c" containerName="dnsmasq-dns" containerID="cri-o://7d47901878d1fe215eb1855db4ed131d94c6539e00f05858cd8d214a20475089" gracePeriod=10 Jan 30 14:30:52 crc kubenswrapper[5039]: I0130 14:30:52.083157 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b7946d7b9-psfj6" Jan 30 14:30:52 crc kubenswrapper[5039]: I0130 14:30:52.144876 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e4c5897-aa67-4e1d-bd75-2431b346e43c-dns-svc\") pod \"3e4c5897-aa67-4e1d-bd75-2431b346e43c\" (UID: \"3e4c5897-aa67-4e1d-bd75-2431b346e43c\") " Jan 30 14:30:52 crc kubenswrapper[5039]: I0130 14:30:52.145005 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpb7d\" (UniqueName: \"kubernetes.io/projected/3e4c5897-aa67-4e1d-bd75-2431b346e43c-kube-api-access-cpb7d\") pod \"3e4c5897-aa67-4e1d-bd75-2431b346e43c\" (UID: \"3e4c5897-aa67-4e1d-bd75-2431b346e43c\") " Jan 30 14:30:52 crc kubenswrapper[5039]: I0130 14:30:52.145084 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e4c5897-aa67-4e1d-bd75-2431b346e43c-config\") pod \"3e4c5897-aa67-4e1d-bd75-2431b346e43c\" (UID: \"3e4c5897-aa67-4e1d-bd75-2431b346e43c\") " Jan 30 14:30:52 crc kubenswrapper[5039]: I0130 14:30:52.150718 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e4c5897-aa67-4e1d-bd75-2431b346e43c-kube-api-access-cpb7d" (OuterVolumeSpecName: "kube-api-access-cpb7d") pod "3e4c5897-aa67-4e1d-bd75-2431b346e43c" (UID: "3e4c5897-aa67-4e1d-bd75-2431b346e43c"). InnerVolumeSpecName "kube-api-access-cpb7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:30:52 crc kubenswrapper[5039]: I0130 14:30:52.189805 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e4c5897-aa67-4e1d-bd75-2431b346e43c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3e4c5897-aa67-4e1d-bd75-2431b346e43c" (UID: "3e4c5897-aa67-4e1d-bd75-2431b346e43c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:30:52 crc kubenswrapper[5039]: I0130 14:30:52.195284 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e4c5897-aa67-4e1d-bd75-2431b346e43c-config" (OuterVolumeSpecName: "config") pod "3e4c5897-aa67-4e1d-bd75-2431b346e43c" (UID: "3e4c5897-aa67-4e1d-bd75-2431b346e43c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:30:52 crc kubenswrapper[5039]: I0130 14:30:52.247351 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpb7d\" (UniqueName: \"kubernetes.io/projected/3e4c5897-aa67-4e1d-bd75-2431b346e43c-kube-api-access-cpb7d\") on node \"crc\" DevicePath \"\"" Jan 30 14:30:52 crc kubenswrapper[5039]: I0130 14:30:52.247379 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e4c5897-aa67-4e1d-bd75-2431b346e43c-config\") on node \"crc\" DevicePath \"\"" Jan 30 14:30:52 crc kubenswrapper[5039]: I0130 14:30:52.247388 5039 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e4c5897-aa67-4e1d-bd75-2431b346e43c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 14:30:52 crc kubenswrapper[5039]: I0130 14:30:52.249283 5039 generic.go:334] "Generic (PLEG): container finished" podID="3e4c5897-aa67-4e1d-bd75-2431b346e43c" containerID="7d47901878d1fe215eb1855db4ed131d94c6539e00f05858cd8d214a20475089" exitCode=0 Jan 30 14:30:52 crc kubenswrapper[5039]: I0130 14:30:52.249991 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b7946d7b9-psfj6" Jan 30 14:30:52 crc kubenswrapper[5039]: I0130 14:30:52.250171 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-psfj6" event={"ID":"3e4c5897-aa67-4e1d-bd75-2431b346e43c","Type":"ContainerDied","Data":"7d47901878d1fe215eb1855db4ed131d94c6539e00f05858cd8d214a20475089"} Jan 30 14:30:52 crc kubenswrapper[5039]: I0130 14:30:52.250203 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-psfj6" event={"ID":"3e4c5897-aa67-4e1d-bd75-2431b346e43c","Type":"ContainerDied","Data":"c95043f7ef80939f8ed4554811f0455bbc8df47a568054dd1add5edff0ec3f7d"} Jan 30 14:30:52 crc kubenswrapper[5039]: I0130 14:30:52.250220 5039 scope.go:117] "RemoveContainer" containerID="7d47901878d1fe215eb1855db4ed131d94c6539e00f05858cd8d214a20475089" Jan 30 14:30:52 crc kubenswrapper[5039]: I0130 14:30:52.275855 5039 scope.go:117] "RemoveContainer" containerID="25c968da1280eaf42e5ece145b6a0b164ccc522c76c3b493a8bca56755e4c5a7" Jan 30 14:30:52 crc kubenswrapper[5039]: I0130 14:30:52.287265 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-psfj6"] Jan 30 14:30:52 crc kubenswrapper[5039]: I0130 14:30:52.293032 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-psfj6"] Jan 30 14:30:52 crc kubenswrapper[5039]: I0130 14:30:52.318047 5039 scope.go:117] "RemoveContainer" containerID="7d47901878d1fe215eb1855db4ed131d94c6539e00f05858cd8d214a20475089" Jan 30 14:30:52 crc kubenswrapper[5039]: E0130 14:30:52.318610 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d47901878d1fe215eb1855db4ed131d94c6539e00f05858cd8d214a20475089\": container with ID starting with 7d47901878d1fe215eb1855db4ed131d94c6539e00f05858cd8d214a20475089 not found: ID does not exist" containerID="7d47901878d1fe215eb1855db4ed131d94c6539e00f05858cd8d214a20475089" Jan 30 14:30:52 crc kubenswrapper[5039]: I0130 14:30:52.318683 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d47901878d1fe215eb1855db4ed131d94c6539e00f05858cd8d214a20475089"} err="failed to get container status 
\"7d47901878d1fe215eb1855db4ed131d94c6539e00f05858cd8d214a20475089\": rpc error: code = NotFound desc = could not find container \"7d47901878d1fe215eb1855db4ed131d94c6539e00f05858cd8d214a20475089\": container with ID starting with 7d47901878d1fe215eb1855db4ed131d94c6539e00f05858cd8d214a20475089 not found: ID does not exist" Jan 30 14:30:52 crc kubenswrapper[5039]: I0130 14:30:52.318722 5039 scope.go:117] "RemoveContainer" containerID="25c968da1280eaf42e5ece145b6a0b164ccc522c76c3b493a8bca56755e4c5a7" Jan 30 14:30:52 crc kubenswrapper[5039]: E0130 14:30:52.319197 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25c968da1280eaf42e5ece145b6a0b164ccc522c76c3b493a8bca56755e4c5a7\": container with ID starting with 25c968da1280eaf42e5ece145b6a0b164ccc522c76c3b493a8bca56755e4c5a7 not found: ID does not exist" containerID="25c968da1280eaf42e5ece145b6a0b164ccc522c76c3b493a8bca56755e4c5a7" Jan 30 14:30:52 crc kubenswrapper[5039]: I0130 14:30:52.319232 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25c968da1280eaf42e5ece145b6a0b164ccc522c76c3b493a8bca56755e4c5a7"} err="failed to get container status \"25c968da1280eaf42e5ece145b6a0b164ccc522c76c3b493a8bca56755e4c5a7\": rpc error: code = NotFound desc = could not find container \"25c968da1280eaf42e5ece145b6a0b164ccc522c76c3b493a8bca56755e4c5a7\": container with ID starting with 25c968da1280eaf42e5ece145b6a0b164ccc522c76c3b493a8bca56755e4c5a7 not found: ID does not exist" Jan 30 14:30:54 crc kubenswrapper[5039]: I0130 14:30:54.102746 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e4c5897-aa67-4e1d-bd75-2431b346e43c" path="/var/lib/kubelet/pods/3e4c5897-aa67-4e1d-bd75-2431b346e43c/volumes" Jan 30 14:30:54 crc kubenswrapper[5039]: I0130 14:30:54.799644 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-lmw95"] Jan 30 14:30:54 crc kubenswrapper[5039]: E0130 14:30:54.799965 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e4c5897-aa67-4e1d-bd75-2431b346e43c" containerName="init" Jan 30 14:30:54 crc kubenswrapper[5039]: I0130 14:30:54.799979 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e4c5897-aa67-4e1d-bd75-2431b346e43c" containerName="init" Jan 30 14:30:54 crc kubenswrapper[5039]: E0130 14:30:54.800053 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e4c5897-aa67-4e1d-bd75-2431b346e43c" containerName="dnsmasq-dns" Jan 30 14:30:54 crc kubenswrapper[5039]: I0130 14:30:54.800059 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e4c5897-aa67-4e1d-bd75-2431b346e43c" containerName="dnsmasq-dns" Jan 30 14:30:54 crc kubenswrapper[5039]: I0130 14:30:54.800212 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e4c5897-aa67-4e1d-bd75-2431b346e43c" containerName="dnsmasq-dns" Jan 30 14:30:54 crc kubenswrapper[5039]: I0130 14:30:54.800699 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-lmw95" Jan 30 14:30:54 crc kubenswrapper[5039]: I0130 14:30:54.811835 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-lmw95"] Jan 30 14:30:54 crc kubenswrapper[5039]: I0130 14:30:54.887330 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6-operator-scripts\") pod \"keystone-db-create-lmw95\" (UID: \"b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6\") " pod="openstack/keystone-db-create-lmw95" Jan 30 14:30:54 crc kubenswrapper[5039]: I0130 14:30:54.887372 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnxw8\" (UniqueName: \"kubernetes.io/projected/b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6-kube-api-access-pnxw8\") pod \"keystone-db-create-lmw95\" (UID: \"b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6\") " pod="openstack/keystone-db-create-lmw95" Jan 30 14:30:54 crc kubenswrapper[5039]: I0130 14:30:54.905001 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6c90-account-create-update-rcrpm"] Jan 30 14:30:54 crc kubenswrapper[5039]: I0130 14:30:54.906249 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6c90-account-create-update-rcrpm" Jan 30 14:30:54 crc kubenswrapper[5039]: I0130 14:30:54.911124 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 30 14:30:54 crc kubenswrapper[5039]: I0130 14:30:54.914619 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6c90-account-create-update-rcrpm"] Jan 30 14:30:54 crc kubenswrapper[5039]: I0130 14:30:54.989911 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/186c0ea5-7e75-40a9-8304-487243cd940f-operator-scripts\") pod \"keystone-6c90-account-create-update-rcrpm\" (UID: \"186c0ea5-7e75-40a9-8304-487243cd940f\") " pod="openstack/keystone-6c90-account-create-update-rcrpm" Jan 30 14:30:54 crc kubenswrapper[5039]: I0130 14:30:54.989959 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2s76\" (UniqueName: \"kubernetes.io/projected/186c0ea5-7e75-40a9-8304-487243cd940f-kube-api-access-s2s76\") pod \"keystone-6c90-account-create-update-rcrpm\" (UID: \"186c0ea5-7e75-40a9-8304-487243cd940f\") " pod="openstack/keystone-6c90-account-create-update-rcrpm" Jan 30 14:30:54 crc kubenswrapper[5039]: I0130 14:30:54.990046 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6-operator-scripts\") pod \"keystone-db-create-lmw95\" (UID: \"b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6\") " pod="openstack/keystone-db-create-lmw95" Jan 30 14:30:54 crc kubenswrapper[5039]: I0130 14:30:54.990065 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnxw8\" (UniqueName: \"kubernetes.io/projected/b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6-kube-api-access-pnxw8\") pod \"keystone-db-create-lmw95\" (UID: \"b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6\") " pod="openstack/keystone-db-create-lmw95" Jan 30 14:30:54 crc kubenswrapper[5039]: I0130 14:30:54.991155 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6-operator-scripts\") pod \"keystone-db-create-lmw95\" (UID: \"b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6\") " pod="openstack/keystone-db-create-lmw95" Jan 30 14:30:55 crc kubenswrapper[5039]: I0130 14:30:55.008774 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnxw8\" (UniqueName: \"kubernetes.io/projected/b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6-kube-api-access-pnxw8\") pod \"keystone-db-create-lmw95\" (UID: \"b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6\") " pod="openstack/keystone-db-create-lmw95" Jan 30 14:30:55 crc kubenswrapper[5039]: I0130 14:30:55.091244 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/186c0ea5-7e75-40a9-8304-487243cd940f-operator-scripts\") pod \"keystone-6c90-account-create-update-rcrpm\" (UID: \"186c0ea5-7e75-40a9-8304-487243cd940f\") " pod="openstack/keystone-6c90-account-create-update-rcrpm" Jan 30 14:30:55 crc kubenswrapper[5039]: I0130 14:30:55.091302 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2s76\" (UniqueName: \"kubernetes.io/projected/186c0ea5-7e75-40a9-8304-487243cd940f-kube-api-access-s2s76\") pod \"keystone-6c90-account-create-update-rcrpm\" (UID: \"186c0ea5-7e75-40a9-8304-487243cd940f\") " pod="openstack/keystone-6c90-account-create-update-rcrpm" Jan 30 14:30:55 crc kubenswrapper[5039]: I0130 14:30:55.092118 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/186c0ea5-7e75-40a9-8304-487243cd940f-operator-scripts\") pod \"keystone-6c90-account-create-update-rcrpm\" (UID: \"186c0ea5-7e75-40a9-8304-487243cd940f\") " pod="openstack/keystone-6c90-account-create-update-rcrpm" Jan 30 14:30:55 crc kubenswrapper[5039]: I0130 14:30:55.108336 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2s76\" (UniqueName: \"kubernetes.io/projected/186c0ea5-7e75-40a9-8304-487243cd940f-kube-api-access-s2s76\") pod \"keystone-6c90-account-create-update-rcrpm\" (UID: \"186c0ea5-7e75-40a9-8304-487243cd940f\") " pod="openstack/keystone-6c90-account-create-update-rcrpm" Jan 30 14:30:55 crc kubenswrapper[5039]: I0130 14:30:55.118723 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-lmw95" Jan 30 14:30:55 crc kubenswrapper[5039]: I0130 14:30:55.257923 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6c90-account-create-update-rcrpm" Jan 30 14:30:55 crc kubenswrapper[5039]: I0130 14:30:55.573844 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-lmw95"] Jan 30 14:30:55 crc kubenswrapper[5039]: W0130 14:30:55.578319 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb551f7ea_ff24_4c3d_aeaf_2625d07d8ea6.slice/crio-ebf2ad7cea006f466e562ceb242ec1a352bd11938cdb49f12dc2d311c6b11650 WatchSource:0}: Error finding container ebf2ad7cea006f466e562ceb242ec1a352bd11938cdb49f12dc2d311c6b11650: Status 404 returned error can't find the container with id ebf2ad7cea006f466e562ceb242ec1a352bd11938cdb49f12dc2d311c6b11650 Jan 30 14:30:55 crc kubenswrapper[5039]: I0130 14:30:55.723216 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6c90-account-create-update-rcrpm"] Jan 30 14:30:55 crc kubenswrapper[5039]: W0130 14:30:55.727193 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod186c0ea5_7e75_40a9_8304_487243cd940f.slice/crio-a65ba5ab8b642213e65c6dde7bf4f9810d84dd7317a91f244989c0021bc06969 WatchSource:0}: Error finding container a65ba5ab8b642213e65c6dde7bf4f9810d84dd7317a91f244989c0021bc06969: Status 404 returned error can't find the container with id a65ba5ab8b642213e65c6dde7bf4f9810d84dd7317a91f244989c0021bc06969 Jan 30 14:30:56 crc kubenswrapper[5039]: I0130 14:30:56.283934 5039 generic.go:334] "Generic (PLEG): container finished" podID="186c0ea5-7e75-40a9-8304-487243cd940f" containerID="53538287f79b4734c8a51217b374a1cc47068403db5da97d6e71ccf3200f3c50" exitCode=0 Jan 30 14:30:56 crc kubenswrapper[5039]: I0130 14:30:56.284141 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6c90-account-create-update-rcrpm" event={"ID":"186c0ea5-7e75-40a9-8304-487243cd940f","Type":"ContainerDied","Data":"53538287f79b4734c8a51217b374a1cc47068403db5da97d6e71ccf3200f3c50"} Jan 30 14:30:56 crc kubenswrapper[5039]: I0130 14:30:56.284223 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6c90-account-create-update-rcrpm" event={"ID":"186c0ea5-7e75-40a9-8304-487243cd940f","Type":"ContainerStarted","Data":"a65ba5ab8b642213e65c6dde7bf4f9810d84dd7317a91f244989c0021bc06969"} Jan 30 14:30:56 crc kubenswrapper[5039]: I0130 14:30:56.286091 5039 generic.go:334] "Generic (PLEG): container finished" podID="b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6" containerID="1f6d1eee9c278ff894f6e696f772fd3c9336d635aefc396e499299a72eea423b" exitCode=0 Jan 30 14:30:56 crc kubenswrapper[5039]: I0130 14:30:56.286337 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-lmw95" event={"ID":"b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6","Type":"ContainerDied","Data":"1f6d1eee9c278ff894f6e696f772fd3c9336d635aefc396e499299a72eea423b"} Jan 30 14:30:56 crc kubenswrapper[5039]: I0130 14:30:56.286365 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-lmw95" event={"ID":"b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6","Type":"ContainerStarted","Data":"ebf2ad7cea006f466e562ceb242ec1a352bd11938cdb49f12dc2d311c6b11650"} Jan 30 14:30:57 crc kubenswrapper[5039]: I0130 14:30:57.680101 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6c90-account-create-update-rcrpm" Jan 30 14:30:57 crc kubenswrapper[5039]: I0130 14:30:57.742647 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2s76\" (UniqueName: \"kubernetes.io/projected/186c0ea5-7e75-40a9-8304-487243cd940f-kube-api-access-s2s76\") pod \"186c0ea5-7e75-40a9-8304-487243cd940f\" (UID: \"186c0ea5-7e75-40a9-8304-487243cd940f\") " Jan 30 14:30:57 crc kubenswrapper[5039]: I0130 14:30:57.742880 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/186c0ea5-7e75-40a9-8304-487243cd940f-operator-scripts\") pod \"186c0ea5-7e75-40a9-8304-487243cd940f\" (UID: \"186c0ea5-7e75-40a9-8304-487243cd940f\") " Jan 30 14:30:57 crc kubenswrapper[5039]: I0130 14:30:57.743703 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/186c0ea5-7e75-40a9-8304-487243cd940f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "186c0ea5-7e75-40a9-8304-487243cd940f" (UID: "186c0ea5-7e75-40a9-8304-487243cd940f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:30:57 crc kubenswrapper[5039]: I0130 14:30:57.751681 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/186c0ea5-7e75-40a9-8304-487243cd940f-kube-api-access-s2s76" (OuterVolumeSpecName: "kube-api-access-s2s76") pod "186c0ea5-7e75-40a9-8304-487243cd940f" (UID: "186c0ea5-7e75-40a9-8304-487243cd940f"). InnerVolumeSpecName "kube-api-access-s2s76". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:30:57 crc kubenswrapper[5039]: I0130 14:30:57.800998 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-lmw95" Jan 30 14:30:57 crc kubenswrapper[5039]: I0130 14:30:57.844653 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6-operator-scripts\") pod \"b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6\" (UID: \"b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6\") " Jan 30 14:30:57 crc kubenswrapper[5039]: I0130 14:30:57.844735 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnxw8\" (UniqueName: \"kubernetes.io/projected/b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6-kube-api-access-pnxw8\") pod \"b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6\" (UID: \"b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6\") " Jan 30 14:30:57 crc kubenswrapper[5039]: I0130 14:30:57.845186 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/186c0ea5-7e75-40a9-8304-487243cd940f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:30:57 crc kubenswrapper[5039]: I0130 14:30:57.845211 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2s76\" (UniqueName: \"kubernetes.io/projected/186c0ea5-7e75-40a9-8304-487243cd940f-kube-api-access-s2s76\") on node \"crc\" DevicePath \"\"" Jan 30 14:30:57 crc kubenswrapper[5039]: I0130 14:30:57.845205 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6" (UID: "b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:30:57 crc kubenswrapper[5039]: I0130 14:30:57.848551 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6-kube-api-access-pnxw8" (OuterVolumeSpecName: "kube-api-access-pnxw8") pod "b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6" (UID: "b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6"). InnerVolumeSpecName "kube-api-access-pnxw8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:30:57 crc kubenswrapper[5039]: I0130 14:30:57.947810 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:30:57 crc kubenswrapper[5039]: I0130 14:30:57.947857 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnxw8\" (UniqueName: \"kubernetes.io/projected/b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6-kube-api-access-pnxw8\") on node \"crc\" DevicePath \"\"" Jan 30 14:30:58 crc kubenswrapper[5039]: I0130 14:30:58.300321 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-lmw95" event={"ID":"b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6","Type":"ContainerDied","Data":"ebf2ad7cea006f466e562ceb242ec1a352bd11938cdb49f12dc2d311c6b11650"} Jan 30 14:30:58 crc kubenswrapper[5039]: I0130 14:30:58.300360 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ebf2ad7cea006f466e562ceb242ec1a352bd11938cdb49f12dc2d311c6b11650" Jan 30 14:30:58 crc kubenswrapper[5039]: I0130 14:30:58.300382 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-lmw95" Jan 30 14:30:58 crc kubenswrapper[5039]: I0130 14:30:58.302498 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6c90-account-create-update-rcrpm" event={"ID":"186c0ea5-7e75-40a9-8304-487243cd940f","Type":"ContainerDied","Data":"a65ba5ab8b642213e65c6dde7bf4f9810d84dd7317a91f244989c0021bc06969"} Jan 30 14:30:58 crc kubenswrapper[5039]: I0130 14:30:58.302524 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a65ba5ab8b642213e65c6dde7bf4f9810d84dd7317a91f244989c0021bc06969" Jan 30 14:30:58 crc kubenswrapper[5039]: I0130 14:30:58.302541 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6c90-account-create-update-rcrpm" Jan 30 14:31:00 crc kubenswrapper[5039]: I0130 14:31:00.259309 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 30 14:31:00 crc kubenswrapper[5039]: I0130 14:31:00.419200 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-qshch"] Jan 30 14:31:00 crc kubenswrapper[5039]: E0130 14:31:00.419569 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="186c0ea5-7e75-40a9-8304-487243cd940f" containerName="mariadb-account-create-update" Jan 30 14:31:00 crc kubenswrapper[5039]: I0130 14:31:00.419592 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="186c0ea5-7e75-40a9-8304-487243cd940f" containerName="mariadb-account-create-update" Jan 30 14:31:00 crc kubenswrapper[5039]: E0130 14:31:00.419616 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6" containerName="mariadb-database-create" Jan 30 14:31:00 crc kubenswrapper[5039]: I0130 14:31:00.419626 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6" containerName="mariadb-database-create" Jan 30 14:31:00 crc kubenswrapper[5039]: I0130 14:31:00.419831 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="186c0ea5-7e75-40a9-8304-487243cd940f" containerName="mariadb-account-create-update" Jan 30 14:31:00 crc kubenswrapper[5039]: I0130 14:31:00.419851 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6" containerName="mariadb-database-create" Jan 30 14:31:00 crc kubenswrapper[5039]: I0130 14:31:00.420500 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-qshch" Jan 30 14:31:00 crc kubenswrapper[5039]: I0130 14:31:00.422622 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 14:31:00 crc kubenswrapper[5039]: I0130 14:31:00.422948 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 14:31:00 crc kubenswrapper[5039]: I0130 14:31:00.423285 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 14:31:00 crc kubenswrapper[5039]: I0130 14:31:00.423627 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-w6fcf" Jan 30 14:31:00 crc kubenswrapper[5039]: I0130 14:31:00.433729 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-qshch"] Jan 30 14:31:00 crc kubenswrapper[5039]: I0130 14:31:00.489170 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kdgw\" (UniqueName: \"kubernetes.io/projected/dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec-kube-api-access-7kdgw\") pod \"keystone-db-sync-qshch\" (UID: \"dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec\") " pod="openstack/keystone-db-sync-qshch" Jan 30 14:31:00 crc kubenswrapper[5039]: I0130 14:31:00.489424 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec-combined-ca-bundle\") pod \"keystone-db-sync-qshch\" (UID: \"dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec\") " pod="openstack/keystone-db-sync-qshch" Jan 30 14:31:00 crc kubenswrapper[5039]: I0130 14:31:00.489544 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec-config-data\") pod \"keystone-db-sync-qshch\" (UID: \"dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec\") " pod="openstack/keystone-db-sync-qshch" Jan 30 14:31:00 crc kubenswrapper[5039]: I0130 14:31:00.591172 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kdgw\" (UniqueName: \"kubernetes.io/projected/dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec-kube-api-access-7kdgw\") pod \"keystone-db-sync-qshch\" (UID: \"dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec\") " pod="openstack/keystone-db-sync-qshch" Jan 30 14:31:00 crc kubenswrapper[5039]: I0130 14:31:00.591346 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec-combined-ca-bundle\") pod \"keystone-db-sync-qshch\" (UID: \"dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec\") " pod="openstack/keystone-db-sync-qshch" Jan 30 14:31:00 crc kubenswrapper[5039]: I0130 14:31:00.591424 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec-config-data\") pod \"keystone-db-sync-qshch\" (UID: \"dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec\") " pod="openstack/keystone-db-sync-qshch" Jan 30 14:31:00 crc kubenswrapper[5039]: I0130 14:31:00.597130 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec-combined-ca-bundle\") pod \"keystone-db-sync-qshch\" (UID: \"dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec\") " 
pod="openstack/keystone-db-sync-qshch" Jan 30 14:31:00 crc kubenswrapper[5039]: I0130 14:31:00.598028 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec-config-data\") pod \"keystone-db-sync-qshch\" (UID: \"dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec\") " pod="openstack/keystone-db-sync-qshch" Jan 30 14:31:00 crc kubenswrapper[5039]: I0130 14:31:00.608700 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kdgw\" (UniqueName: \"kubernetes.io/projected/dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec-kube-api-access-7kdgw\") pod \"keystone-db-sync-qshch\" (UID: \"dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec\") " pod="openstack/keystone-db-sync-qshch" Jan 30 14:31:00 crc kubenswrapper[5039]: I0130 14:31:00.742887 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-qshch" Jan 30 14:31:01 crc kubenswrapper[5039]: I0130 14:31:01.193516 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-qshch"] Jan 30 14:31:01 crc kubenswrapper[5039]: W0130 14:31:01.204596 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddbecfa43_cf6a_4f2f_bc2b_7ae9db8dd7ec.slice/crio-6326181a2a552be937625bc5a411402e1c8bc66bdcc31f9d75b515378e753839 WatchSource:0}: Error finding container 6326181a2a552be937625bc5a411402e1c8bc66bdcc31f9d75b515378e753839: Status 404 returned error can't find the container with id 6326181a2a552be937625bc5a411402e1c8bc66bdcc31f9d75b515378e753839 Jan 30 14:31:01 crc kubenswrapper[5039]: I0130 14:31:01.331772 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-qshch" event={"ID":"dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec","Type":"ContainerStarted","Data":"6326181a2a552be937625bc5a411402e1c8bc66bdcc31f9d75b515378e753839"} Jan 30 14:31:02 crc kubenswrapper[5039]: I0130 14:31:02.340402 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-qshch" event={"ID":"dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec","Type":"ContainerStarted","Data":"7b84dcdf5fbb8eb09f51094df81a56c5323af98da35d34c6575b7ddac424cbc8"} Jan 30 14:31:02 crc kubenswrapper[5039]: I0130 14:31:02.363429 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-qshch" podStartSLOduration=2.363408425 podStartE2EDuration="2.363408425s" podCreationTimestamp="2026-01-30 14:31:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:31:02.355647694 +0000 UTC m=+5227.016328921" watchObservedRunningTime="2026-01-30 14:31:02.363408425 +0000 UTC m=+5227.024089662" Jan 30 14:31:03 crc kubenswrapper[5039]: I0130 14:31:03.348297 5039 generic.go:334] "Generic (PLEG): container finished" podID="dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec" containerID="7b84dcdf5fbb8eb09f51094df81a56c5323af98da35d34c6575b7ddac424cbc8" exitCode=0 Jan 30 14:31:03 crc kubenswrapper[5039]: I0130 14:31:03.348406 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-qshch" event={"ID":"dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec","Type":"ContainerDied","Data":"7b84dcdf5fbb8eb09f51094df81a56c5323af98da35d34c6575b7ddac424cbc8"} Jan 30 14:31:04 crc kubenswrapper[5039]: I0130 14:31:04.724631 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-qshch" Jan 30 14:31:04 crc kubenswrapper[5039]: I0130 14:31:04.754732 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec-combined-ca-bundle\") pod \"dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec\" (UID: \"dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec\") " Jan 30 14:31:04 crc kubenswrapper[5039]: I0130 14:31:04.754814 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kdgw\" (UniqueName: \"kubernetes.io/projected/dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec-kube-api-access-7kdgw\") pod \"dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec\" (UID: \"dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec\") " Jan 30 14:31:04 crc kubenswrapper[5039]: I0130 14:31:04.754851 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec-config-data\") pod \"dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec\" (UID: \"dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec\") " Jan 30 14:31:04 crc kubenswrapper[5039]: I0130 14:31:04.766228 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec-kube-api-access-7kdgw" (OuterVolumeSpecName: "kube-api-access-7kdgw") pod "dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec" (UID: "dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec"). InnerVolumeSpecName "kube-api-access-7kdgw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:31:04 crc kubenswrapper[5039]: I0130 14:31:04.779379 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec" (UID: "dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:31:04 crc kubenswrapper[5039]: I0130 14:31:04.799031 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec-config-data" (OuterVolumeSpecName: "config-data") pod "dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec" (UID: "dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:31:04 crc kubenswrapper[5039]: I0130 14:31:04.857228 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:31:04 crc kubenswrapper[5039]: I0130 14:31:04.857279 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kdgw\" (UniqueName: \"kubernetes.io/projected/dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec-kube-api-access-7kdgw\") on node \"crc\" DevicePath \"\"" Jan 30 14:31:04 crc kubenswrapper[5039]: I0130 14:31:04.857292 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.380875 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-qshch" event={"ID":"dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec","Type":"ContainerDied","Data":"6326181a2a552be937625bc5a411402e1c8bc66bdcc31f9d75b515378e753839"} Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.380921 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6326181a2a552be937625bc5a411402e1c8bc66bdcc31f9d75b515378e753839" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.380992 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-qshch" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.623278 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bddff6f79-74x55"] Jan 30 14:31:05 crc kubenswrapper[5039]: E0130 14:31:05.623879 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec" containerName="keystone-db-sync" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.623971 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec" containerName="keystone-db-sync" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.624253 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec" containerName="keystone-db-sync" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.625267 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bddff6f79-74x55" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.650133 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-4rlpk"] Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.655559 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-4rlpk" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.659189 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.659420 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.659554 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.659903 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-w6fcf" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.663047 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.666789 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-4rlpk"] Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.668543 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1290eb86-72db-4605-82ed-5ce51d7bdd43-ovsdbserver-nb\") pod \"dnsmasq-dns-5bddff6f79-74x55\" (UID: \"1290eb86-72db-4605-82ed-5ce51d7bdd43\") " pod="openstack/dnsmasq-dns-5bddff6f79-74x55" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.668636 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpb6n\" (UniqueName: \"kubernetes.io/projected/1290eb86-72db-4605-82ed-5ce51d7bdd43-kube-api-access-fpb6n\") pod \"dnsmasq-dns-5bddff6f79-74x55\" (UID: \"1290eb86-72db-4605-82ed-5ce51d7bdd43\") " pod="openstack/dnsmasq-dns-5bddff6f79-74x55" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.668739 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1290eb86-72db-4605-82ed-5ce51d7bdd43-ovsdbserver-sb\") pod \"dnsmasq-dns-5bddff6f79-74x55\" (UID: \"1290eb86-72db-4605-82ed-5ce51d7bdd43\") " pod="openstack/dnsmasq-dns-5bddff6f79-74x55" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.668813 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1290eb86-72db-4605-82ed-5ce51d7bdd43-dns-svc\") pod \"dnsmasq-dns-5bddff6f79-74x55\" (UID: \"1290eb86-72db-4605-82ed-5ce51d7bdd43\") " pod="openstack/dnsmasq-dns-5bddff6f79-74x55" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.668845 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1290eb86-72db-4605-82ed-5ce51d7bdd43-config\") pod \"dnsmasq-dns-5bddff6f79-74x55\" (UID: \"1290eb86-72db-4605-82ed-5ce51d7bdd43\") " pod="openstack/dnsmasq-dns-5bddff6f79-74x55" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.690662 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bddff6f79-74x55"] Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.770917 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1290eb86-72db-4605-82ed-5ce51d7bdd43-ovsdbserver-nb\") pod \"dnsmasq-dns-5bddff6f79-74x55\" (UID: 
\"1290eb86-72db-4605-82ed-5ce51d7bdd43\") " pod="openstack/dnsmasq-dns-5bddff6f79-74x55" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.770996 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6179370b-6aa4-431d-9770-8ccc580ce2ff-scripts\") pod \"keystone-bootstrap-4rlpk\" (UID: \"6179370b-6aa4-431d-9770-8ccc580ce2ff\") " pod="openstack/keystone-bootstrap-4rlpk" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.771082 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpb6n\" (UniqueName: \"kubernetes.io/projected/1290eb86-72db-4605-82ed-5ce51d7bdd43-kube-api-access-fpb6n\") pod \"dnsmasq-dns-5bddff6f79-74x55\" (UID: \"1290eb86-72db-4605-82ed-5ce51d7bdd43\") " pod="openstack/dnsmasq-dns-5bddff6f79-74x55" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.771153 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6179370b-6aa4-431d-9770-8ccc580ce2ff-config-data\") pod \"keystone-bootstrap-4rlpk\" (UID: \"6179370b-6aa4-431d-9770-8ccc580ce2ff\") " pod="openstack/keystone-bootstrap-4rlpk" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.771238 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r48fz\" (UniqueName: \"kubernetes.io/projected/6179370b-6aa4-431d-9770-8ccc580ce2ff-kube-api-access-r48fz\") pod \"keystone-bootstrap-4rlpk\" (UID: \"6179370b-6aa4-431d-9770-8ccc580ce2ff\") " pod="openstack/keystone-bootstrap-4rlpk" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.771270 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6179370b-6aa4-431d-9770-8ccc580ce2ff-credential-keys\") pod \"keystone-bootstrap-4rlpk\" (UID: \"6179370b-6aa4-431d-9770-8ccc580ce2ff\") " pod="openstack/keystone-bootstrap-4rlpk" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.771298 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1290eb86-72db-4605-82ed-5ce51d7bdd43-ovsdbserver-sb\") pod \"dnsmasq-dns-5bddff6f79-74x55\" (UID: \"1290eb86-72db-4605-82ed-5ce51d7bdd43\") " pod="openstack/dnsmasq-dns-5bddff6f79-74x55" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.771444 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6179370b-6aa4-431d-9770-8ccc580ce2ff-combined-ca-bundle\") pod \"keystone-bootstrap-4rlpk\" (UID: \"6179370b-6aa4-431d-9770-8ccc580ce2ff\") " pod="openstack/keystone-bootstrap-4rlpk" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.771512 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1290eb86-72db-4605-82ed-5ce51d7bdd43-dns-svc\") pod \"dnsmasq-dns-5bddff6f79-74x55\" (UID: \"1290eb86-72db-4605-82ed-5ce51d7bdd43\") " pod="openstack/dnsmasq-dns-5bddff6f79-74x55" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.771537 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6179370b-6aa4-431d-9770-8ccc580ce2ff-fernet-keys\") pod \"keystone-bootstrap-4rlpk\" (UID: 
\"6179370b-6aa4-431d-9770-8ccc580ce2ff\") " pod="openstack/keystone-bootstrap-4rlpk" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.771574 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1290eb86-72db-4605-82ed-5ce51d7bdd43-config\") pod \"dnsmasq-dns-5bddff6f79-74x55\" (UID: \"1290eb86-72db-4605-82ed-5ce51d7bdd43\") " pod="openstack/dnsmasq-dns-5bddff6f79-74x55" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.772027 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1290eb86-72db-4605-82ed-5ce51d7bdd43-ovsdbserver-nb\") pod \"dnsmasq-dns-5bddff6f79-74x55\" (UID: \"1290eb86-72db-4605-82ed-5ce51d7bdd43\") " pod="openstack/dnsmasq-dns-5bddff6f79-74x55" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.772330 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1290eb86-72db-4605-82ed-5ce51d7bdd43-ovsdbserver-sb\") pod \"dnsmasq-dns-5bddff6f79-74x55\" (UID: \"1290eb86-72db-4605-82ed-5ce51d7bdd43\") " pod="openstack/dnsmasq-dns-5bddff6f79-74x55" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.772423 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1290eb86-72db-4605-82ed-5ce51d7bdd43-dns-svc\") pod \"dnsmasq-dns-5bddff6f79-74x55\" (UID: \"1290eb86-72db-4605-82ed-5ce51d7bdd43\") " pod="openstack/dnsmasq-dns-5bddff6f79-74x55" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.772460 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1290eb86-72db-4605-82ed-5ce51d7bdd43-config\") pod \"dnsmasq-dns-5bddff6f79-74x55\" (UID: \"1290eb86-72db-4605-82ed-5ce51d7bdd43\") " pod="openstack/dnsmasq-dns-5bddff6f79-74x55" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.789924 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpb6n\" (UniqueName: \"kubernetes.io/projected/1290eb86-72db-4605-82ed-5ce51d7bdd43-kube-api-access-fpb6n\") pod \"dnsmasq-dns-5bddff6f79-74x55\" (UID: \"1290eb86-72db-4605-82ed-5ce51d7bdd43\") " pod="openstack/dnsmasq-dns-5bddff6f79-74x55" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.874091 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6179370b-6aa4-431d-9770-8ccc580ce2ff-scripts\") pod \"keystone-bootstrap-4rlpk\" (UID: \"6179370b-6aa4-431d-9770-8ccc580ce2ff\") " pod="openstack/keystone-bootstrap-4rlpk" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.874172 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6179370b-6aa4-431d-9770-8ccc580ce2ff-config-data\") pod \"keystone-bootstrap-4rlpk\" (UID: \"6179370b-6aa4-431d-9770-8ccc580ce2ff\") " pod="openstack/keystone-bootstrap-4rlpk" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.874199 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r48fz\" (UniqueName: \"kubernetes.io/projected/6179370b-6aa4-431d-9770-8ccc580ce2ff-kube-api-access-r48fz\") pod \"keystone-bootstrap-4rlpk\" (UID: \"6179370b-6aa4-431d-9770-8ccc580ce2ff\") " pod="openstack/keystone-bootstrap-4rlpk" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.874219 5039 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6179370b-6aa4-431d-9770-8ccc580ce2ff-credential-keys\") pod \"keystone-bootstrap-4rlpk\" (UID: \"6179370b-6aa4-431d-9770-8ccc580ce2ff\") " pod="openstack/keystone-bootstrap-4rlpk" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.874261 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6179370b-6aa4-431d-9770-8ccc580ce2ff-combined-ca-bundle\") pod \"keystone-bootstrap-4rlpk\" (UID: \"6179370b-6aa4-431d-9770-8ccc580ce2ff\") " pod="openstack/keystone-bootstrap-4rlpk" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.874284 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6179370b-6aa4-431d-9770-8ccc580ce2ff-fernet-keys\") pod \"keystone-bootstrap-4rlpk\" (UID: \"6179370b-6aa4-431d-9770-8ccc580ce2ff\") " pod="openstack/keystone-bootstrap-4rlpk" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.878075 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6179370b-6aa4-431d-9770-8ccc580ce2ff-config-data\") pod \"keystone-bootstrap-4rlpk\" (UID: \"6179370b-6aa4-431d-9770-8ccc580ce2ff\") " pod="openstack/keystone-bootstrap-4rlpk" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.878511 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6179370b-6aa4-431d-9770-8ccc580ce2ff-credential-keys\") pod \"keystone-bootstrap-4rlpk\" (UID: \"6179370b-6aa4-431d-9770-8ccc580ce2ff\") " pod="openstack/keystone-bootstrap-4rlpk" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.880647 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6179370b-6aa4-431d-9770-8ccc580ce2ff-fernet-keys\") pod \"keystone-bootstrap-4rlpk\" (UID: \"6179370b-6aa4-431d-9770-8ccc580ce2ff\") " pod="openstack/keystone-bootstrap-4rlpk" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.882546 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6179370b-6aa4-431d-9770-8ccc580ce2ff-combined-ca-bundle\") pod \"keystone-bootstrap-4rlpk\" (UID: \"6179370b-6aa4-431d-9770-8ccc580ce2ff\") " pod="openstack/keystone-bootstrap-4rlpk" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.894145 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6179370b-6aa4-431d-9770-8ccc580ce2ff-scripts\") pod \"keystone-bootstrap-4rlpk\" (UID: \"6179370b-6aa4-431d-9770-8ccc580ce2ff\") " pod="openstack/keystone-bootstrap-4rlpk" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.898627 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r48fz\" (UniqueName: \"kubernetes.io/projected/6179370b-6aa4-431d-9770-8ccc580ce2ff-kube-api-access-r48fz\") pod \"keystone-bootstrap-4rlpk\" (UID: \"6179370b-6aa4-431d-9770-8ccc580ce2ff\") " pod="openstack/keystone-bootstrap-4rlpk" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.964781 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bddff6f79-74x55" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.977410 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-4rlpk" Jan 30 14:31:06 crc kubenswrapper[5039]: I0130 14:31:06.478912 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-4rlpk"] Jan 30 14:31:06 crc kubenswrapper[5039]: W0130 14:31:06.484255 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6179370b_6aa4_431d_9770_8ccc580ce2ff.slice/crio-7361be0ee8a21183aa69f5176468a0d84f7b88112db42cf2c686ac6829ac3ff3 WatchSource:0}: Error finding container 7361be0ee8a21183aa69f5176468a0d84f7b88112db42cf2c686ac6829ac3ff3: Status 404 returned error can't find the container with id 7361be0ee8a21183aa69f5176468a0d84f7b88112db42cf2c686ac6829ac3ff3 Jan 30 14:31:06 crc kubenswrapper[5039]: I0130 14:31:06.579467 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bddff6f79-74x55"] Jan 30 14:31:06 crc kubenswrapper[5039]: W0130 14:31:06.588538 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1290eb86_72db_4605_82ed_5ce51d7bdd43.slice/crio-dfcdca5c53490bcdd0625159ea9428d29bb92ef9b23c54dc75dc33a5a85502f5 WatchSource:0}: Error finding container dfcdca5c53490bcdd0625159ea9428d29bb92ef9b23c54dc75dc33a5a85502f5: Status 404 returned error can't find the container with id dfcdca5c53490bcdd0625159ea9428d29bb92ef9b23c54dc75dc33a5a85502f5 Jan 30 14:31:07 crc kubenswrapper[5039]: I0130 14:31:07.397525 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4rlpk" event={"ID":"6179370b-6aa4-431d-9770-8ccc580ce2ff","Type":"ContainerStarted","Data":"8e7fba536a328a45f55b8ae822641c635aa4411c762219a26ab38d44700ef047"} Jan 30 14:31:07 crc kubenswrapper[5039]: I0130 14:31:07.397865 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4rlpk" event={"ID":"6179370b-6aa4-431d-9770-8ccc580ce2ff","Type":"ContainerStarted","Data":"7361be0ee8a21183aa69f5176468a0d84f7b88112db42cf2c686ac6829ac3ff3"} Jan 30 14:31:07 crc kubenswrapper[5039]: I0130 14:31:07.401996 5039 generic.go:334] "Generic (PLEG): container finished" podID="1290eb86-72db-4605-82ed-5ce51d7bdd43" containerID="c5dcab70897504fef82b13752b200ded69834d710632c81c994154de04442d0d" exitCode=0 Jan 30 14:31:07 crc kubenswrapper[5039]: I0130 14:31:07.402055 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bddff6f79-74x55" event={"ID":"1290eb86-72db-4605-82ed-5ce51d7bdd43","Type":"ContainerDied","Data":"c5dcab70897504fef82b13752b200ded69834d710632c81c994154de04442d0d"} Jan 30 14:31:07 crc kubenswrapper[5039]: I0130 14:31:07.402077 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bddff6f79-74x55" event={"ID":"1290eb86-72db-4605-82ed-5ce51d7bdd43","Type":"ContainerStarted","Data":"dfcdca5c53490bcdd0625159ea9428d29bb92ef9b23c54dc75dc33a5a85502f5"} Jan 30 14:31:07 crc kubenswrapper[5039]: I0130 14:31:07.437140 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-4rlpk" podStartSLOduration=2.437119382 podStartE2EDuration="2.437119382s" podCreationTimestamp="2026-01-30 14:31:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-01-30 14:31:07.424110579 +0000 UTC m=+5232.084791816" watchObservedRunningTime="2026-01-30 14:31:07.437119382 +0000 UTC m=+5232.097800629" Jan 30 14:31:07 crc kubenswrapper[5039]: I0130 14:31:07.742887 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:31:07 crc kubenswrapper[5039]: I0130 14:31:07.743274 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:31:08 crc kubenswrapper[5039]: I0130 14:31:08.410601 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bddff6f79-74x55" event={"ID":"1290eb86-72db-4605-82ed-5ce51d7bdd43","Type":"ContainerStarted","Data":"3307255a2a999f1b51aeb2cf93352cf9a0845038d7ca8b3886a9388e1ff86b58"} Jan 30 14:31:08 crc kubenswrapper[5039]: I0130 14:31:08.439487 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bddff6f79-74x55" podStartSLOduration=3.439468919 podStartE2EDuration="3.439468919s" podCreationTimestamp="2026-01-30 14:31:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:31:08.43325892 +0000 UTC m=+5233.093940147" watchObservedRunningTime="2026-01-30 14:31:08.439468919 +0000 UTC m=+5233.100150146" Jan 30 14:31:09 crc kubenswrapper[5039]: I0130 14:31:09.419308 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bddff6f79-74x55" Jan 30 14:31:10 crc kubenswrapper[5039]: I0130 14:31:10.013988 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bw4vw"] Jan 30 14:31:10 crc kubenswrapper[5039]: I0130 14:31:10.017485 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bw4vw" Jan 30 14:31:10 crc kubenswrapper[5039]: I0130 14:31:10.031676 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bw4vw"] Jan 30 14:31:10 crc kubenswrapper[5039]: I0130 14:31:10.043577 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a32b9f3-d031-40f2-926f-d69de45d6d04-catalog-content\") pod \"redhat-operators-bw4vw\" (UID: \"2a32b9f3-d031-40f2-926f-d69de45d6d04\") " pod="openshift-marketplace/redhat-operators-bw4vw" Jan 30 14:31:10 crc kubenswrapper[5039]: I0130 14:31:10.043767 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9x47\" (UniqueName: \"kubernetes.io/projected/2a32b9f3-d031-40f2-926f-d69de45d6d04-kube-api-access-b9x47\") pod \"redhat-operators-bw4vw\" (UID: \"2a32b9f3-d031-40f2-926f-d69de45d6d04\") " pod="openshift-marketplace/redhat-operators-bw4vw" Jan 30 14:31:10 crc kubenswrapper[5039]: I0130 14:31:10.043805 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a32b9f3-d031-40f2-926f-d69de45d6d04-utilities\") pod \"redhat-operators-bw4vw\" (UID: \"2a32b9f3-d031-40f2-926f-d69de45d6d04\") " pod="openshift-marketplace/redhat-operators-bw4vw" Jan 30 14:31:10 crc kubenswrapper[5039]: I0130 14:31:10.144915 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a32b9f3-d031-40f2-926f-d69de45d6d04-catalog-content\") pod \"redhat-operators-bw4vw\" (UID: \"2a32b9f3-d031-40f2-926f-d69de45d6d04\") " pod="openshift-marketplace/redhat-operators-bw4vw" Jan 30 14:31:10 crc kubenswrapper[5039]: I0130 14:31:10.145074 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9x47\" (UniqueName: \"kubernetes.io/projected/2a32b9f3-d031-40f2-926f-d69de45d6d04-kube-api-access-b9x47\") pod \"redhat-operators-bw4vw\" (UID: \"2a32b9f3-d031-40f2-926f-d69de45d6d04\") " pod="openshift-marketplace/redhat-operators-bw4vw" Jan 30 14:31:10 crc kubenswrapper[5039]: I0130 14:31:10.145101 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a32b9f3-d031-40f2-926f-d69de45d6d04-utilities\") pod \"redhat-operators-bw4vw\" (UID: \"2a32b9f3-d031-40f2-926f-d69de45d6d04\") " pod="openshift-marketplace/redhat-operators-bw4vw" Jan 30 14:31:10 crc kubenswrapper[5039]: I0130 14:31:10.145939 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a32b9f3-d031-40f2-926f-d69de45d6d04-utilities\") pod \"redhat-operators-bw4vw\" (UID: \"2a32b9f3-d031-40f2-926f-d69de45d6d04\") " pod="openshift-marketplace/redhat-operators-bw4vw" Jan 30 14:31:10 crc kubenswrapper[5039]: I0130 14:31:10.145928 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a32b9f3-d031-40f2-926f-d69de45d6d04-catalog-content\") pod \"redhat-operators-bw4vw\" (UID: \"2a32b9f3-d031-40f2-926f-d69de45d6d04\") " pod="openshift-marketplace/redhat-operators-bw4vw" Jan 30 14:31:10 crc kubenswrapper[5039]: I0130 14:31:10.176843 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-b9x47\" (UniqueName: \"kubernetes.io/projected/2a32b9f3-d031-40f2-926f-d69de45d6d04-kube-api-access-b9x47\") pod \"redhat-operators-bw4vw\" (UID: \"2a32b9f3-d031-40f2-926f-d69de45d6d04\") " pod="openshift-marketplace/redhat-operators-bw4vw" Jan 30 14:31:10 crc kubenswrapper[5039]: I0130 14:31:10.341077 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bw4vw" Jan 30 14:31:10 crc kubenswrapper[5039]: I0130 14:31:10.440900 5039 generic.go:334] "Generic (PLEG): container finished" podID="6179370b-6aa4-431d-9770-8ccc580ce2ff" containerID="8e7fba536a328a45f55b8ae822641c635aa4411c762219a26ab38d44700ef047" exitCode=0 Jan 30 14:31:10 crc kubenswrapper[5039]: I0130 14:31:10.441073 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4rlpk" event={"ID":"6179370b-6aa4-431d-9770-8ccc580ce2ff","Type":"ContainerDied","Data":"8e7fba536a328a45f55b8ae822641c635aa4411c762219a26ab38d44700ef047"} Jan 30 14:31:10 crc kubenswrapper[5039]: W0130 14:31:10.801569 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a32b9f3_d031_40f2_926f_d69de45d6d04.slice/crio-93cea11521563f566f9a6a308df5f0161f35d5486b7a86f723059a499b29e77f WatchSource:0}: Error finding container 93cea11521563f566f9a6a308df5f0161f35d5486b7a86f723059a499b29e77f: Status 404 returned error can't find the container with id 93cea11521563f566f9a6a308df5f0161f35d5486b7a86f723059a499b29e77f Jan 30 14:31:10 crc kubenswrapper[5039]: I0130 14:31:10.803445 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bw4vw"] Jan 30 14:31:11 crc kubenswrapper[5039]: I0130 14:31:11.467698 5039 generic.go:334] "Generic (PLEG): container finished" podID="2a32b9f3-d031-40f2-926f-d69de45d6d04" containerID="efd6367d3d556c8a298e9921f32d9076db3525371dfab06965361b4082917372" exitCode=0 Jan 30 14:31:11 crc kubenswrapper[5039]: I0130 14:31:11.469137 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bw4vw" event={"ID":"2a32b9f3-d031-40f2-926f-d69de45d6d04","Type":"ContainerDied","Data":"efd6367d3d556c8a298e9921f32d9076db3525371dfab06965361b4082917372"} Jan 30 14:31:11 crc kubenswrapper[5039]: I0130 14:31:11.469166 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bw4vw" event={"ID":"2a32b9f3-d031-40f2-926f-d69de45d6d04","Type":"ContainerStarted","Data":"93cea11521563f566f9a6a308df5f0161f35d5486b7a86f723059a499b29e77f"} Jan 30 14:31:11 crc kubenswrapper[5039]: I0130 14:31:11.470977 5039 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 14:31:11 crc kubenswrapper[5039]: I0130 14:31:11.791224 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-4rlpk" Jan 30 14:31:11 crc kubenswrapper[5039]: I0130 14:31:11.980197 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6179370b-6aa4-431d-9770-8ccc580ce2ff-combined-ca-bundle\") pod \"6179370b-6aa4-431d-9770-8ccc580ce2ff\" (UID: \"6179370b-6aa4-431d-9770-8ccc580ce2ff\") " Jan 30 14:31:11 crc kubenswrapper[5039]: I0130 14:31:11.980263 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6179370b-6aa4-431d-9770-8ccc580ce2ff-config-data\") pod \"6179370b-6aa4-431d-9770-8ccc580ce2ff\" (UID: \"6179370b-6aa4-431d-9770-8ccc580ce2ff\") " Jan 30 14:31:11 crc kubenswrapper[5039]: I0130 14:31:11.980701 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r48fz\" (UniqueName: \"kubernetes.io/projected/6179370b-6aa4-431d-9770-8ccc580ce2ff-kube-api-access-r48fz\") pod \"6179370b-6aa4-431d-9770-8ccc580ce2ff\" (UID: \"6179370b-6aa4-431d-9770-8ccc580ce2ff\") " Jan 30 14:31:11 crc kubenswrapper[5039]: I0130 14:31:11.980759 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6179370b-6aa4-431d-9770-8ccc580ce2ff-credential-keys\") pod \"6179370b-6aa4-431d-9770-8ccc580ce2ff\" (UID: \"6179370b-6aa4-431d-9770-8ccc580ce2ff\") " Jan 30 14:31:11 crc kubenswrapper[5039]: I0130 14:31:11.980785 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6179370b-6aa4-431d-9770-8ccc580ce2ff-fernet-keys\") pod \"6179370b-6aa4-431d-9770-8ccc580ce2ff\" (UID: \"6179370b-6aa4-431d-9770-8ccc580ce2ff\") " Jan 30 14:31:11 crc kubenswrapper[5039]: I0130 14:31:11.980833 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6179370b-6aa4-431d-9770-8ccc580ce2ff-scripts\") pod \"6179370b-6aa4-431d-9770-8ccc580ce2ff\" (UID: \"6179370b-6aa4-431d-9770-8ccc580ce2ff\") " Jan 30 14:31:11 crc kubenswrapper[5039]: I0130 14:31:11.985964 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6179370b-6aa4-431d-9770-8ccc580ce2ff-scripts" (OuterVolumeSpecName: "scripts") pod "6179370b-6aa4-431d-9770-8ccc580ce2ff" (UID: "6179370b-6aa4-431d-9770-8ccc580ce2ff"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:31:11 crc kubenswrapper[5039]: I0130 14:31:11.986032 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6179370b-6aa4-431d-9770-8ccc580ce2ff-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "6179370b-6aa4-431d-9770-8ccc580ce2ff" (UID: "6179370b-6aa4-431d-9770-8ccc580ce2ff"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:31:11 crc kubenswrapper[5039]: I0130 14:31:11.986071 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6179370b-6aa4-431d-9770-8ccc580ce2ff-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "6179370b-6aa4-431d-9770-8ccc580ce2ff" (UID: "6179370b-6aa4-431d-9770-8ccc580ce2ff"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:31:11 crc kubenswrapper[5039]: I0130 14:31:11.990153 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6179370b-6aa4-431d-9770-8ccc580ce2ff-kube-api-access-r48fz" (OuterVolumeSpecName: "kube-api-access-r48fz") pod "6179370b-6aa4-431d-9770-8ccc580ce2ff" (UID: "6179370b-6aa4-431d-9770-8ccc580ce2ff"). InnerVolumeSpecName "kube-api-access-r48fz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.003450 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6179370b-6aa4-431d-9770-8ccc580ce2ff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6179370b-6aa4-431d-9770-8ccc580ce2ff" (UID: "6179370b-6aa4-431d-9770-8ccc580ce2ff"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.004919 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6179370b-6aa4-431d-9770-8ccc580ce2ff-config-data" (OuterVolumeSpecName: "config-data") pod "6179370b-6aa4-431d-9770-8ccc580ce2ff" (UID: "6179370b-6aa4-431d-9770-8ccc580ce2ff"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.083486 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6179370b-6aa4-431d-9770-8ccc580ce2ff-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.083522 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r48fz\" (UniqueName: \"kubernetes.io/projected/6179370b-6aa4-431d-9770-8ccc580ce2ff-kube-api-access-r48fz\") on node \"crc\" DevicePath \"\"" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.083532 5039 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6179370b-6aa4-431d-9770-8ccc580ce2ff-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.083541 5039 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6179370b-6aa4-431d-9770-8ccc580ce2ff-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.083549 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6179370b-6aa4-431d-9770-8ccc580ce2ff-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.083556 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6179370b-6aa4-431d-9770-8ccc580ce2ff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.480999 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-4rlpk" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.481046 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4rlpk" event={"ID":"6179370b-6aa4-431d-9770-8ccc580ce2ff","Type":"ContainerDied","Data":"7361be0ee8a21183aa69f5176468a0d84f7b88112db42cf2c686ac6829ac3ff3"} Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.482321 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7361be0ee8a21183aa69f5176468a0d84f7b88112db42cf2c686ac6829ac3ff3" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.490255 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bw4vw" event={"ID":"2a32b9f3-d031-40f2-926f-d69de45d6d04","Type":"ContainerStarted","Data":"bdfd5f6995c9a19b9f846b4cff8946389972965d797908af2126a8ea9b17d4b9"} Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.546963 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-4rlpk"] Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.554542 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-4rlpk"] Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.643405 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-rbkmw"] Jan 30 14:31:12 crc kubenswrapper[5039]: E0130 14:31:12.643723 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6179370b-6aa4-431d-9770-8ccc580ce2ff" containerName="keystone-bootstrap" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.643741 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="6179370b-6aa4-431d-9770-8ccc580ce2ff" containerName="keystone-bootstrap" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.643897 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="6179370b-6aa4-431d-9770-8ccc580ce2ff" containerName="keystone-bootstrap" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.644444 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-rbkmw" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.651440 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.651716 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-w6fcf" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.651731 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.651875 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.656447 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.660275 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-rbkmw"] Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.694482 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7902ea8d-9313-4ce7-8813-9b758308b6e5-fernet-keys\") pod \"keystone-bootstrap-rbkmw\" (UID: \"7902ea8d-9313-4ce7-8813-9b758308b6e5\") " pod="openstack/keystone-bootstrap-rbkmw" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.694622 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7902ea8d-9313-4ce7-8813-9b758308b6e5-config-data\") pod \"keystone-bootstrap-rbkmw\" (UID: \"7902ea8d-9313-4ce7-8813-9b758308b6e5\") " pod="openstack/keystone-bootstrap-rbkmw" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.694663 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4pwt\" (UniqueName: \"kubernetes.io/projected/7902ea8d-9313-4ce7-8813-9b758308b6e5-kube-api-access-b4pwt\") pod \"keystone-bootstrap-rbkmw\" (UID: \"7902ea8d-9313-4ce7-8813-9b758308b6e5\") " pod="openstack/keystone-bootstrap-rbkmw" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.694734 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7902ea8d-9313-4ce7-8813-9b758308b6e5-scripts\") pod \"keystone-bootstrap-rbkmw\" (UID: \"7902ea8d-9313-4ce7-8813-9b758308b6e5\") " pod="openstack/keystone-bootstrap-rbkmw" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.694808 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7902ea8d-9313-4ce7-8813-9b758308b6e5-credential-keys\") pod \"keystone-bootstrap-rbkmw\" (UID: \"7902ea8d-9313-4ce7-8813-9b758308b6e5\") " pod="openstack/keystone-bootstrap-rbkmw" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.694906 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7902ea8d-9313-4ce7-8813-9b758308b6e5-combined-ca-bundle\") pod \"keystone-bootstrap-rbkmw\" (UID: \"7902ea8d-9313-4ce7-8813-9b758308b6e5\") " pod="openstack/keystone-bootstrap-rbkmw" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.796788 5039 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7902ea8d-9313-4ce7-8813-9b758308b6e5-fernet-keys\") pod \"keystone-bootstrap-rbkmw\" (UID: \"7902ea8d-9313-4ce7-8813-9b758308b6e5\") " pod="openstack/keystone-bootstrap-rbkmw" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.797178 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7902ea8d-9313-4ce7-8813-9b758308b6e5-config-data\") pod \"keystone-bootstrap-rbkmw\" (UID: \"7902ea8d-9313-4ce7-8813-9b758308b6e5\") " pod="openstack/keystone-bootstrap-rbkmw" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.797523 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4pwt\" (UniqueName: \"kubernetes.io/projected/7902ea8d-9313-4ce7-8813-9b758308b6e5-kube-api-access-b4pwt\") pod \"keystone-bootstrap-rbkmw\" (UID: \"7902ea8d-9313-4ce7-8813-9b758308b6e5\") " pod="openstack/keystone-bootstrap-rbkmw" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.797765 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7902ea8d-9313-4ce7-8813-9b758308b6e5-scripts\") pod \"keystone-bootstrap-rbkmw\" (UID: \"7902ea8d-9313-4ce7-8813-9b758308b6e5\") " pod="openstack/keystone-bootstrap-rbkmw" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.797977 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7902ea8d-9313-4ce7-8813-9b758308b6e5-credential-keys\") pod \"keystone-bootstrap-rbkmw\" (UID: \"7902ea8d-9313-4ce7-8813-9b758308b6e5\") " pod="openstack/keystone-bootstrap-rbkmw" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.798208 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7902ea8d-9313-4ce7-8813-9b758308b6e5-combined-ca-bundle\") pod \"keystone-bootstrap-rbkmw\" (UID: \"7902ea8d-9313-4ce7-8813-9b758308b6e5\") " pod="openstack/keystone-bootstrap-rbkmw" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.802712 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7902ea8d-9313-4ce7-8813-9b758308b6e5-credential-keys\") pod \"keystone-bootstrap-rbkmw\" (UID: \"7902ea8d-9313-4ce7-8813-9b758308b6e5\") " pod="openstack/keystone-bootstrap-rbkmw" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.802784 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7902ea8d-9313-4ce7-8813-9b758308b6e5-scripts\") pod \"keystone-bootstrap-rbkmw\" (UID: \"7902ea8d-9313-4ce7-8813-9b758308b6e5\") " pod="openstack/keystone-bootstrap-rbkmw" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.803347 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7902ea8d-9313-4ce7-8813-9b758308b6e5-fernet-keys\") pod \"keystone-bootstrap-rbkmw\" (UID: \"7902ea8d-9313-4ce7-8813-9b758308b6e5\") " pod="openstack/keystone-bootstrap-rbkmw" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.803480 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7902ea8d-9313-4ce7-8813-9b758308b6e5-config-data\") pod \"keystone-bootstrap-rbkmw\" (UID: \"7902ea8d-9313-4ce7-8813-9b758308b6e5\") " 
pod="openstack/keystone-bootstrap-rbkmw" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.816948 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7902ea8d-9313-4ce7-8813-9b758308b6e5-combined-ca-bundle\") pod \"keystone-bootstrap-rbkmw\" (UID: \"7902ea8d-9313-4ce7-8813-9b758308b6e5\") " pod="openstack/keystone-bootstrap-rbkmw" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.828844 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4pwt\" (UniqueName: \"kubernetes.io/projected/7902ea8d-9313-4ce7-8813-9b758308b6e5-kube-api-access-b4pwt\") pod \"keystone-bootstrap-rbkmw\" (UID: \"7902ea8d-9313-4ce7-8813-9b758308b6e5\") " pod="openstack/keystone-bootstrap-rbkmw" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.963864 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-rbkmw" Jan 30 14:31:13 crc kubenswrapper[5039]: I0130 14:31:13.379630 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-rbkmw"] Jan 30 14:31:13 crc kubenswrapper[5039]: W0130 14:31:13.390742 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7902ea8d_9313_4ce7_8813_9b758308b6e5.slice/crio-e3337a9577549e173d9a3dcb6a0aef88dae94f1aff7ec364a0aaddcd20813d89 WatchSource:0}: Error finding container e3337a9577549e173d9a3dcb6a0aef88dae94f1aff7ec364a0aaddcd20813d89: Status 404 returned error can't find the container with id e3337a9577549e173d9a3dcb6a0aef88dae94f1aff7ec364a0aaddcd20813d89 Jan 30 14:31:13 crc kubenswrapper[5039]: I0130 14:31:13.503798 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rbkmw" event={"ID":"7902ea8d-9313-4ce7-8813-9b758308b6e5","Type":"ContainerStarted","Data":"e3337a9577549e173d9a3dcb6a0aef88dae94f1aff7ec364a0aaddcd20813d89"} Jan 30 14:31:13 crc kubenswrapper[5039]: I0130 14:31:13.510911 5039 generic.go:334] "Generic (PLEG): container finished" podID="2a32b9f3-d031-40f2-926f-d69de45d6d04" containerID="bdfd5f6995c9a19b9f846b4cff8946389972965d797908af2126a8ea9b17d4b9" exitCode=0 Jan 30 14:31:13 crc kubenswrapper[5039]: I0130 14:31:13.510969 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bw4vw" event={"ID":"2a32b9f3-d031-40f2-926f-d69de45d6d04","Type":"ContainerDied","Data":"bdfd5f6995c9a19b9f846b4cff8946389972965d797908af2126a8ea9b17d4b9"} Jan 30 14:31:14 crc kubenswrapper[5039]: I0130 14:31:14.104552 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6179370b-6aa4-431d-9770-8ccc580ce2ff" path="/var/lib/kubelet/pods/6179370b-6aa4-431d-9770-8ccc580ce2ff/volumes" Jan 30 14:31:14 crc kubenswrapper[5039]: I0130 14:31:14.522153 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bw4vw" event={"ID":"2a32b9f3-d031-40f2-926f-d69de45d6d04","Type":"ContainerStarted","Data":"98e08e377270f2d4ee4391920a91486005024c756900cdb183dc56960012389c"} Jan 30 14:31:14 crc kubenswrapper[5039]: I0130 14:31:14.524522 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rbkmw" event={"ID":"7902ea8d-9313-4ce7-8813-9b758308b6e5","Type":"ContainerStarted","Data":"c5a6f003da5b64bc202ed5fc2f77d8577435c82d698e50cf4d55831de9d7d517"} Jan 30 14:31:14 crc kubenswrapper[5039]: I0130 14:31:14.541224 5039 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bw4vw" podStartSLOduration=2.978027637 podStartE2EDuration="5.541207439s" podCreationTimestamp="2026-01-30 14:31:09 +0000 UTC" firstStartedPulling="2026-01-30 14:31:11.470729297 +0000 UTC m=+5236.131410524" lastFinishedPulling="2026-01-30 14:31:14.033909099 +0000 UTC m=+5238.694590326" observedRunningTime="2026-01-30 14:31:14.54014578 +0000 UTC m=+5239.200827017" watchObservedRunningTime="2026-01-30 14:31:14.541207439 +0000 UTC m=+5239.201888666" Jan 30 14:31:14 crc kubenswrapper[5039]: I0130 14:31:14.567035 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-rbkmw" podStartSLOduration=2.566999889 podStartE2EDuration="2.566999889s" podCreationTimestamp="2026-01-30 14:31:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:31:14.559649509 +0000 UTC m=+5239.220330756" watchObservedRunningTime="2026-01-30 14:31:14.566999889 +0000 UTC m=+5239.227681136" Jan 30 14:31:15 crc kubenswrapper[5039]: I0130 14:31:15.966194 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5bddff6f79-74x55" Jan 30 14:31:16 crc kubenswrapper[5039]: I0130 14:31:16.022239 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79d45df9fc-dz5zf"] Jan 30 14:31:16 crc kubenswrapper[5039]: I0130 14:31:16.022502 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79d45df9fc-dz5zf" podUID="16c7b5ae-068f-4c5b-a918-b89b62def454" containerName="dnsmasq-dns" containerID="cri-o://5807bf779b3fc5b31899937700f3cee444f3c6ddd58f551d06326e6afd6a8626" gracePeriod=10 Jan 30 14:31:16 crc kubenswrapper[5039]: I0130 14:31:16.509755 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79d45df9fc-dz5zf" Jan 30 14:31:16 crc kubenswrapper[5039]: I0130 14:31:16.541036 5039 generic.go:334] "Generic (PLEG): container finished" podID="7902ea8d-9313-4ce7-8813-9b758308b6e5" containerID="c5a6f003da5b64bc202ed5fc2f77d8577435c82d698e50cf4d55831de9d7d517" exitCode=0 Jan 30 14:31:16 crc kubenswrapper[5039]: I0130 14:31:16.541087 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rbkmw" event={"ID":"7902ea8d-9313-4ce7-8813-9b758308b6e5","Type":"ContainerDied","Data":"c5a6f003da5b64bc202ed5fc2f77d8577435c82d698e50cf4d55831de9d7d517"} Jan 30 14:31:16 crc kubenswrapper[5039]: I0130 14:31:16.543772 5039 generic.go:334] "Generic (PLEG): container finished" podID="16c7b5ae-068f-4c5b-a918-b89b62def454" containerID="5807bf779b3fc5b31899937700f3cee444f3c6ddd58f551d06326e6afd6a8626" exitCode=0 Jan 30 14:31:16 crc kubenswrapper[5039]: I0130 14:31:16.543802 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79d45df9fc-dz5zf" event={"ID":"16c7b5ae-068f-4c5b-a918-b89b62def454","Type":"ContainerDied","Data":"5807bf779b3fc5b31899937700f3cee444f3c6ddd58f551d06326e6afd6a8626"} Jan 30 14:31:16 crc kubenswrapper[5039]: I0130 14:31:16.543818 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79d45df9fc-dz5zf" event={"ID":"16c7b5ae-068f-4c5b-a918-b89b62def454","Type":"ContainerDied","Data":"90d5f8a80da114a7275c833312588d237a1d89b9c9a1fb8f99fe15cccf89412b"} Jan 30 14:31:16 crc kubenswrapper[5039]: I0130 14:31:16.543833 5039 scope.go:117] "RemoveContainer" containerID="5807bf779b3fc5b31899937700f3cee444f3c6ddd58f551d06326e6afd6a8626" Jan 30 14:31:16 crc kubenswrapper[5039]: I0130 14:31:16.543925 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79d45df9fc-dz5zf" Jan 30 14:31:16 crc kubenswrapper[5039]: I0130 14:31:16.566929 5039 scope.go:117] "RemoveContainer" containerID="d38797f1d307cc093d61172b2adda7044ead616969318d59da9fcd27805c535b" Jan 30 14:31:16 crc kubenswrapper[5039]: I0130 14:31:16.610899 5039 scope.go:117] "RemoveContainer" containerID="5807bf779b3fc5b31899937700f3cee444f3c6ddd58f551d06326e6afd6a8626" Jan 30 14:31:16 crc kubenswrapper[5039]: E0130 14:31:16.611699 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5807bf779b3fc5b31899937700f3cee444f3c6ddd58f551d06326e6afd6a8626\": container with ID starting with 5807bf779b3fc5b31899937700f3cee444f3c6ddd58f551d06326e6afd6a8626 not found: ID does not exist" containerID="5807bf779b3fc5b31899937700f3cee444f3c6ddd58f551d06326e6afd6a8626" Jan 30 14:31:16 crc kubenswrapper[5039]: I0130 14:31:16.611734 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5807bf779b3fc5b31899937700f3cee444f3c6ddd58f551d06326e6afd6a8626"} err="failed to get container status \"5807bf779b3fc5b31899937700f3cee444f3c6ddd58f551d06326e6afd6a8626\": rpc error: code = NotFound desc = could not find container \"5807bf779b3fc5b31899937700f3cee444f3c6ddd58f551d06326e6afd6a8626\": container with ID starting with 5807bf779b3fc5b31899937700f3cee444f3c6ddd58f551d06326e6afd6a8626 not found: ID does not exist" Jan 30 14:31:16 crc kubenswrapper[5039]: I0130 14:31:16.611761 5039 scope.go:117] "RemoveContainer" containerID="d38797f1d307cc093d61172b2adda7044ead616969318d59da9fcd27805c535b" Jan 30 14:31:16 crc kubenswrapper[5039]: E0130 14:31:16.612431 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d38797f1d307cc093d61172b2adda7044ead616969318d59da9fcd27805c535b\": container with ID starting with d38797f1d307cc093d61172b2adda7044ead616969318d59da9fcd27805c535b not found: ID does not exist" containerID="d38797f1d307cc093d61172b2adda7044ead616969318d59da9fcd27805c535b" Jan 30 14:31:16 crc kubenswrapper[5039]: I0130 14:31:16.612477 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d38797f1d307cc093d61172b2adda7044ead616969318d59da9fcd27805c535b"} err="failed to get container status \"d38797f1d307cc093d61172b2adda7044ead616969318d59da9fcd27805c535b\": rpc error: code = NotFound desc = could not find container \"d38797f1d307cc093d61172b2adda7044ead616969318d59da9fcd27805c535b\": container with ID starting with d38797f1d307cc093d61172b2adda7044ead616969318d59da9fcd27805c535b not found: ID does not exist" Jan 30 14:31:16 crc kubenswrapper[5039]: I0130 14:31:16.658170 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16c7b5ae-068f-4c5b-a918-b89b62def454-config\") pod \"16c7b5ae-068f-4c5b-a918-b89b62def454\" (UID: \"16c7b5ae-068f-4c5b-a918-b89b62def454\") " Jan 30 14:31:16 crc kubenswrapper[5039]: I0130 14:31:16.658342 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mllrf\" (UniqueName: \"kubernetes.io/projected/16c7b5ae-068f-4c5b-a918-b89b62def454-kube-api-access-mllrf\") pod \"16c7b5ae-068f-4c5b-a918-b89b62def454\" (UID: \"16c7b5ae-068f-4c5b-a918-b89b62def454\") " Jan 30 14:31:16 crc kubenswrapper[5039]: I0130 14:31:16.658385 5039 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/16c7b5ae-068f-4c5b-a918-b89b62def454-ovsdbserver-sb\") pod \"16c7b5ae-068f-4c5b-a918-b89b62def454\" (UID: \"16c7b5ae-068f-4c5b-a918-b89b62def454\") " Jan 30 14:31:16 crc kubenswrapper[5039]: I0130 14:31:16.658445 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16c7b5ae-068f-4c5b-a918-b89b62def454-dns-svc\") pod \"16c7b5ae-068f-4c5b-a918-b89b62def454\" (UID: \"16c7b5ae-068f-4c5b-a918-b89b62def454\") " Jan 30 14:31:16 crc kubenswrapper[5039]: I0130 14:31:16.658471 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/16c7b5ae-068f-4c5b-a918-b89b62def454-ovsdbserver-nb\") pod \"16c7b5ae-068f-4c5b-a918-b89b62def454\" (UID: \"16c7b5ae-068f-4c5b-a918-b89b62def454\") " Jan 30 14:31:16 crc kubenswrapper[5039]: I0130 14:31:16.668909 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16c7b5ae-068f-4c5b-a918-b89b62def454-kube-api-access-mllrf" (OuterVolumeSpecName: "kube-api-access-mllrf") pod "16c7b5ae-068f-4c5b-a918-b89b62def454" (UID: "16c7b5ae-068f-4c5b-a918-b89b62def454"). InnerVolumeSpecName "kube-api-access-mllrf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:31:16 crc kubenswrapper[5039]: I0130 14:31:16.695666 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16c7b5ae-068f-4c5b-a918-b89b62def454-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "16c7b5ae-068f-4c5b-a918-b89b62def454" (UID: "16c7b5ae-068f-4c5b-a918-b89b62def454"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:31:16 crc kubenswrapper[5039]: I0130 14:31:16.700841 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16c7b5ae-068f-4c5b-a918-b89b62def454-config" (OuterVolumeSpecName: "config") pod "16c7b5ae-068f-4c5b-a918-b89b62def454" (UID: "16c7b5ae-068f-4c5b-a918-b89b62def454"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:31:16 crc kubenswrapper[5039]: E0130 14:31:16.701279 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/16c7b5ae-068f-4c5b-a918-b89b62def454-dns-svc podName:16c7b5ae-068f-4c5b-a918-b89b62def454 nodeName:}" failed. No retries permitted until 2026-01-30 14:31:17.201253596 +0000 UTC m=+5241.861934833 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "dns-svc" (UniqueName: "kubernetes.io/configmap/16c7b5ae-068f-4c5b-a918-b89b62def454-dns-svc") pod "16c7b5ae-068f-4c5b-a918-b89b62def454" (UID: "16c7b5ae-068f-4c5b-a918-b89b62def454") : error deleting /var/lib/kubelet/pods/16c7b5ae-068f-4c5b-a918-b89b62def454/volume-subpaths: remove /var/lib/kubelet/pods/16c7b5ae-068f-4c5b-a918-b89b62def454/volume-subpaths: no such file or directory Jan 30 14:31:16 crc kubenswrapper[5039]: I0130 14:31:16.701432 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16c7b5ae-068f-4c5b-a918-b89b62def454-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "16c7b5ae-068f-4c5b-a918-b89b62def454" (UID: "16c7b5ae-068f-4c5b-a918-b89b62def454"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:31:16 crc kubenswrapper[5039]: I0130 14:31:16.760619 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mllrf\" (UniqueName: \"kubernetes.io/projected/16c7b5ae-068f-4c5b-a918-b89b62def454-kube-api-access-mllrf\") on node \"crc\" DevicePath \"\"" Jan 30 14:31:16 crc kubenswrapper[5039]: I0130 14:31:16.760656 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/16c7b5ae-068f-4c5b-a918-b89b62def454-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 14:31:16 crc kubenswrapper[5039]: I0130 14:31:16.760668 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/16c7b5ae-068f-4c5b-a918-b89b62def454-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 14:31:16 crc kubenswrapper[5039]: I0130 14:31:16.760679 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16c7b5ae-068f-4c5b-a918-b89b62def454-config\") on node \"crc\" DevicePath \"\"" Jan 30 14:31:17 crc kubenswrapper[5039]: I0130 14:31:17.268086 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16c7b5ae-068f-4c5b-a918-b89b62def454-dns-svc\") pod \"16c7b5ae-068f-4c5b-a918-b89b62def454\" (UID: \"16c7b5ae-068f-4c5b-a918-b89b62def454\") " Jan 30 14:31:17 crc kubenswrapper[5039]: I0130 14:31:17.269577 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16c7b5ae-068f-4c5b-a918-b89b62def454-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "16c7b5ae-068f-4c5b-a918-b89b62def454" (UID: "16c7b5ae-068f-4c5b-a918-b89b62def454"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:31:17 crc kubenswrapper[5039]: I0130 14:31:17.371142 5039 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16c7b5ae-068f-4c5b-a918-b89b62def454-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 14:31:17 crc kubenswrapper[5039]: I0130 14:31:17.474384 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79d45df9fc-dz5zf"] Jan 30 14:31:17 crc kubenswrapper[5039]: I0130 14:31:17.480499 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79d45df9fc-dz5zf"] Jan 30 14:31:17 crc kubenswrapper[5039]: I0130 14:31:17.828312 5039 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 14:31:17 crc kubenswrapper[5039]: I0130 14:31:17.828312 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-rbkmw"
Jan 30 14:31:17 crc kubenswrapper[5039]: I0130 14:31:17.980880 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7902ea8d-9313-4ce7-8813-9b758308b6e5-config-data\") pod \"7902ea8d-9313-4ce7-8813-9b758308b6e5\" (UID: \"7902ea8d-9313-4ce7-8813-9b758308b6e5\") "
Jan 30 14:31:17 crc kubenswrapper[5039]: I0130 14:31:17.980975 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7902ea8d-9313-4ce7-8813-9b758308b6e5-scripts\") pod \"7902ea8d-9313-4ce7-8813-9b758308b6e5\" (UID: \"7902ea8d-9313-4ce7-8813-9b758308b6e5\") "
Jan 30 14:31:17 crc kubenswrapper[5039]: I0130 14:31:17.981153 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7902ea8d-9313-4ce7-8813-9b758308b6e5-credential-keys\") pod \"7902ea8d-9313-4ce7-8813-9b758308b6e5\" (UID: \"7902ea8d-9313-4ce7-8813-9b758308b6e5\") "
Jan 30 14:31:17 crc kubenswrapper[5039]: I0130 14:31:17.981211 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7902ea8d-9313-4ce7-8813-9b758308b6e5-fernet-keys\") pod \"7902ea8d-9313-4ce7-8813-9b758308b6e5\" (UID: \"7902ea8d-9313-4ce7-8813-9b758308b6e5\") "
Jan 30 14:31:17 crc kubenswrapper[5039]: I0130 14:31:17.981283 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4pwt\" (UniqueName: \"kubernetes.io/projected/7902ea8d-9313-4ce7-8813-9b758308b6e5-kube-api-access-b4pwt\") pod \"7902ea8d-9313-4ce7-8813-9b758308b6e5\" (UID: \"7902ea8d-9313-4ce7-8813-9b758308b6e5\") "
Jan 30 14:31:17 crc kubenswrapper[5039]: I0130 14:31:17.981372 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7902ea8d-9313-4ce7-8813-9b758308b6e5-combined-ca-bundle\") pod \"7902ea8d-9313-4ce7-8813-9b758308b6e5\" (UID: \"7902ea8d-9313-4ce7-8813-9b758308b6e5\") "
Jan 30 14:31:17 crc kubenswrapper[5039]: I0130 14:31:17.985913 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7902ea8d-9313-4ce7-8813-9b758308b6e5-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "7902ea8d-9313-4ce7-8813-9b758308b6e5" (UID: "7902ea8d-9313-4ce7-8813-9b758308b6e5"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:31:17 crc kubenswrapper[5039]: I0130 14:31:17.985935 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7902ea8d-9313-4ce7-8813-9b758308b6e5-kube-api-access-b4pwt" (OuterVolumeSpecName: "kube-api-access-b4pwt") pod "7902ea8d-9313-4ce7-8813-9b758308b6e5" (UID: "7902ea8d-9313-4ce7-8813-9b758308b6e5"). InnerVolumeSpecName "kube-api-access-b4pwt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 14:31:17 crc kubenswrapper[5039]: I0130 14:31:17.987027 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7902ea8d-9313-4ce7-8813-9b758308b6e5-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "7902ea8d-9313-4ce7-8813-9b758308b6e5" (UID: "7902ea8d-9313-4ce7-8813-9b758308b6e5"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:31:17 crc kubenswrapper[5039]: I0130 14:31:17.987045 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7902ea8d-9313-4ce7-8813-9b758308b6e5-scripts" (OuterVolumeSpecName: "scripts") pod "7902ea8d-9313-4ce7-8813-9b758308b6e5" (UID: "7902ea8d-9313-4ce7-8813-9b758308b6e5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:31:18 crc kubenswrapper[5039]: I0130 14:31:18.003639 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7902ea8d-9313-4ce7-8813-9b758308b6e5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7902ea8d-9313-4ce7-8813-9b758308b6e5" (UID: "7902ea8d-9313-4ce7-8813-9b758308b6e5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:31:18 crc kubenswrapper[5039]: I0130 14:31:18.006442 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7902ea8d-9313-4ce7-8813-9b758308b6e5-config-data" (OuterVolumeSpecName: "config-data") pod "7902ea8d-9313-4ce7-8813-9b758308b6e5" (UID: "7902ea8d-9313-4ce7-8813-9b758308b6e5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:31:18 crc kubenswrapper[5039]: I0130 14:31:18.082699 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7902ea8d-9313-4ce7-8813-9b758308b6e5-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 14:31:18 crc kubenswrapper[5039]: I0130 14:31:18.082732 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7902ea8d-9313-4ce7-8813-9b758308b6e5-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 14:31:18 crc kubenswrapper[5039]: I0130 14:31:18.082741 5039 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7902ea8d-9313-4ce7-8813-9b758308b6e5-credential-keys\") on node \"crc\" DevicePath \"\""
Jan 30 14:31:18 crc kubenswrapper[5039]: I0130 14:31:18.082750 5039 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7902ea8d-9313-4ce7-8813-9b758308b6e5-fernet-keys\") on node \"crc\" DevicePath \"\""
Jan 30 14:31:18 crc kubenswrapper[5039]: I0130 14:31:18.082762 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4pwt\" (UniqueName: \"kubernetes.io/projected/7902ea8d-9313-4ce7-8813-9b758308b6e5-kube-api-access-b4pwt\") on node \"crc\" DevicePath \"\""
Jan 30 14:31:18 crc kubenswrapper[5039]: I0130 14:31:18.082770 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7902ea8d-9313-4ce7-8813-9b758308b6e5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 14:31:18 crc kubenswrapper[5039]: I0130 14:31:18.102633 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16c7b5ae-068f-4c5b-a918-b89b62def454" path="/var/lib/kubelet/pods/16c7b5ae-068f-4c5b-a918-b89b62def454/volumes"
Jan 30 14:31:18 crc kubenswrapper[5039]: I0130 14:31:18.559957 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rbkmw" event={"ID":"7902ea8d-9313-4ce7-8813-9b758308b6e5","Type":"ContainerDied","Data":"e3337a9577549e173d9a3dcb6a0aef88dae94f1aff7ec364a0aaddcd20813d89"}
Jan 30 14:31:18 crc kubenswrapper[5039]: I0130 14:31:18.560225 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3337a9577549e173d9a3dcb6a0aef88dae94f1aff7ec364a0aaddcd20813d89"
Jan 30 14:31:18 crc kubenswrapper[5039]: I0130 14:31:18.560043 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-rbkmw"
Jan 30 14:31:18 crc kubenswrapper[5039]: I0130 14:31:18.921396 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5f95777885-dfppg"]
Jan 30 14:31:18 crc kubenswrapper[5039]: E0130 14:31:18.921727 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7902ea8d-9313-4ce7-8813-9b758308b6e5" containerName="keystone-bootstrap"
Jan 30 14:31:18 crc kubenswrapper[5039]: I0130 14:31:18.921746 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="7902ea8d-9313-4ce7-8813-9b758308b6e5" containerName="keystone-bootstrap"
Jan 30 14:31:18 crc kubenswrapper[5039]: E0130 14:31:18.921770 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16c7b5ae-068f-4c5b-a918-b89b62def454" containerName="init"
Jan 30 14:31:18 crc kubenswrapper[5039]: I0130 14:31:18.921778 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="16c7b5ae-068f-4c5b-a918-b89b62def454" containerName="init"
Jan 30 14:31:18 crc kubenswrapper[5039]: E0130 14:31:18.921793 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16c7b5ae-068f-4c5b-a918-b89b62def454" containerName="dnsmasq-dns"
Jan 30 14:31:18 crc kubenswrapper[5039]: I0130 14:31:18.921801 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="16c7b5ae-068f-4c5b-a918-b89b62def454" containerName="dnsmasq-dns"
Jan 30 14:31:18 crc kubenswrapper[5039]: I0130 14:31:18.921998 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="7902ea8d-9313-4ce7-8813-9b758308b6e5" containerName="keystone-bootstrap"
Jan 30 14:31:18 crc kubenswrapper[5039]: I0130 14:31:18.922038 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="16c7b5ae-068f-4c5b-a918-b89b62def454" containerName="dnsmasq-dns"
Jan 30 14:31:18 crc kubenswrapper[5039]: I0130 14:31:18.922577 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5f95777885-dfppg"
Jan 30 14:31:18 crc kubenswrapper[5039]: I0130 14:31:18.924522 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 30 14:31:18 crc kubenswrapper[5039]: I0130 14:31:18.924971 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-w6fcf"
Jan 30 14:31:18 crc kubenswrapper[5039]: I0130 14:31:18.925199 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 30 14:31:18 crc kubenswrapper[5039]: I0130 14:31:18.927114 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 30 14:31:18 crc kubenswrapper[5039]: I0130 14:31:18.980190 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5f95777885-dfppg"]
Jan 30 14:31:19 crc kubenswrapper[5039]: I0130 14:31:19.095357 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trf7g\" (UniqueName: \"kubernetes.io/projected/cf6c7271-2040-4fdf-9920-6842976f8ebc-kube-api-access-trf7g\") pod \"keystone-5f95777885-dfppg\" (UID: \"cf6c7271-2040-4fdf-9920-6842976f8ebc\") " pod="openstack/keystone-5f95777885-dfppg"
Jan 30 14:31:19 crc kubenswrapper[5039]: I0130 14:31:19.095423 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf6c7271-2040-4fdf-9920-6842976f8ebc-combined-ca-bundle\") pod \"keystone-5f95777885-dfppg\" (UID: \"cf6c7271-2040-4fdf-9920-6842976f8ebc\") " pod="openstack/keystone-5f95777885-dfppg"
Jan 30 14:31:19 crc kubenswrapper[5039]: I0130 14:31:19.095467 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cf6c7271-2040-4fdf-9920-6842976f8ebc-fernet-keys\") pod \"keystone-5f95777885-dfppg\" (UID: \"cf6c7271-2040-4fdf-9920-6842976f8ebc\") " pod="openstack/keystone-5f95777885-dfppg"
Jan 30 14:31:19 crc kubenswrapper[5039]: I0130 14:31:19.095533 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cf6c7271-2040-4fdf-9920-6842976f8ebc-credential-keys\") pod \"keystone-5f95777885-dfppg\" (UID: \"cf6c7271-2040-4fdf-9920-6842976f8ebc\") " pod="openstack/keystone-5f95777885-dfppg"
Jan 30 14:31:19 crc kubenswrapper[5039]: I0130 14:31:19.095554 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf6c7271-2040-4fdf-9920-6842976f8ebc-config-data\") pod \"keystone-5f95777885-dfppg\" (UID: \"cf6c7271-2040-4fdf-9920-6842976f8ebc\") " pod="openstack/keystone-5f95777885-dfppg"
Jan 30 14:31:19 crc kubenswrapper[5039]: I0130 14:31:19.095570 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf6c7271-2040-4fdf-9920-6842976f8ebc-scripts\") pod \"keystone-5f95777885-dfppg\" (UID: \"cf6c7271-2040-4fdf-9920-6842976f8ebc\") " pod="openstack/keystone-5f95777885-dfppg"
Jan 30 14:31:19 crc kubenswrapper[5039]: I0130 14:31:19.197452 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cf6c7271-2040-4fdf-9920-6842976f8ebc-fernet-keys\") pod \"keystone-5f95777885-dfppg\" (UID: \"cf6c7271-2040-4fdf-9920-6842976f8ebc\") " pod="openstack/keystone-5f95777885-dfppg"
Jan 30 14:31:19 crc kubenswrapper[5039]: I0130 14:31:19.197565 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cf6c7271-2040-4fdf-9920-6842976f8ebc-credential-keys\") pod \"keystone-5f95777885-dfppg\" (UID: \"cf6c7271-2040-4fdf-9920-6842976f8ebc\") " pod="openstack/keystone-5f95777885-dfppg"
Jan 30 14:31:19 crc kubenswrapper[5039]: I0130 14:31:19.197600 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf6c7271-2040-4fdf-9920-6842976f8ebc-config-data\") pod \"keystone-5f95777885-dfppg\" (UID: \"cf6c7271-2040-4fdf-9920-6842976f8ebc\") " pod="openstack/keystone-5f95777885-dfppg"
Jan 30 14:31:19 crc kubenswrapper[5039]: I0130 14:31:19.197621 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf6c7271-2040-4fdf-9920-6842976f8ebc-scripts\") pod \"keystone-5f95777885-dfppg\" (UID: \"cf6c7271-2040-4fdf-9920-6842976f8ebc\") " pod="openstack/keystone-5f95777885-dfppg"
Jan 30 14:31:19 crc kubenswrapper[5039]: I0130 14:31:19.197673 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trf7g\" (UniqueName: \"kubernetes.io/projected/cf6c7271-2040-4fdf-9920-6842976f8ebc-kube-api-access-trf7g\") pod \"keystone-5f95777885-dfppg\" (UID: \"cf6c7271-2040-4fdf-9920-6842976f8ebc\") " pod="openstack/keystone-5f95777885-dfppg"
Jan 30 14:31:19 crc kubenswrapper[5039]: I0130 14:31:19.197700 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf6c7271-2040-4fdf-9920-6842976f8ebc-combined-ca-bundle\") pod \"keystone-5f95777885-dfppg\" (UID: \"cf6c7271-2040-4fdf-9920-6842976f8ebc\") " pod="openstack/keystone-5f95777885-dfppg"
Jan 30 14:31:19 crc kubenswrapper[5039]: I0130 14:31:19.202700 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf6c7271-2040-4fdf-9920-6842976f8ebc-combined-ca-bundle\") pod \"keystone-5f95777885-dfppg\" (UID: \"cf6c7271-2040-4fdf-9920-6842976f8ebc\") " pod="openstack/keystone-5f95777885-dfppg"
Jan 30 14:31:19 crc kubenswrapper[5039]: I0130 14:31:19.203187 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cf6c7271-2040-4fdf-9920-6842976f8ebc-fernet-keys\") pod \"keystone-5f95777885-dfppg\" (UID: \"cf6c7271-2040-4fdf-9920-6842976f8ebc\") " pod="openstack/keystone-5f95777885-dfppg"
Jan 30 14:31:19 crc kubenswrapper[5039]: I0130 14:31:19.204788 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf6c7271-2040-4fdf-9920-6842976f8ebc-scripts\") pod \"keystone-5f95777885-dfppg\" (UID: \"cf6c7271-2040-4fdf-9920-6842976f8ebc\") " pod="openstack/keystone-5f95777885-dfppg"
Jan 30 14:31:19 crc kubenswrapper[5039]: I0130 14:31:19.204921 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cf6c7271-2040-4fdf-9920-6842976f8ebc-credential-keys\") pod \"keystone-5f95777885-dfppg\" (UID: \"cf6c7271-2040-4fdf-9920-6842976f8ebc\") " pod="openstack/keystone-5f95777885-dfppg"
Jan 30 14:31:19 crc kubenswrapper[5039]: I0130 14:31:19.209631 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf6c7271-2040-4fdf-9920-6842976f8ebc-config-data\") pod \"keystone-5f95777885-dfppg\" (UID: \"cf6c7271-2040-4fdf-9920-6842976f8ebc\") " pod="openstack/keystone-5f95777885-dfppg"
Jan 30 14:31:19 crc kubenswrapper[5039]: I0130 14:31:19.223600 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trf7g\" (UniqueName: \"kubernetes.io/projected/cf6c7271-2040-4fdf-9920-6842976f8ebc-kube-api-access-trf7g\") pod \"keystone-5f95777885-dfppg\" (UID: \"cf6c7271-2040-4fdf-9920-6842976f8ebc\") " pod="openstack/keystone-5f95777885-dfppg"
Jan 30 14:31:19 crc kubenswrapper[5039]: I0130 14:31:19.240349 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5f95777885-dfppg"
Jan 30 14:31:19 crc kubenswrapper[5039]: I0130 14:31:19.736920 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5f95777885-dfppg"]
Jan 30 14:31:20 crc kubenswrapper[5039]: I0130 14:31:20.341470 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bw4vw"
Jan 30 14:31:20 crc kubenswrapper[5039]: I0130 14:31:20.341782 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bw4vw"
Jan 30 14:31:20 crc kubenswrapper[5039]: I0130 14:31:20.386002 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bw4vw"
Jan 30 14:31:20 crc kubenswrapper[5039]: I0130 14:31:20.576065 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5f95777885-dfppg" event={"ID":"cf6c7271-2040-4fdf-9920-6842976f8ebc","Type":"ContainerStarted","Data":"1a1b7af9b469ad48e52152d6216cc56b6b10206616a42a8122a3b772d364bc3c"}
Jan 30 14:31:20 crc kubenswrapper[5039]: I0130 14:31:20.576116 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5f95777885-dfppg" event={"ID":"cf6c7271-2040-4fdf-9920-6842976f8ebc","Type":"ContainerStarted","Data":"e5d17940aa2dba31a4da3d90a5e7de35925b8b826543892426707c6773b467ad"}
Jan 30 14:31:20 crc kubenswrapper[5039]: I0130 14:31:20.599336 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5f95777885-dfppg" podStartSLOduration=2.599312805 podStartE2EDuration="2.599312805s" podCreationTimestamp="2026-01-30 14:31:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:31:20.592450789 +0000 UTC m=+5245.253132026" watchObservedRunningTime="2026-01-30 14:31:20.599312805 +0000 UTC m=+5245.259994042"
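The podStartSLOduration arithmetic in the pod_startup_latency_tracker line above checks out: 14:31:20.599312805 (watchObservedRunningTime) minus 14:31:18 (podCreationTimestamp) is exactly 2.599312805s, and the pulling timestamps are the zero time because no image pull was needed. A small Go check of the subtraction:

package main

import (
	"fmt"
	"time"
)

// Reproduces the podStartE2EDuration figure from the tracker line above:
// watchObservedRunningTime minus podCreationTimestamp.
func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2026-01-30 14:31:18 +0000 UTC")
	observed, _ := time.Parse(layout, "2026-01-30 14:31:20.599312805 +0000 UTC")
	fmt.Println(observed.Sub(created)) // 2.599312805s
}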
podUID="2a32b9f3-d031-40f2-926f-d69de45d6d04" containerName="registry-server" containerID="cri-o://98e08e377270f2d4ee4391920a91486005024c756900cdb183dc56960012389c" gracePeriod=2 Jan 30 14:31:23 crc kubenswrapper[5039]: I0130 14:31:23.115276 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bw4vw" Jan 30 14:31:23 crc kubenswrapper[5039]: I0130 14:31:23.172893 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a32b9f3-d031-40f2-926f-d69de45d6d04-utilities\") pod \"2a32b9f3-d031-40f2-926f-d69de45d6d04\" (UID: \"2a32b9f3-d031-40f2-926f-d69de45d6d04\") " Jan 30 14:31:23 crc kubenswrapper[5039]: I0130 14:31:23.172986 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9x47\" (UniqueName: \"kubernetes.io/projected/2a32b9f3-d031-40f2-926f-d69de45d6d04-kube-api-access-b9x47\") pod \"2a32b9f3-d031-40f2-926f-d69de45d6d04\" (UID: \"2a32b9f3-d031-40f2-926f-d69de45d6d04\") " Jan 30 14:31:23 crc kubenswrapper[5039]: I0130 14:31:23.173088 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a32b9f3-d031-40f2-926f-d69de45d6d04-catalog-content\") pod \"2a32b9f3-d031-40f2-926f-d69de45d6d04\" (UID: \"2a32b9f3-d031-40f2-926f-d69de45d6d04\") " Jan 30 14:31:23 crc kubenswrapper[5039]: I0130 14:31:23.174630 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a32b9f3-d031-40f2-926f-d69de45d6d04-utilities" (OuterVolumeSpecName: "utilities") pod "2a32b9f3-d031-40f2-926f-d69de45d6d04" (UID: "2a32b9f3-d031-40f2-926f-d69de45d6d04"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:31:23 crc kubenswrapper[5039]: I0130 14:31:23.178271 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a32b9f3-d031-40f2-926f-d69de45d6d04-kube-api-access-b9x47" (OuterVolumeSpecName: "kube-api-access-b9x47") pod "2a32b9f3-d031-40f2-926f-d69de45d6d04" (UID: "2a32b9f3-d031-40f2-926f-d69de45d6d04"). InnerVolumeSpecName "kube-api-access-b9x47". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:31:23 crc kubenswrapper[5039]: I0130 14:31:23.275811 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a32b9f3-d031-40f2-926f-d69de45d6d04-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 14:31:23 crc kubenswrapper[5039]: I0130 14:31:23.275843 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9x47\" (UniqueName: \"kubernetes.io/projected/2a32b9f3-d031-40f2-926f-d69de45d6d04-kube-api-access-b9x47\") on node \"crc\" DevicePath \"\"" Jan 30 14:31:23 crc kubenswrapper[5039]: I0130 14:31:23.599616 5039 generic.go:334] "Generic (PLEG): container finished" podID="2a32b9f3-d031-40f2-926f-d69de45d6d04" containerID="98e08e377270f2d4ee4391920a91486005024c756900cdb183dc56960012389c" exitCode=0 Jan 30 14:31:23 crc kubenswrapper[5039]: I0130 14:31:23.599708 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bw4vw" event={"ID":"2a32b9f3-d031-40f2-926f-d69de45d6d04","Type":"ContainerDied","Data":"98e08e377270f2d4ee4391920a91486005024c756900cdb183dc56960012389c"} Jan 30 14:31:23 crc kubenswrapper[5039]: I0130 14:31:23.600787 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bw4vw" event={"ID":"2a32b9f3-d031-40f2-926f-d69de45d6d04","Type":"ContainerDied","Data":"93cea11521563f566f9a6a308df5f0161f35d5486b7a86f723059a499b29e77f"} Jan 30 14:31:23 crc kubenswrapper[5039]: I0130 14:31:23.599741 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bw4vw" Jan 30 14:31:23 crc kubenswrapper[5039]: I0130 14:31:23.600853 5039 scope.go:117] "RemoveContainer" containerID="98e08e377270f2d4ee4391920a91486005024c756900cdb183dc56960012389c" Jan 30 14:31:23 crc kubenswrapper[5039]: I0130 14:31:23.621282 5039 scope.go:117] "RemoveContainer" containerID="bdfd5f6995c9a19b9f846b4cff8946389972965d797908af2126a8ea9b17d4b9" Jan 30 14:31:23 crc kubenswrapper[5039]: I0130 14:31:23.638200 5039 scope.go:117] "RemoveContainer" containerID="efd6367d3d556c8a298e9921f32d9076db3525371dfab06965361b4082917372" Jan 30 14:31:23 crc kubenswrapper[5039]: I0130 14:31:23.673321 5039 scope.go:117] "RemoveContainer" containerID="98e08e377270f2d4ee4391920a91486005024c756900cdb183dc56960012389c" Jan 30 14:31:23 crc kubenswrapper[5039]: E0130 14:31:23.674150 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98e08e377270f2d4ee4391920a91486005024c756900cdb183dc56960012389c\": container with ID starting with 98e08e377270f2d4ee4391920a91486005024c756900cdb183dc56960012389c not found: ID does not exist" containerID="98e08e377270f2d4ee4391920a91486005024c756900cdb183dc56960012389c" Jan 30 14:31:23 crc kubenswrapper[5039]: I0130 14:31:23.674217 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98e08e377270f2d4ee4391920a91486005024c756900cdb183dc56960012389c"} err="failed to get container status \"98e08e377270f2d4ee4391920a91486005024c756900cdb183dc56960012389c\": rpc error: code = NotFound desc = could not find container \"98e08e377270f2d4ee4391920a91486005024c756900cdb183dc56960012389c\": container with ID starting with 98e08e377270f2d4ee4391920a91486005024c756900cdb183dc56960012389c not found: ID does not exist" Jan 30 14:31:23 crc kubenswrapper[5039]: I0130 14:31:23.674307 5039 scope.go:117] 
"RemoveContainer" containerID="bdfd5f6995c9a19b9f846b4cff8946389972965d797908af2126a8ea9b17d4b9" Jan 30 14:31:23 crc kubenswrapper[5039]: E0130 14:31:23.674930 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bdfd5f6995c9a19b9f846b4cff8946389972965d797908af2126a8ea9b17d4b9\": container with ID starting with bdfd5f6995c9a19b9f846b4cff8946389972965d797908af2126a8ea9b17d4b9 not found: ID does not exist" containerID="bdfd5f6995c9a19b9f846b4cff8946389972965d797908af2126a8ea9b17d4b9" Jan 30 14:31:23 crc kubenswrapper[5039]: I0130 14:31:23.675108 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdfd5f6995c9a19b9f846b4cff8946389972965d797908af2126a8ea9b17d4b9"} err="failed to get container status \"bdfd5f6995c9a19b9f846b4cff8946389972965d797908af2126a8ea9b17d4b9\": rpc error: code = NotFound desc = could not find container \"bdfd5f6995c9a19b9f846b4cff8946389972965d797908af2126a8ea9b17d4b9\": container with ID starting with bdfd5f6995c9a19b9f846b4cff8946389972965d797908af2126a8ea9b17d4b9 not found: ID does not exist" Jan 30 14:31:23 crc kubenswrapper[5039]: I0130 14:31:23.675204 5039 scope.go:117] "RemoveContainer" containerID="efd6367d3d556c8a298e9921f32d9076db3525371dfab06965361b4082917372" Jan 30 14:31:23 crc kubenswrapper[5039]: E0130 14:31:23.675819 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efd6367d3d556c8a298e9921f32d9076db3525371dfab06965361b4082917372\": container with ID starting with efd6367d3d556c8a298e9921f32d9076db3525371dfab06965361b4082917372 not found: ID does not exist" containerID="efd6367d3d556c8a298e9921f32d9076db3525371dfab06965361b4082917372" Jan 30 14:31:23 crc kubenswrapper[5039]: I0130 14:31:23.675872 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efd6367d3d556c8a298e9921f32d9076db3525371dfab06965361b4082917372"} err="failed to get container status \"efd6367d3d556c8a298e9921f32d9076db3525371dfab06965361b4082917372\": rpc error: code = NotFound desc = could not find container \"efd6367d3d556c8a298e9921f32d9076db3525371dfab06965361b4082917372\": container with ID starting with efd6367d3d556c8a298e9921f32d9076db3525371dfab06965361b4082917372 not found: ID does not exist" Jan 30 14:31:24 crc kubenswrapper[5039]: I0130 14:31:24.866971 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a32b9f3-d031-40f2-926f-d69de45d6d04-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2a32b9f3-d031-40f2-926f-d69de45d6d04" (UID: "2a32b9f3-d031-40f2-926f-d69de45d6d04"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:31:24 crc kubenswrapper[5039]: I0130 14:31:24.904796 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a32b9f3-d031-40f2-926f-d69de45d6d04-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 14:31:25 crc kubenswrapper[5039]: I0130 14:31:25.131580 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bw4vw"] Jan 30 14:31:25 crc kubenswrapper[5039]: I0130 14:31:25.138757 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bw4vw"] Jan 30 14:31:26 crc kubenswrapper[5039]: I0130 14:31:26.105185 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a32b9f3-d031-40f2-926f-d69de45d6d04" path="/var/lib/kubelet/pods/2a32b9f3-d031-40f2-926f-d69de45d6d04/volumes" Jan 30 14:31:37 crc kubenswrapper[5039]: I0130 14:31:37.746235 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:31:37 crc kubenswrapper[5039]: I0130 14:31:37.746879 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:31:37 crc kubenswrapper[5039]: I0130 14:31:37.746925 5039 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" Jan 30 14:31:37 crc kubenswrapper[5039]: I0130 14:31:37.747614 5039 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"33707bf9f6c082f37a2c677d559a1772be55398c970c4d16a90343a477a0fad4"} pod="openshift-machine-config-operator/machine-config-daemon-t2btn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 14:31:37 crc kubenswrapper[5039]: I0130 14:31:37.747673 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" containerID="cri-o://33707bf9f6c082f37a2c677d559a1772be55398c970c4d16a90343a477a0fad4" gracePeriod=600 Jan 30 14:31:37 crc kubenswrapper[5039]: E0130 14:31:37.866985 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:31:38 crc kubenswrapper[5039]: I0130 14:31:38.726201 5039 generic.go:334] "Generic (PLEG): container finished" podID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerID="33707bf9f6c082f37a2c677d559a1772be55398c970c4d16a90343a477a0fad4" exitCode=0 Jan 30 14:31:38 crc kubenswrapper[5039]: I0130 14:31:38.726265 5039 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerDied","Data":"33707bf9f6c082f37a2c677d559a1772be55398c970c4d16a90343a477a0fad4"} Jan 30 14:31:38 crc kubenswrapper[5039]: I0130 14:31:38.726334 5039 scope.go:117] "RemoveContainer" containerID="c5437eece7dcb42be1e96e01d2de63e613f3adc0a14e34c7b2833a3a695f94ca" Jan 30 14:31:38 crc kubenswrapper[5039]: I0130 14:31:38.726978 5039 scope.go:117] "RemoveContainer" containerID="33707bf9f6c082f37a2c677d559a1772be55398c970c4d16a90343a477a0fad4" Jan 30 14:31:38 crc kubenswrapper[5039]: E0130 14:31:38.727277 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:31:50 crc kubenswrapper[5039]: I0130 14:31:50.715182 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-5f95777885-dfppg" Jan 30 14:31:53 crc kubenswrapper[5039]: I0130 14:31:53.093808 5039 scope.go:117] "RemoveContainer" containerID="33707bf9f6c082f37a2c677d559a1772be55398c970c4d16a90343a477a0fad4" Jan 30 14:31:53 crc kubenswrapper[5039]: E0130 14:31:53.094348 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:31:54 crc kubenswrapper[5039]: I0130 14:31:54.874356 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 30 14:31:54 crc kubenswrapper[5039]: E0130 14:31:54.876338 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a32b9f3-d031-40f2-926f-d69de45d6d04" containerName="registry-server" Jan 30 14:31:54 crc kubenswrapper[5039]: I0130 14:31:54.879071 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a32b9f3-d031-40f2-926f-d69de45d6d04" containerName="registry-server" Jan 30 14:31:54 crc kubenswrapper[5039]: E0130 14:31:54.879188 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a32b9f3-d031-40f2-926f-d69de45d6d04" containerName="extract-content" Jan 30 14:31:54 crc kubenswrapper[5039]: I0130 14:31:54.879246 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a32b9f3-d031-40f2-926f-d69de45d6d04" containerName="extract-content" Jan 30 14:31:54 crc kubenswrapper[5039]: E0130 14:31:54.879348 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a32b9f3-d031-40f2-926f-d69de45d6d04" containerName="extract-utilities" Jan 30 14:31:54 crc kubenswrapper[5039]: I0130 14:31:54.879415 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a32b9f3-d031-40f2-926f-d69de45d6d04" containerName="extract-utilities" Jan 30 14:31:54 crc kubenswrapper[5039]: I0130 14:31:54.879831 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a32b9f3-d031-40f2-926f-d69de45d6d04" containerName="registry-server" Jan 30 14:31:54 crc kubenswrapper[5039]: I0130 14:31:54.880708 5039 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 30 14:31:54 crc kubenswrapper[5039]: W0130 14:31:54.889812 5039 reflector.go:561] object-"openstack"/"openstack-config-secret": failed to list *v1.Secret: secrets "openstack-config-secret" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Jan 30 14:31:54 crc kubenswrapper[5039]: E0130 14:31:54.890703 5039 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openstack-config-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openstack-config-secret\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 30 14:31:54 crc kubenswrapper[5039]: W0130 14:31:54.890514 5039 reflector.go:561] object-"openstack"/"openstackclient-openstackclient-dockercfg-cdw7p": failed to list *v1.Secret: secrets "openstackclient-openstackclient-dockercfg-cdw7p" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Jan 30 14:31:54 crc kubenswrapper[5039]: E0130 14:31:54.891199 5039 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openstackclient-openstackclient-dockercfg-cdw7p\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openstackclient-openstackclient-dockercfg-cdw7p\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 30 14:31:54 crc kubenswrapper[5039]: W0130 14:31:54.890582 5039 reflector.go:561] object-"openstack"/"openstack-config": failed to list *v1.ConfigMap: configmaps "openstack-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Jan 30 14:31:54 crc kubenswrapper[5039]: E0130 14:31:54.891343 5039 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openstack-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openstack-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 30 14:31:54 crc kubenswrapper[5039]: I0130 14:31:54.926827 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 30 14:31:54 crc kubenswrapper[5039]: I0130 14:31:54.947286 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 30 14:31:54 crc kubenswrapper[5039]: E0130 14:31:54.948153 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-kj57s openstack-config openstack-config-secret], unattached volumes=[], failed to process volumes=[kube-api-access-kj57s openstack-config openstack-config-secret]: context canceled" pod="openstack/openstackclient" podUID="8879cff9-d62e-49a6-9013-dab19e60a75b" Jan 30 14:31:54 crc kubenswrapper[5039]: I0130 14:31:54.967859 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 30 14:31:54 crc kubenswrapper[5039]: I0130 
Jan 30 14:31:54 crc kubenswrapper[5039]: I0130 14:31:54.975130 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Jan 30 14:31:54 crc kubenswrapper[5039]: I0130 14:31:54.976498 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 30 14:31:54 crc kubenswrapper[5039]: I0130 14:31:54.981701 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Jan 30 14:31:54 crc kubenswrapper[5039]: I0130 14:31:54.999109 5039 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="8879cff9-d62e-49a6-9013-dab19e60a75b" podUID="5f9710bf-722a-4504-b0c6-3ea395807a75"
Jan 30 14:31:55 crc kubenswrapper[5039]: I0130 14:31:55.009260 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8879cff9-d62e-49a6-9013-dab19e60a75b-openstack-config\") pod \"openstackclient\" (UID: \"8879cff9-d62e-49a6-9013-dab19e60a75b\") " pod="openstack/openstackclient"
Jan 30 14:31:55 crc kubenswrapper[5039]: I0130 14:31:55.009384 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8879cff9-d62e-49a6-9013-dab19e60a75b-openstack-config-secret\") pod \"openstackclient\" (UID: \"8879cff9-d62e-49a6-9013-dab19e60a75b\") " pod="openstack/openstackclient"
Jan 30 14:31:55 crc kubenswrapper[5039]: I0130 14:31:55.009407 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj57s\" (UniqueName: \"kubernetes.io/projected/8879cff9-d62e-49a6-9013-dab19e60a75b-kube-api-access-kj57s\") pod \"openstackclient\" (UID: \"8879cff9-d62e-49a6-9013-dab19e60a75b\") " pod="openstack/openstackclient"
Jan 30 14:31:55 crc kubenswrapper[5039]: I0130 14:31:55.110825 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5f9710bf-722a-4504-b0c6-3ea395807a75-openstack-config\") pod \"openstackclient\" (UID: \"5f9710bf-722a-4504-b0c6-3ea395807a75\") " pod="openstack/openstackclient"
Jan 30 14:31:55 crc kubenswrapper[5039]: I0130 14:31:55.111122 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8879cff9-d62e-49a6-9013-dab19e60a75b-openstack-config-secret\") pod \"openstackclient\" (UID: \"8879cff9-d62e-49a6-9013-dab19e60a75b\") " pod="openstack/openstackclient"
Jan 30 14:31:55 crc kubenswrapper[5039]: I0130 14:31:55.111198 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kj57s\" (UniqueName: \"kubernetes.io/projected/8879cff9-d62e-49a6-9013-dab19e60a75b-kube-api-access-kj57s\") pod \"openstackclient\" (UID: \"8879cff9-d62e-49a6-9013-dab19e60a75b\") " pod="openstack/openstackclient"
Jan 30 14:31:55 crc kubenswrapper[5039]: I0130 14:31:55.111308 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8879cff9-d62e-49a6-9013-dab19e60a75b-openstack-config\") pod \"openstackclient\" (UID: \"8879cff9-d62e-49a6-9013-dab19e60a75b\") " pod="openstack/openstackclient"
Jan 30 14:31:55 crc kubenswrapper[5039]: I0130 14:31:55.111438 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcqrl\" (UniqueName: \"kubernetes.io/projected/5f9710bf-722a-4504-b0c6-3ea395807a75-kube-api-access-kcqrl\") pod \"openstackclient\" (UID: \"5f9710bf-722a-4504-b0c6-3ea395807a75\") " pod="openstack/openstackclient"
Jan 30 14:31:55 crc kubenswrapper[5039]: I0130 14:31:55.111743 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5f9710bf-722a-4504-b0c6-3ea395807a75-openstack-config-secret\") pod \"openstackclient\" (UID: \"5f9710bf-722a-4504-b0c6-3ea395807a75\") " pod="openstack/openstackclient"
Jan 30 14:31:55 crc kubenswrapper[5039]: E0130 14:31:55.113758 5039 projected.go:194] Error preparing data for projected volume kube-api-access-kj57s for pod openstack/openstackclient: failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (8879cff9-d62e-49a6-9013-dab19e60a75b) does not match the UID in record. The object might have been deleted and then recreated
Jan 30 14:31:55 crc kubenswrapper[5039]: E0130 14:31:55.113839 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8879cff9-d62e-49a6-9013-dab19e60a75b-kube-api-access-kj57s podName:8879cff9-d62e-49a6-9013-dab19e60a75b nodeName:}" failed. No retries permitted until 2026-01-30 14:31:55.613817167 +0000 UTC m=+5280.274498464 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-kj57s" (UniqueName: "kubernetes.io/projected/8879cff9-d62e-49a6-9013-dab19e60a75b-kube-api-access-kj57s") pod "openstackclient" (UID: "8879cff9-d62e-49a6-9013-dab19e60a75b") : failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (8879cff9-d62e-49a6-9013-dab19e60a75b) does not match the UID in record. The object might have been deleted and then recreated
Jan 30 14:31:55 crc kubenswrapper[5039]: I0130 14:31:55.213695 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcqrl\" (UniqueName: \"kubernetes.io/projected/5f9710bf-722a-4504-b0c6-3ea395807a75-kube-api-access-kcqrl\") pod \"openstackclient\" (UID: \"5f9710bf-722a-4504-b0c6-3ea395807a75\") " pod="openstack/openstackclient"
Jan 30 14:31:55 crc kubenswrapper[5039]: I0130 14:31:55.214052 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5f9710bf-722a-4504-b0c6-3ea395807a75-openstack-config-secret\") pod \"openstackclient\" (UID: \"5f9710bf-722a-4504-b0c6-3ea395807a75\") " pod="openstack/openstackclient"
Jan 30 14:31:55 crc kubenswrapper[5039]: I0130 14:31:55.214184 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5f9710bf-722a-4504-b0c6-3ea395807a75-openstack-config\") pod \"openstackclient\" (UID: \"5f9710bf-722a-4504-b0c6-3ea395807a75\") " pod="openstack/openstackclient"
Jan 30 14:31:55 crc kubenswrapper[5039]: I0130 14:31:55.235804 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcqrl\" (UniqueName: \"kubernetes.io/projected/5f9710bf-722a-4504-b0c6-3ea395807a75-kube-api-access-kcqrl\") pod \"openstackclient\" (UID: \"5f9710bf-722a-4504-b0c6-3ea395807a75\") " pod="openstack/openstackclient"
Jan 30 14:31:55 crc kubenswrapper[5039]: I0130 14:31:55.620975 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kj57s\" (UniqueName: \"kubernetes.io/projected/8879cff9-d62e-49a6-9013-dab19e60a75b-kube-api-access-kj57s\") pod \"openstackclient\" (UID: \"8879cff9-d62e-49a6-9013-dab19e60a75b\") " pod="openstack/openstackclient"
Jan 30 14:31:55 crc kubenswrapper[5039]: E0130 14:31:55.623045 5039 projected.go:194] Error preparing data for projected volume kube-api-access-kj57s for pod openstack/openstackclient: failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (8879cff9-d62e-49a6-9013-dab19e60a75b) does not match the UID in record. The object might have been deleted and then recreated
Jan 30 14:31:55 crc kubenswrapper[5039]: E0130 14:31:55.623231 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8879cff9-d62e-49a6-9013-dab19e60a75b-kube-api-access-kj57s podName:8879cff9-d62e-49a6-9013-dab19e60a75b nodeName:}" failed. No retries permitted until 2026-01-30 14:31:56.623208644 +0000 UTC m=+5281.283889951 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-kj57s" (UniqueName: "kubernetes.io/projected/8879cff9-d62e-49a6-9013-dab19e60a75b-kube-api-access-kj57s") pod "openstackclient" (UID: "8879cff9-d62e-49a6-9013-dab19e60a75b") : failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (8879cff9-d62e-49a6-9013-dab19e60a75b) does not match the UID in record. The object might have been deleted and then recreated
Jan 30 14:31:55 crc kubenswrapper[5039]: I0130 14:31:55.711294 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-cdw7p"
Jan 30 14:31:55 crc kubenswrapper[5039]: I0130 14:31:55.858877 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 30 14:31:55 crc kubenswrapper[5039]: I0130 14:31:55.864653 5039 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="8879cff9-d62e-49a6-9013-dab19e60a75b" podUID="5f9710bf-722a-4504-b0c6-3ea395807a75"
Jan 30 14:31:55 crc kubenswrapper[5039]: I0130 14:31:55.868967 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 30 14:31:55 crc kubenswrapper[5039]: I0130 14:31:55.871735 5039 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="8879cff9-d62e-49a6-9013-dab19e60a75b" podUID="5f9710bf-722a-4504-b0c6-3ea395807a75"
Jan 30 14:31:55 crc kubenswrapper[5039]: I0130 14:31:55.925770 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kj57s\" (UniqueName: \"kubernetes.io/projected/8879cff9-d62e-49a6-9013-dab19e60a75b-kube-api-access-kj57s\") on node \"crc\" DevicePath \"\""
Jan 30 14:31:56 crc kubenswrapper[5039]: I0130 14:31:56.103087 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8879cff9-d62e-49a6-9013-dab19e60a75b" path="/var/lib/kubelet/pods/8879cff9-d62e-49a6-9013-dab19e60a75b/volumes"
Jan 30 14:31:56 crc kubenswrapper[5039]: E0130 14:31:56.111637 5039 configmap.go:193] Couldn't get configMap openstack/openstack-config: failed to sync configmap cache: timed out waiting for the condition
Jan 30 14:31:56 crc kubenswrapper[5039]: E0130 14:31:56.111651 5039 secret.go:188] Couldn't get secret openstack/openstack-config-secret: failed to sync secret cache: timed out waiting for the condition
Jan 30 14:31:56 crc kubenswrapper[5039]: E0130 14:31:56.111730 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8879cff9-d62e-49a6-9013-dab19e60a75b-openstack-config podName:8879cff9-d62e-49a6-9013-dab19e60a75b nodeName:}" failed. No retries permitted until 2026-01-30 14:31:56.611713554 +0000 UTC m=+5281.272394781 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openstack-config" (UniqueName: "kubernetes.io/configmap/8879cff9-d62e-49a6-9013-dab19e60a75b-openstack-config") pod "openstackclient" (UID: "8879cff9-d62e-49a6-9013-dab19e60a75b") : failed to sync configmap cache: timed out waiting for the condition
Jan 30 14:31:56 crc kubenswrapper[5039]: E0130 14:31:56.111745 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8879cff9-d62e-49a6-9013-dab19e60a75b-openstack-config-secret podName:8879cff9-d62e-49a6-9013-dab19e60a75b nodeName:}" failed. No retries permitted until 2026-01-30 14:31:56.611738955 +0000 UTC m=+5281.272420182 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openstack-config-secret" (UniqueName: "kubernetes.io/secret/8879cff9-d62e-49a6-9013-dab19e60a75b-openstack-config-secret") pod "openstackclient" (UID: "8879cff9-d62e-49a6-9013-dab19e60a75b") : failed to sync secret cache: timed out waiting for the condition
Jan 30 14:31:56 crc kubenswrapper[5039]: I0130 14:31:56.131185 5039 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8879cff9-d62e-49a6-9013-dab19e60a75b-openstack-config-secret\") on node \"crc\" DevicePath \"\""
Jan 30 14:31:56 crc kubenswrapper[5039]: I0130 14:31:56.131223 5039 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8879cff9-d62e-49a6-9013-dab19e60a75b-openstack-config\") on node \"crc\" DevicePath \"\""
Jan 30 14:31:56 crc kubenswrapper[5039]: E0130 14:31:56.214830 5039 secret.go:188] Couldn't get secret openstack/openstack-config-secret: failed to sync secret cache: timed out waiting for the condition
Jan 30 14:31:56 crc kubenswrapper[5039]: E0130 14:31:56.215193 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5f9710bf-722a-4504-b0c6-3ea395807a75-openstack-config-secret podName:5f9710bf-722a-4504-b0c6-3ea395807a75 nodeName:}" failed. No retries permitted until 2026-01-30 14:31:56.71516796 +0000 UTC m=+5281.375849197 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openstack-config-secret" (UniqueName: "kubernetes.io/secret/5f9710bf-722a-4504-b0c6-3ea395807a75-openstack-config-secret") pod "openstackclient" (UID: "5f9710bf-722a-4504-b0c6-3ea395807a75") : failed to sync secret cache: timed out waiting for the condition
Jan 30 14:31:56 crc kubenswrapper[5039]: E0130 14:31:56.214861 5039 configmap.go:193] Couldn't get configMap openstack/openstack-config: failed to sync configmap cache: timed out waiting for the condition
Jan 30 14:31:56 crc kubenswrapper[5039]: E0130 14:31:56.215419 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5f9710bf-722a-4504-b0c6-3ea395807a75-openstack-config podName:5f9710bf-722a-4504-b0c6-3ea395807a75 nodeName:}" failed. No retries permitted until 2026-01-30 14:31:56.715405136 +0000 UTC m=+5281.376086373 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openstack-config" (UniqueName: "kubernetes.io/configmap/5f9710bf-722a-4504-b0c6-3ea395807a75-openstack-config") pod "openstackclient" (UID: "5f9710bf-722a-4504-b0c6-3ea395807a75") : failed to sync configmap cache: timed out waiting for the condition
Jan 30 14:31:56 crc kubenswrapper[5039]: I0130 14:31:56.227710 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret"
Jan 30 14:31:56 crc kubenswrapper[5039]: I0130 14:31:56.381284 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config"
Jan 30 14:31:56 crc kubenswrapper[5039]: I0130 14:31:56.741623 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5f9710bf-722a-4504-b0c6-3ea395807a75-openstack-config-secret\") pod \"openstackclient\" (UID: \"5f9710bf-722a-4504-b0c6-3ea395807a75\") " pod="openstack/openstackclient"
Jan 30 14:31:56 crc kubenswrapper[5039]: I0130 14:31:56.741721 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5f9710bf-722a-4504-b0c6-3ea395807a75-openstack-config\") pod \"openstackclient\" (UID: \"5f9710bf-722a-4504-b0c6-3ea395807a75\") " pod="openstack/openstackclient"
Jan 30 14:31:56 crc kubenswrapper[5039]: I0130 14:31:56.742841 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5f9710bf-722a-4504-b0c6-3ea395807a75-openstack-config\") pod \"openstackclient\" (UID: \"5f9710bf-722a-4504-b0c6-3ea395807a75\") " pod="openstack/openstackclient"
Jan 30 14:31:56 crc kubenswrapper[5039]: I0130 14:31:56.748940 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5f9710bf-722a-4504-b0c6-3ea395807a75-openstack-config-secret\") pod \"openstackclient\" (UID: \"5f9710bf-722a-4504-b0c6-3ea395807a75\") " pod="openstack/openstackclient"
Jan 30 14:31:56 crc kubenswrapper[5039]: I0130 14:31:56.794422 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-cdw7p"
Jan 30 14:31:56 crc kubenswrapper[5039]: I0130 14:31:56.803198 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Need to start a new one" pod="openstack/openstackclient" Jan 30 14:31:56 crc kubenswrapper[5039]: I0130 14:31:56.871454 5039 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="8879cff9-d62e-49a6-9013-dab19e60a75b" podUID="5f9710bf-722a-4504-b0c6-3ea395807a75" Jan 30 14:31:56 crc kubenswrapper[5039]: I0130 14:31:56.909529 5039 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="8879cff9-d62e-49a6-9013-dab19e60a75b" podUID="5f9710bf-722a-4504-b0c6-3ea395807a75" Jan 30 14:31:57 crc kubenswrapper[5039]: I0130 14:31:57.234900 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 30 14:31:57 crc kubenswrapper[5039]: W0130 14:31:57.246463 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f9710bf_722a_4504_b0c6_3ea395807a75.slice/crio-ebac16d2c93ead718fb106f21af4d33724d271f6b830591cd7fbfe9df3c61dc3 WatchSource:0}: Error finding container ebac16d2c93ead718fb106f21af4d33724d271f6b830591cd7fbfe9df3c61dc3: Status 404 returned error can't find the container with id ebac16d2c93ead718fb106f21af4d33724d271f6b830591cd7fbfe9df3c61dc3 Jan 30 14:31:57 crc kubenswrapper[5039]: I0130 14:31:57.877584 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"5f9710bf-722a-4504-b0c6-3ea395807a75","Type":"ContainerStarted","Data":"93883c64bd99289de23a4304713ed6a5fd46067c17458c6439e84afcc9066502"} Jan 30 14:31:57 crc kubenswrapper[5039]: I0130 14:31:57.877645 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"5f9710bf-722a-4504-b0c6-3ea395807a75","Type":"ContainerStarted","Data":"ebac16d2c93ead718fb106f21af4d33724d271f6b830591cd7fbfe9df3c61dc3"} Jan 30 14:31:57 crc kubenswrapper[5039]: I0130 14:31:57.897136 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.897114539 podStartE2EDuration="3.897114539s" podCreationTimestamp="2026-01-30 14:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:31:57.895393953 +0000 UTC m=+5282.556075230" watchObservedRunningTime="2026-01-30 14:31:57.897114539 +0000 UTC m=+5282.557795776" Jan 30 14:32:08 crc kubenswrapper[5039]: I0130 14:32:08.094052 5039 scope.go:117] "RemoveContainer" containerID="33707bf9f6c082f37a2c677d559a1772be55398c970c4d16a90343a477a0fad4" Jan 30 14:32:08 crc kubenswrapper[5039]: E0130 14:32:08.094879 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:32:23 crc kubenswrapper[5039]: I0130 14:32:23.093715 5039 scope.go:117] "RemoveContainer" containerID="33707bf9f6c082f37a2c677d559a1772be55398c970c4d16a90343a477a0fad4" Jan 30 14:32:23 crc kubenswrapper[5039]: E0130 14:32:23.094501 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:32:38 crc kubenswrapper[5039]: I0130 14:32:38.094192 5039 scope.go:117] "RemoveContainer" containerID="33707bf9f6c082f37a2c677d559a1772be55398c970c4d16a90343a477a0fad4" Jan 30 14:32:38 crc kubenswrapper[5039]: E0130 14:32:38.094987 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:32:49 crc kubenswrapper[5039]: I0130 14:32:49.093671 5039 scope.go:117] "RemoveContainer" containerID="33707bf9f6c082f37a2c677d559a1772be55398c970c4d16a90343a477a0fad4" Jan 30 14:32:49 crc kubenswrapper[5039]: E0130 14:32:49.094774 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:33:04 crc kubenswrapper[5039]: I0130 14:33:04.094863 5039 scope.go:117] "RemoveContainer" containerID="33707bf9f6c082f37a2c677d559a1772be55398c970c4d16a90343a477a0fad4" Jan 30 14:33:04 crc kubenswrapper[5039]: E0130 14:33:04.095792 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:33:19 crc kubenswrapper[5039]: I0130 14:33:19.094236 5039 scope.go:117] "RemoveContainer" containerID="33707bf9f6c082f37a2c677d559a1772be55398c970c4d16a90343a477a0fad4" Jan 30 14:33:19 crc kubenswrapper[5039]: E0130 14:33:19.095308 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:33:32 crc kubenswrapper[5039]: I0130 14:33:32.483356 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-c014-account-create-update-px7xb"] Jan 30 14:33:32 crc kubenswrapper[5039]: I0130 14:33:32.485281 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-c014-account-create-update-px7xb" Jan 30 14:33:32 crc kubenswrapper[5039]: I0130 14:33:32.491539 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 30 14:33:32 crc kubenswrapper[5039]: I0130 14:33:32.493026 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-75gqg"] Jan 30 14:33:32 crc kubenswrapper[5039]: I0130 14:33:32.494307 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-75gqg" Jan 30 14:33:32 crc kubenswrapper[5039]: I0130 14:33:32.499808 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-75gqg"] Jan 30 14:33:32 crc kubenswrapper[5039]: I0130 14:33:32.505868 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-c014-account-create-update-px7xb"] Jan 30 14:33:32 crc kubenswrapper[5039]: I0130 14:33:32.654054 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c11ff9c9-2927-49d7-a52b-995f63c75e72-operator-scripts\") pod \"barbican-db-create-75gqg\" (UID: \"c11ff9c9-2927-49d7-a52b-995f63c75e72\") " pod="openstack/barbican-db-create-75gqg" Jan 30 14:33:32 crc kubenswrapper[5039]: I0130 14:33:32.654155 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f140476b-d9d4-4ca6-bac1-d4f91a64c18b-operator-scripts\") pod \"barbican-c014-account-create-update-px7xb\" (UID: \"f140476b-d9d4-4ca6-bac1-d4f91a64c18b\") " pod="openstack/barbican-c014-account-create-update-px7xb" Jan 30 14:33:32 crc kubenswrapper[5039]: I0130 14:33:32.654196 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwxpb\" (UniqueName: \"kubernetes.io/projected/c11ff9c9-2927-49d7-a52b-995f63c75e72-kube-api-access-mwxpb\") pod \"barbican-db-create-75gqg\" (UID: \"c11ff9c9-2927-49d7-a52b-995f63c75e72\") " pod="openstack/barbican-db-create-75gqg" Jan 30 14:33:32 crc kubenswrapper[5039]: I0130 14:33:32.654234 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wshmn\" (UniqueName: \"kubernetes.io/projected/f140476b-d9d4-4ca6-bac1-d4f91a64c18b-kube-api-access-wshmn\") pod \"barbican-c014-account-create-update-px7xb\" (UID: \"f140476b-d9d4-4ca6-bac1-d4f91a64c18b\") " pod="openstack/barbican-c014-account-create-update-px7xb" Jan 30 14:33:32 crc kubenswrapper[5039]: I0130 14:33:32.755636 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c11ff9c9-2927-49d7-a52b-995f63c75e72-operator-scripts\") pod \"barbican-db-create-75gqg\" (UID: \"c11ff9c9-2927-49d7-a52b-995f63c75e72\") " pod="openstack/barbican-db-create-75gqg" Jan 30 14:33:32 crc kubenswrapper[5039]: I0130 14:33:32.755746 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f140476b-d9d4-4ca6-bac1-d4f91a64c18b-operator-scripts\") pod \"barbican-c014-account-create-update-px7xb\" (UID: \"f140476b-d9d4-4ca6-bac1-d4f91a64c18b\") " pod="openstack/barbican-c014-account-create-update-px7xb" Jan 30 14:33:32 crc kubenswrapper[5039]: I0130 14:33:32.755795 5039 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-mwxpb\" (UniqueName: \"kubernetes.io/projected/c11ff9c9-2927-49d7-a52b-995f63c75e72-kube-api-access-mwxpb\") pod \"barbican-db-create-75gqg\" (UID: \"c11ff9c9-2927-49d7-a52b-995f63c75e72\") " pod="openstack/barbican-db-create-75gqg" Jan 30 14:33:32 crc kubenswrapper[5039]: I0130 14:33:32.755842 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wshmn\" (UniqueName: \"kubernetes.io/projected/f140476b-d9d4-4ca6-bac1-d4f91a64c18b-kube-api-access-wshmn\") pod \"barbican-c014-account-create-update-px7xb\" (UID: \"f140476b-d9d4-4ca6-bac1-d4f91a64c18b\") " pod="openstack/barbican-c014-account-create-update-px7xb" Jan 30 14:33:32 crc kubenswrapper[5039]: I0130 14:33:32.756863 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c11ff9c9-2927-49d7-a52b-995f63c75e72-operator-scripts\") pod \"barbican-db-create-75gqg\" (UID: \"c11ff9c9-2927-49d7-a52b-995f63c75e72\") " pod="openstack/barbican-db-create-75gqg" Jan 30 14:33:32 crc kubenswrapper[5039]: I0130 14:33:32.757032 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f140476b-d9d4-4ca6-bac1-d4f91a64c18b-operator-scripts\") pod \"barbican-c014-account-create-update-px7xb\" (UID: \"f140476b-d9d4-4ca6-bac1-d4f91a64c18b\") " pod="openstack/barbican-c014-account-create-update-px7xb" Jan 30 14:33:32 crc kubenswrapper[5039]: I0130 14:33:32.780998 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wshmn\" (UniqueName: \"kubernetes.io/projected/f140476b-d9d4-4ca6-bac1-d4f91a64c18b-kube-api-access-wshmn\") pod \"barbican-c014-account-create-update-px7xb\" (UID: \"f140476b-d9d4-4ca6-bac1-d4f91a64c18b\") " pod="openstack/barbican-c014-account-create-update-px7xb" Jan 30 14:33:32 crc kubenswrapper[5039]: I0130 14:33:32.789580 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwxpb\" (UniqueName: \"kubernetes.io/projected/c11ff9c9-2927-49d7-a52b-995f63c75e72-kube-api-access-mwxpb\") pod \"barbican-db-create-75gqg\" (UID: \"c11ff9c9-2927-49d7-a52b-995f63c75e72\") " pod="openstack/barbican-db-create-75gqg" Jan 30 14:33:32 crc kubenswrapper[5039]: I0130 14:33:32.857496 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-c014-account-create-update-px7xb" Jan 30 14:33:32 crc kubenswrapper[5039]: I0130 14:33:32.868930 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-75gqg" Jan 30 14:33:33 crc kubenswrapper[5039]: I0130 14:33:33.304908 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-75gqg"] Jan 30 14:33:33 crc kubenswrapper[5039]: I0130 14:33:33.353818 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-c014-account-create-update-px7xb"] Jan 30 14:33:33 crc kubenswrapper[5039]: W0130 14:33:33.368180 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf140476b_d9d4_4ca6_bac1_d4f91a64c18b.slice/crio-95d6e554c1393615a50ba4255543a5ba394b5e64f7aadcca1c933d46d9d22d82 WatchSource:0}: Error finding container 95d6e554c1393615a50ba4255543a5ba394b5e64f7aadcca1c933d46d9d22d82: Status 404 returned error can't find the container with id 95d6e554c1393615a50ba4255543a5ba394b5e64f7aadcca1c933d46d9d22d82 Jan 30 14:33:33 crc kubenswrapper[5039]: I0130 14:33:33.948941 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-75gqg" event={"ID":"c11ff9c9-2927-49d7-a52b-995f63c75e72","Type":"ContainerStarted","Data":"c2ccba0a66b5a5bbad03b7506616d9b9f060d2c7962af7f0f6e3ef55b9772113"} Jan 30 14:33:33 crc kubenswrapper[5039]: I0130 14:33:33.950026 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-c014-account-create-update-px7xb" event={"ID":"f140476b-d9d4-4ca6-bac1-d4f91a64c18b","Type":"ContainerStarted","Data":"95d6e554c1393615a50ba4255543a5ba394b5e64f7aadcca1c933d46d9d22d82"} Jan 30 14:33:34 crc kubenswrapper[5039]: I0130 14:33:34.093800 5039 scope.go:117] "RemoveContainer" containerID="33707bf9f6c082f37a2c677d559a1772be55398c970c4d16a90343a477a0fad4" Jan 30 14:33:34 crc kubenswrapper[5039]: E0130 14:33:34.094119 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:33:38 crc kubenswrapper[5039]: I0130 14:33:38.985582 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-c014-account-create-update-px7xb" event={"ID":"f140476b-d9d4-4ca6-bac1-d4f91a64c18b","Type":"ContainerStarted","Data":"d2ae020157c6d76d091694156bd9e3731918a6526fde77dcc110792ce89d7146"} Jan 30 14:33:38 crc kubenswrapper[5039]: I0130 14:33:38.987692 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-75gqg" event={"ID":"c11ff9c9-2927-49d7-a52b-995f63c75e72","Type":"ContainerStarted","Data":"b2f95c5353afb0887ba5fd142de58ab88a98901e563ec6f4ecd99afa5c18a28c"} Jan 30 14:33:39 crc kubenswrapper[5039]: I0130 14:33:39.002801 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-c014-account-create-update-px7xb" podStartSLOduration=7.002779363 podStartE2EDuration="7.002779363s" podCreationTimestamp="2026-01-30 14:33:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:33:38.999972817 +0000 UTC m=+5383.660654064" watchObservedRunningTime="2026-01-30 14:33:39.002779363 +0000 UTC m=+5383.663460610" Jan 30 14:33:39 crc kubenswrapper[5039]: I0130 14:33:39.022245 5039 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-75gqg" podStartSLOduration=7.02222698 podStartE2EDuration="7.02222698s" podCreationTimestamp="2026-01-30 14:33:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:33:39.016257539 +0000 UTC m=+5383.676938766" watchObservedRunningTime="2026-01-30 14:33:39.02222698 +0000 UTC m=+5383.682908207" Jan 30 14:33:39 crc kubenswrapper[5039]: I0130 14:33:39.995859 5039 generic.go:334] "Generic (PLEG): container finished" podID="c11ff9c9-2927-49d7-a52b-995f63c75e72" containerID="b2f95c5353afb0887ba5fd142de58ab88a98901e563ec6f4ecd99afa5c18a28c" exitCode=0 Jan 30 14:33:39 crc kubenswrapper[5039]: I0130 14:33:39.995926 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-75gqg" event={"ID":"c11ff9c9-2927-49d7-a52b-995f63c75e72","Type":"ContainerDied","Data":"b2f95c5353afb0887ba5fd142de58ab88a98901e563ec6f4ecd99afa5c18a28c"} Jan 30 14:33:39 crc kubenswrapper[5039]: I0130 14:33:39.997423 5039 generic.go:334] "Generic (PLEG): container finished" podID="f140476b-d9d4-4ca6-bac1-d4f91a64c18b" containerID="d2ae020157c6d76d091694156bd9e3731918a6526fde77dcc110792ce89d7146" exitCode=0 Jan 30 14:33:39 crc kubenswrapper[5039]: I0130 14:33:39.997461 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-c014-account-create-update-px7xb" event={"ID":"f140476b-d9d4-4ca6-bac1-d4f91a64c18b","Type":"ContainerDied","Data":"d2ae020157c6d76d091694156bd9e3731918a6526fde77dcc110792ce89d7146"} Jan 30 14:33:41 crc kubenswrapper[5039]: I0130 14:33:41.394262 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-c014-account-create-update-px7xb" Jan 30 14:33:41 crc kubenswrapper[5039]: I0130 14:33:41.401475 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-75gqg" Jan 30 14:33:41 crc kubenswrapper[5039]: I0130 14:33:41.581272 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f140476b-d9d4-4ca6-bac1-d4f91a64c18b-operator-scripts\") pod \"f140476b-d9d4-4ca6-bac1-d4f91a64c18b\" (UID: \"f140476b-d9d4-4ca6-bac1-d4f91a64c18b\") " Jan 30 14:33:41 crc kubenswrapper[5039]: I0130 14:33:41.581366 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c11ff9c9-2927-49d7-a52b-995f63c75e72-operator-scripts\") pod \"c11ff9c9-2927-49d7-a52b-995f63c75e72\" (UID: \"c11ff9c9-2927-49d7-a52b-995f63c75e72\") " Jan 30 14:33:41 crc kubenswrapper[5039]: I0130 14:33:41.581405 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwxpb\" (UniqueName: \"kubernetes.io/projected/c11ff9c9-2927-49d7-a52b-995f63c75e72-kube-api-access-mwxpb\") pod \"c11ff9c9-2927-49d7-a52b-995f63c75e72\" (UID: \"c11ff9c9-2927-49d7-a52b-995f63c75e72\") " Jan 30 14:33:41 crc kubenswrapper[5039]: I0130 14:33:41.581479 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wshmn\" (UniqueName: \"kubernetes.io/projected/f140476b-d9d4-4ca6-bac1-d4f91a64c18b-kube-api-access-wshmn\") pod \"f140476b-d9d4-4ca6-bac1-d4f91a64c18b\" (UID: \"f140476b-d9d4-4ca6-bac1-d4f91a64c18b\") " Jan 30 14:33:41 crc kubenswrapper[5039]: I0130 14:33:41.582342 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c11ff9c9-2927-49d7-a52b-995f63c75e72-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c11ff9c9-2927-49d7-a52b-995f63c75e72" (UID: "c11ff9c9-2927-49d7-a52b-995f63c75e72"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:33:41 crc kubenswrapper[5039]: I0130 14:33:41.582344 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f140476b-d9d4-4ca6-bac1-d4f91a64c18b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f140476b-d9d4-4ca6-bac1-d4f91a64c18b" (UID: "f140476b-d9d4-4ca6-bac1-d4f91a64c18b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:33:41 crc kubenswrapper[5039]: I0130 14:33:41.589374 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c11ff9c9-2927-49d7-a52b-995f63c75e72-kube-api-access-mwxpb" (OuterVolumeSpecName: "kube-api-access-mwxpb") pod "c11ff9c9-2927-49d7-a52b-995f63c75e72" (UID: "c11ff9c9-2927-49d7-a52b-995f63c75e72"). InnerVolumeSpecName "kube-api-access-mwxpb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:33:41 crc kubenswrapper[5039]: I0130 14:33:41.591230 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f140476b-d9d4-4ca6-bac1-d4f91a64c18b-kube-api-access-wshmn" (OuterVolumeSpecName: "kube-api-access-wshmn") pod "f140476b-d9d4-4ca6-bac1-d4f91a64c18b" (UID: "f140476b-d9d4-4ca6-bac1-d4f91a64c18b"). InnerVolumeSpecName "kube-api-access-wshmn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:33:41 crc kubenswrapper[5039]: I0130 14:33:41.683033 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wshmn\" (UniqueName: \"kubernetes.io/projected/f140476b-d9d4-4ca6-bac1-d4f91a64c18b-kube-api-access-wshmn\") on node \"crc\" DevicePath \"\"" Jan 30 14:33:41 crc kubenswrapper[5039]: I0130 14:33:41.683079 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f140476b-d9d4-4ca6-bac1-d4f91a64c18b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:33:41 crc kubenswrapper[5039]: I0130 14:33:41.683090 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c11ff9c9-2927-49d7-a52b-995f63c75e72-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:33:41 crc kubenswrapper[5039]: I0130 14:33:41.683101 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwxpb\" (UniqueName: \"kubernetes.io/projected/c11ff9c9-2927-49d7-a52b-995f63c75e72-kube-api-access-mwxpb\") on node \"crc\" DevicePath \"\"" Jan 30 14:33:42 crc kubenswrapper[5039]: I0130 14:33:42.044204 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-75gqg" event={"ID":"c11ff9c9-2927-49d7-a52b-995f63c75e72","Type":"ContainerDied","Data":"c2ccba0a66b5a5bbad03b7506616d9b9f060d2c7962af7f0f6e3ef55b9772113"} Jan 30 14:33:42 crc kubenswrapper[5039]: I0130 14:33:42.044598 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2ccba0a66b5a5bbad03b7506616d9b9f060d2c7962af7f0f6e3ef55b9772113" Jan 30 14:33:42 crc kubenswrapper[5039]: I0130 14:33:42.044225 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-75gqg" Jan 30 14:33:42 crc kubenswrapper[5039]: I0130 14:33:42.046546 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-c014-account-create-update-px7xb" event={"ID":"f140476b-d9d4-4ca6-bac1-d4f91a64c18b","Type":"ContainerDied","Data":"95d6e554c1393615a50ba4255543a5ba394b5e64f7aadcca1c933d46d9d22d82"} Jan 30 14:33:42 crc kubenswrapper[5039]: I0130 14:33:42.046594 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95d6e554c1393615a50ba4255543a5ba394b5e64f7aadcca1c933d46d9d22d82" Jan 30 14:33:42 crc kubenswrapper[5039]: I0130 14:33:42.046658 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-c014-account-create-update-px7xb" Jan 30 14:33:42 crc kubenswrapper[5039]: I0130 14:33:42.902198 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-ttzhq"] Jan 30 14:33:42 crc kubenswrapper[5039]: E0130 14:33:42.902937 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f140476b-d9d4-4ca6-bac1-d4f91a64c18b" containerName="mariadb-account-create-update" Jan 30 14:33:42 crc kubenswrapper[5039]: I0130 14:33:42.902957 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="f140476b-d9d4-4ca6-bac1-d4f91a64c18b" containerName="mariadb-account-create-update" Jan 30 14:33:42 crc kubenswrapper[5039]: E0130 14:33:42.902987 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c11ff9c9-2927-49d7-a52b-995f63c75e72" containerName="mariadb-database-create" Jan 30 14:33:42 crc kubenswrapper[5039]: I0130 14:33:42.902996 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="c11ff9c9-2927-49d7-a52b-995f63c75e72" containerName="mariadb-database-create" Jan 30 14:33:42 crc kubenswrapper[5039]: I0130 14:33:42.903215 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="c11ff9c9-2927-49d7-a52b-995f63c75e72" containerName="mariadb-database-create" Jan 30 14:33:42 crc kubenswrapper[5039]: I0130 14:33:42.903239 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="f140476b-d9d4-4ca6-bac1-d4f91a64c18b" containerName="mariadb-account-create-update" Jan 30 14:33:42 crc kubenswrapper[5039]: I0130 14:33:42.903904 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-ttzhq" Jan 30 14:33:42 crc kubenswrapper[5039]: I0130 14:33:42.907948 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-rg77l" Jan 30 14:33:42 crc kubenswrapper[5039]: I0130 14:33:42.908242 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 30 14:33:42 crc kubenswrapper[5039]: I0130 14:33:42.914561 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-ttzhq"] Jan 30 14:33:43 crc kubenswrapper[5039]: I0130 14:33:43.103703 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c1e26bd-8401-41c3-b195-93755cd10148-combined-ca-bundle\") pod \"barbican-db-sync-ttzhq\" (UID: \"5c1e26bd-8401-41c3-b195-93755cd10148\") " pod="openstack/barbican-db-sync-ttzhq" Jan 30 14:33:43 crc kubenswrapper[5039]: I0130 14:33:43.103794 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bl2m\" (UniqueName: \"kubernetes.io/projected/5c1e26bd-8401-41c3-b195-93755cd10148-kube-api-access-9bl2m\") pod \"barbican-db-sync-ttzhq\" (UID: \"5c1e26bd-8401-41c3-b195-93755cd10148\") " pod="openstack/barbican-db-sync-ttzhq" Jan 30 14:33:43 crc kubenswrapper[5039]: I0130 14:33:43.103826 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5c1e26bd-8401-41c3-b195-93755cd10148-db-sync-config-data\") pod \"barbican-db-sync-ttzhq\" (UID: \"5c1e26bd-8401-41c3-b195-93755cd10148\") " pod="openstack/barbican-db-sync-ttzhq" Jan 30 14:33:43 crc kubenswrapper[5039]: I0130 14:33:43.205976 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-9bl2m\" (UniqueName: \"kubernetes.io/projected/5c1e26bd-8401-41c3-b195-93755cd10148-kube-api-access-9bl2m\") pod \"barbican-db-sync-ttzhq\" (UID: \"5c1e26bd-8401-41c3-b195-93755cd10148\") " pod="openstack/barbican-db-sync-ttzhq" Jan 30 14:33:43 crc kubenswrapper[5039]: I0130 14:33:43.206074 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5c1e26bd-8401-41c3-b195-93755cd10148-db-sync-config-data\") pod \"barbican-db-sync-ttzhq\" (UID: \"5c1e26bd-8401-41c3-b195-93755cd10148\") " pod="openstack/barbican-db-sync-ttzhq" Jan 30 14:33:43 crc kubenswrapper[5039]: I0130 14:33:43.206195 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c1e26bd-8401-41c3-b195-93755cd10148-combined-ca-bundle\") pod \"barbican-db-sync-ttzhq\" (UID: \"5c1e26bd-8401-41c3-b195-93755cd10148\") " pod="openstack/barbican-db-sync-ttzhq" Jan 30 14:33:43 crc kubenswrapper[5039]: I0130 14:33:43.212190 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5c1e26bd-8401-41c3-b195-93755cd10148-db-sync-config-data\") pod \"barbican-db-sync-ttzhq\" (UID: \"5c1e26bd-8401-41c3-b195-93755cd10148\") " pod="openstack/barbican-db-sync-ttzhq" Jan 30 14:33:43 crc kubenswrapper[5039]: I0130 14:33:43.212348 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c1e26bd-8401-41c3-b195-93755cd10148-combined-ca-bundle\") pod \"barbican-db-sync-ttzhq\" (UID: \"5c1e26bd-8401-41c3-b195-93755cd10148\") " pod="openstack/barbican-db-sync-ttzhq" Jan 30 14:33:43 crc kubenswrapper[5039]: I0130 14:33:43.227832 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bl2m\" (UniqueName: \"kubernetes.io/projected/5c1e26bd-8401-41c3-b195-93755cd10148-kube-api-access-9bl2m\") pod \"barbican-db-sync-ttzhq\" (UID: \"5c1e26bd-8401-41c3-b195-93755cd10148\") " pod="openstack/barbican-db-sync-ttzhq" Jan 30 14:33:43 crc kubenswrapper[5039]: I0130 14:33:43.522252 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-ttzhq" Jan 30 14:33:43 crc kubenswrapper[5039]: I0130 14:33:43.968702 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-ttzhq"] Jan 30 14:33:44 crc kubenswrapper[5039]: I0130 14:33:44.062163 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-ttzhq" event={"ID":"5c1e26bd-8401-41c3-b195-93755cd10148","Type":"ContainerStarted","Data":"b1094660f156cb20dcd5e7998e4660b3e7f2d58d8fe15c54c7223b3435047f64"} Jan 30 14:33:45 crc kubenswrapper[5039]: I0130 14:33:45.071400 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-ttzhq" event={"ID":"5c1e26bd-8401-41c3-b195-93755cd10148","Type":"ContainerStarted","Data":"ea49546d44b145c763faeeddfb01cf8df4833ffe3252d6c03b7553114b8c8f24"} Jan 30 14:33:45 crc kubenswrapper[5039]: I0130 14:33:45.088529 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-ttzhq" podStartSLOduration=3.088510319 podStartE2EDuration="3.088510319s" podCreationTimestamp="2026-01-30 14:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:33:45.083709389 +0000 UTC m=+5389.744390616" watchObservedRunningTime="2026-01-30 14:33:45.088510319 +0000 UTC m=+5389.749191566" Jan 30 14:33:46 crc kubenswrapper[5039]: I0130 14:33:46.099410 5039 generic.go:334] "Generic (PLEG): container finished" podID="5c1e26bd-8401-41c3-b195-93755cd10148" containerID="ea49546d44b145c763faeeddfb01cf8df4833ffe3252d6c03b7553114b8c8f24" exitCode=0 Jan 30 14:33:46 crc kubenswrapper[5039]: I0130 14:33:46.106199 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-ttzhq" event={"ID":"5c1e26bd-8401-41c3-b195-93755cd10148","Type":"ContainerDied","Data":"ea49546d44b145c763faeeddfb01cf8df4833ffe3252d6c03b7553114b8c8f24"} Jan 30 14:33:47 crc kubenswrapper[5039]: I0130 14:33:47.416316 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-ttzhq" Jan 30 14:33:47 crc kubenswrapper[5039]: I0130 14:33:47.484267 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c1e26bd-8401-41c3-b195-93755cd10148-combined-ca-bundle\") pod \"5c1e26bd-8401-41c3-b195-93755cd10148\" (UID: \"5c1e26bd-8401-41c3-b195-93755cd10148\") " Jan 30 14:33:47 crc kubenswrapper[5039]: I0130 14:33:47.484421 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5c1e26bd-8401-41c3-b195-93755cd10148-db-sync-config-data\") pod \"5c1e26bd-8401-41c3-b195-93755cd10148\" (UID: \"5c1e26bd-8401-41c3-b195-93755cd10148\") " Jan 30 14:33:47 crc kubenswrapper[5039]: I0130 14:33:47.484454 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bl2m\" (UniqueName: \"kubernetes.io/projected/5c1e26bd-8401-41c3-b195-93755cd10148-kube-api-access-9bl2m\") pod \"5c1e26bd-8401-41c3-b195-93755cd10148\" (UID: \"5c1e26bd-8401-41c3-b195-93755cd10148\") " Jan 30 14:33:47 crc kubenswrapper[5039]: I0130 14:33:47.489271 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c1e26bd-8401-41c3-b195-93755cd10148-kube-api-access-9bl2m" (OuterVolumeSpecName: "kube-api-access-9bl2m") pod "5c1e26bd-8401-41c3-b195-93755cd10148" (UID: "5c1e26bd-8401-41c3-b195-93755cd10148"). InnerVolumeSpecName "kube-api-access-9bl2m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:33:47 crc kubenswrapper[5039]: I0130 14:33:47.492245 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c1e26bd-8401-41c3-b195-93755cd10148-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "5c1e26bd-8401-41c3-b195-93755cd10148" (UID: "5c1e26bd-8401-41c3-b195-93755cd10148"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:33:47 crc kubenswrapper[5039]: I0130 14:33:47.512432 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c1e26bd-8401-41c3-b195-93755cd10148-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5c1e26bd-8401-41c3-b195-93755cd10148" (UID: "5c1e26bd-8401-41c3-b195-93755cd10148"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:33:47 crc kubenswrapper[5039]: I0130 14:33:47.587088 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c1e26bd-8401-41c3-b195-93755cd10148-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:33:47 crc kubenswrapper[5039]: I0130 14:33:47.587157 5039 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5c1e26bd-8401-41c3-b195-93755cd10148-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:33:47 crc kubenswrapper[5039]: I0130 14:33:47.587174 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bl2m\" (UniqueName: \"kubernetes.io/projected/5c1e26bd-8401-41c3-b195-93755cd10148-kube-api-access-9bl2m\") on node \"crc\" DevicePath \"\"" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.131796 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-ttzhq" event={"ID":"5c1e26bd-8401-41c3-b195-93755cd10148","Type":"ContainerDied","Data":"b1094660f156cb20dcd5e7998e4660b3e7f2d58d8fe15c54c7223b3435047f64"} Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.131845 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1094660f156cb20dcd5e7998e4660b3e7f2d58d8fe15c54c7223b3435047f64" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.131866 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-ttzhq" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.321604 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-54c6556cc4-gwjwr"] Jan 30 14:33:48 crc kubenswrapper[5039]: E0130 14:33:48.322144 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c1e26bd-8401-41c3-b195-93755cd10148" containerName="barbican-db-sync" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.322169 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c1e26bd-8401-41c3-b195-93755cd10148" containerName="barbican-db-sync" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.322402 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c1e26bd-8401-41c3-b195-93755cd10148" containerName="barbican-db-sync" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.323519 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-54c6556cc4-gwjwr" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.327954 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.329189 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.329483 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-rg77l" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.330278 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5c47676b89-c2bdw"] Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.331736 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5c47676b89-c2bdw" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.334396 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.347432 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-54c6556cc4-gwjwr"] Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.364499 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5c47676b89-c2bdw"] Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.399706 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2dedf26-e8a7-43d7-9113-844ed4ace24f-logs\") pod \"barbican-worker-5c47676b89-c2bdw\" (UID: \"a2dedf26-e8a7-43d7-9113-844ed4ace24f\") " pod="openstack/barbican-worker-5c47676b89-c2bdw" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.399766 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2dedf26-e8a7-43d7-9113-844ed4ace24f-config-data\") pod \"barbican-worker-5c47676b89-c2bdw\" (UID: \"a2dedf26-e8a7-43d7-9113-844ed4ace24f\") " pod="openstack/barbican-worker-5c47676b89-c2bdw" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.399803 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a2dedf26-e8a7-43d7-9113-844ed4ace24f-config-data-custom\") pod \"barbican-worker-5c47676b89-c2bdw\" (UID: \"a2dedf26-e8a7-43d7-9113-844ed4ace24f\") " pod="openstack/barbican-worker-5c47676b89-c2bdw" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.399834 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94903821-743c-4c2b-913c-27ef1467fe0a-config-data\") pod \"barbican-keystone-listener-54c6556cc4-gwjwr\" (UID: \"94903821-743c-4c2b-913c-27ef1467fe0a\") " pod="openstack/barbican-keystone-listener-54c6556cc4-gwjwr" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.399855 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94903821-743c-4c2b-913c-27ef1467fe0a-combined-ca-bundle\") pod \"barbican-keystone-listener-54c6556cc4-gwjwr\" (UID: \"94903821-743c-4c2b-913c-27ef1467fe0a\") " pod="openstack/barbican-keystone-listener-54c6556cc4-gwjwr" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.399921 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2dedf26-e8a7-43d7-9113-844ed4ace24f-combined-ca-bundle\") pod \"barbican-worker-5c47676b89-c2bdw\" (UID: \"a2dedf26-e8a7-43d7-9113-844ed4ace24f\") " pod="openstack/barbican-worker-5c47676b89-c2bdw" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.399947 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llt4l\" (UniqueName: \"kubernetes.io/projected/a2dedf26-e8a7-43d7-9113-844ed4ace24f-kube-api-access-llt4l\") pod \"barbican-worker-5c47676b89-c2bdw\" (UID: \"a2dedf26-e8a7-43d7-9113-844ed4ace24f\") " pod="openstack/barbican-worker-5c47676b89-c2bdw" Jan 30 
14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.400001 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvlpk\" (UniqueName: \"kubernetes.io/projected/94903821-743c-4c2b-913c-27ef1467fe0a-kube-api-access-bvlpk\") pod \"barbican-keystone-listener-54c6556cc4-gwjwr\" (UID: \"94903821-743c-4c2b-913c-27ef1467fe0a\") " pod="openstack/barbican-keystone-listener-54c6556cc4-gwjwr" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.400052 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94903821-743c-4c2b-913c-27ef1467fe0a-logs\") pod \"barbican-keystone-listener-54c6556cc4-gwjwr\" (UID: \"94903821-743c-4c2b-913c-27ef1467fe0a\") " pod="openstack/barbican-keystone-listener-54c6556cc4-gwjwr" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.400116 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/94903821-743c-4c2b-913c-27ef1467fe0a-config-data-custom\") pod \"barbican-keystone-listener-54c6556cc4-gwjwr\" (UID: \"94903821-743c-4c2b-913c-27ef1467fe0a\") " pod="openstack/barbican-keystone-listener-54c6556cc4-gwjwr" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.474850 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d5d9f965c-c4r24"] Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.477093 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d5d9f965c-c4r24" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.482334 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d5d9f965c-c4r24"] Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.502031 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvlpk\" (UniqueName: \"kubernetes.io/projected/94903821-743c-4c2b-913c-27ef1467fe0a-kube-api-access-bvlpk\") pod \"barbican-keystone-listener-54c6556cc4-gwjwr\" (UID: \"94903821-743c-4c2b-913c-27ef1467fe0a\") " pod="openstack/barbican-keystone-listener-54c6556cc4-gwjwr" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.502087 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94903821-743c-4c2b-913c-27ef1467fe0a-logs\") pod \"barbican-keystone-listener-54c6556cc4-gwjwr\" (UID: \"94903821-743c-4c2b-913c-27ef1467fe0a\") " pod="openstack/barbican-keystone-listener-54c6556cc4-gwjwr" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.502135 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/94903821-743c-4c2b-913c-27ef1467fe0a-config-data-custom\") pod \"barbican-keystone-listener-54c6556cc4-gwjwr\" (UID: \"94903821-743c-4c2b-913c-27ef1467fe0a\") " pod="openstack/barbican-keystone-listener-54c6556cc4-gwjwr" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.502173 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2dedf26-e8a7-43d7-9113-844ed4ace24f-logs\") pod \"barbican-worker-5c47676b89-c2bdw\" (UID: \"a2dedf26-e8a7-43d7-9113-844ed4ace24f\") " pod="openstack/barbican-worker-5c47676b89-c2bdw" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.502191 5039 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2dedf26-e8a7-43d7-9113-844ed4ace24f-config-data\") pod \"barbican-worker-5c47676b89-c2bdw\" (UID: \"a2dedf26-e8a7-43d7-9113-844ed4ace24f\") " pod="openstack/barbican-worker-5c47676b89-c2bdw" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.502217 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6-dns-svc\") pod \"dnsmasq-dns-7d5d9f965c-c4r24\" (UID: \"eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6\") " pod="openstack/dnsmasq-dns-7d5d9f965c-c4r24" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.502236 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a2dedf26-e8a7-43d7-9113-844ed4ace24f-config-data-custom\") pod \"barbican-worker-5c47676b89-c2bdw\" (UID: \"a2dedf26-e8a7-43d7-9113-844ed4ace24f\") " pod="openstack/barbican-worker-5c47676b89-c2bdw" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.502258 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpj28\" (UniqueName: \"kubernetes.io/projected/eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6-kube-api-access-lpj28\") pod \"dnsmasq-dns-7d5d9f965c-c4r24\" (UID: \"eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6\") " pod="openstack/dnsmasq-dns-7d5d9f965c-c4r24" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.502280 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94903821-743c-4c2b-913c-27ef1467fe0a-config-data\") pod \"barbican-keystone-listener-54c6556cc4-gwjwr\" (UID: \"94903821-743c-4c2b-913c-27ef1467fe0a\") " pod="openstack/barbican-keystone-listener-54c6556cc4-gwjwr" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.502297 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94903821-743c-4c2b-913c-27ef1467fe0a-combined-ca-bundle\") pod \"barbican-keystone-listener-54c6556cc4-gwjwr\" (UID: \"94903821-743c-4c2b-913c-27ef1467fe0a\") " pod="openstack/barbican-keystone-listener-54c6556cc4-gwjwr" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.502323 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6-ovsdbserver-nb\") pod \"dnsmasq-dns-7d5d9f965c-c4r24\" (UID: \"eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6\") " pod="openstack/dnsmasq-dns-7d5d9f965c-c4r24" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.502350 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6-config\") pod \"dnsmasq-dns-7d5d9f965c-c4r24\" (UID: \"eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6\") " pod="openstack/dnsmasq-dns-7d5d9f965c-c4r24" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.502367 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2dedf26-e8a7-43d7-9113-844ed4ace24f-combined-ca-bundle\") pod \"barbican-worker-5c47676b89-c2bdw\" (UID: \"a2dedf26-e8a7-43d7-9113-844ed4ace24f\") " pod="openstack/barbican-worker-5c47676b89-c2bdw" Jan 30 
14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.502385 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llt4l\" (UniqueName: \"kubernetes.io/projected/a2dedf26-e8a7-43d7-9113-844ed4ace24f-kube-api-access-llt4l\") pod \"barbican-worker-5c47676b89-c2bdw\" (UID: \"a2dedf26-e8a7-43d7-9113-844ed4ace24f\") " pod="openstack/barbican-worker-5c47676b89-c2bdw" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.502413 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6-ovsdbserver-sb\") pod \"dnsmasq-dns-7d5d9f965c-c4r24\" (UID: \"eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6\") " pod="openstack/dnsmasq-dns-7d5d9f965c-c4r24" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.503415 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2dedf26-e8a7-43d7-9113-844ed4ace24f-logs\") pod \"barbican-worker-5c47676b89-c2bdw\" (UID: \"a2dedf26-e8a7-43d7-9113-844ed4ace24f\") " pod="openstack/barbican-worker-5c47676b89-c2bdw" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.503897 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94903821-743c-4c2b-913c-27ef1467fe0a-logs\") pod \"barbican-keystone-listener-54c6556cc4-gwjwr\" (UID: \"94903821-743c-4c2b-913c-27ef1467fe0a\") " pod="openstack/barbican-keystone-listener-54c6556cc4-gwjwr" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.511548 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94903821-743c-4c2b-913c-27ef1467fe0a-config-data\") pod \"barbican-keystone-listener-54c6556cc4-gwjwr\" (UID: \"94903821-743c-4c2b-913c-27ef1467fe0a\") " pod="openstack/barbican-keystone-listener-54c6556cc4-gwjwr" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.512503 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2dedf26-e8a7-43d7-9113-844ed4ace24f-config-data\") pod \"barbican-worker-5c47676b89-c2bdw\" (UID: \"a2dedf26-e8a7-43d7-9113-844ed4ace24f\") " pod="openstack/barbican-worker-5c47676b89-c2bdw" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.515176 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/94903821-743c-4c2b-913c-27ef1467fe0a-config-data-custom\") pod \"barbican-keystone-listener-54c6556cc4-gwjwr\" (UID: \"94903821-743c-4c2b-913c-27ef1467fe0a\") " pod="openstack/barbican-keystone-listener-54c6556cc4-gwjwr" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.528702 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2dedf26-e8a7-43d7-9113-844ed4ace24f-combined-ca-bundle\") pod \"barbican-worker-5c47676b89-c2bdw\" (UID: \"a2dedf26-e8a7-43d7-9113-844ed4ace24f\") " pod="openstack/barbican-worker-5c47676b89-c2bdw" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.529029 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a2dedf26-e8a7-43d7-9113-844ed4ace24f-config-data-custom\") pod \"barbican-worker-5c47676b89-c2bdw\" (UID: \"a2dedf26-e8a7-43d7-9113-844ed4ace24f\") " 
pod="openstack/barbican-worker-5c47676b89-c2bdw" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.532154 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvlpk\" (UniqueName: \"kubernetes.io/projected/94903821-743c-4c2b-913c-27ef1467fe0a-kube-api-access-bvlpk\") pod \"barbican-keystone-listener-54c6556cc4-gwjwr\" (UID: \"94903821-743c-4c2b-913c-27ef1467fe0a\") " pod="openstack/barbican-keystone-listener-54c6556cc4-gwjwr" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.536213 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-bf9dd66-4rnjv"] Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.539144 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94903821-743c-4c2b-913c-27ef1467fe0a-combined-ca-bundle\") pod \"barbican-keystone-listener-54c6556cc4-gwjwr\" (UID: \"94903821-743c-4c2b-913c-27ef1467fe0a\") " pod="openstack/barbican-keystone-listener-54c6556cc4-gwjwr" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.539350 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-bf9dd66-4rnjv" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.539886 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llt4l\" (UniqueName: \"kubernetes.io/projected/a2dedf26-e8a7-43d7-9113-844ed4ace24f-kube-api-access-llt4l\") pod \"barbican-worker-5c47676b89-c2bdw\" (UID: \"a2dedf26-e8a7-43d7-9113-844ed4ace24f\") " pod="openstack/barbican-worker-5c47676b89-c2bdw" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.549505 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.579469 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-bf9dd66-4rnjv"] Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.604069 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a6116ea0-1d69-4c2c-b3d1-20480d785187-config-data-custom\") pod \"barbican-api-bf9dd66-4rnjv\" (UID: \"a6116ea0-1d69-4c2c-b3d1-20480d785187\") " pod="openstack/barbican-api-bf9dd66-4rnjv" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.604364 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6-dns-svc\") pod \"dnsmasq-dns-7d5d9f965c-c4r24\" (UID: \"eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6\") " pod="openstack/dnsmasq-dns-7d5d9f965c-c4r24" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.604413 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpj28\" (UniqueName: \"kubernetes.io/projected/eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6-kube-api-access-lpj28\") pod \"dnsmasq-dns-7d5d9f965c-c4r24\" (UID: \"eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6\") " pod="openstack/dnsmasq-dns-7d5d9f965c-c4r24" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.604445 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6-ovsdbserver-nb\") pod \"dnsmasq-dns-7d5d9f965c-c4r24\" (UID: \"eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6\") " pod="openstack/dnsmasq-dns-7d5d9f965c-c4r24" Jan 30 
14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.604472 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6116ea0-1d69-4c2c-b3d1-20480d785187-combined-ca-bundle\") pod \"barbican-api-bf9dd66-4rnjv\" (UID: \"a6116ea0-1d69-4c2c-b3d1-20480d785187\") " pod="openstack/barbican-api-bf9dd66-4rnjv" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.604492 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6-config\") pod \"dnsmasq-dns-7d5d9f965c-c4r24\" (UID: \"eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6\") " pod="openstack/dnsmasq-dns-7d5d9f965c-c4r24" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.604525 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6-ovsdbserver-sb\") pod \"dnsmasq-dns-7d5d9f965c-c4r24\" (UID: \"eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6\") " pod="openstack/dnsmasq-dns-7d5d9f965c-c4r24" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.604587 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6116ea0-1d69-4c2c-b3d1-20480d785187-logs\") pod \"barbican-api-bf9dd66-4rnjv\" (UID: \"a6116ea0-1d69-4c2c-b3d1-20480d785187\") " pod="openstack/barbican-api-bf9dd66-4rnjv" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.604605 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6116ea0-1d69-4c2c-b3d1-20480d785187-config-data\") pod \"barbican-api-bf9dd66-4rnjv\" (UID: \"a6116ea0-1d69-4c2c-b3d1-20480d785187\") " pod="openstack/barbican-api-bf9dd66-4rnjv" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.604626 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2txnj\" (UniqueName: \"kubernetes.io/projected/a6116ea0-1d69-4c2c-b3d1-20480d785187-kube-api-access-2txnj\") pod \"barbican-api-bf9dd66-4rnjv\" (UID: \"a6116ea0-1d69-4c2c-b3d1-20480d785187\") " pod="openstack/barbican-api-bf9dd66-4rnjv" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.605414 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6-dns-svc\") pod \"dnsmasq-dns-7d5d9f965c-c4r24\" (UID: \"eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6\") " pod="openstack/dnsmasq-dns-7d5d9f965c-c4r24" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.606311 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6-ovsdbserver-nb\") pod \"dnsmasq-dns-7d5d9f965c-c4r24\" (UID: \"eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6\") " pod="openstack/dnsmasq-dns-7d5d9f965c-c4r24" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.606800 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6-config\") pod \"dnsmasq-dns-7d5d9f965c-c4r24\" (UID: \"eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6\") " pod="openstack/dnsmasq-dns-7d5d9f965c-c4r24" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.607379 5039 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6-ovsdbserver-sb\") pod \"dnsmasq-dns-7d5d9f965c-c4r24\" (UID: \"eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6\") " pod="openstack/dnsmasq-dns-7d5d9f965c-c4r24" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.623583 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpj28\" (UniqueName: \"kubernetes.io/projected/eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6-kube-api-access-lpj28\") pod \"dnsmasq-dns-7d5d9f965c-c4r24\" (UID: \"eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6\") " pod="openstack/dnsmasq-dns-7d5d9f965c-c4r24" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.656422 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-54c6556cc4-gwjwr" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.666529 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5c47676b89-c2bdw" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.708282 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6116ea0-1d69-4c2c-b3d1-20480d785187-logs\") pod \"barbican-api-bf9dd66-4rnjv\" (UID: \"a6116ea0-1d69-4c2c-b3d1-20480d785187\") " pod="openstack/barbican-api-bf9dd66-4rnjv" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.708342 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6116ea0-1d69-4c2c-b3d1-20480d785187-config-data\") pod \"barbican-api-bf9dd66-4rnjv\" (UID: \"a6116ea0-1d69-4c2c-b3d1-20480d785187\") " pod="openstack/barbican-api-bf9dd66-4rnjv" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.708371 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2txnj\" (UniqueName: \"kubernetes.io/projected/a6116ea0-1d69-4c2c-b3d1-20480d785187-kube-api-access-2txnj\") pod \"barbican-api-bf9dd66-4rnjv\" (UID: \"a6116ea0-1d69-4c2c-b3d1-20480d785187\") " pod="openstack/barbican-api-bf9dd66-4rnjv" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.708398 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a6116ea0-1d69-4c2c-b3d1-20480d785187-config-data-custom\") pod \"barbican-api-bf9dd66-4rnjv\" (UID: \"a6116ea0-1d69-4c2c-b3d1-20480d785187\") " pod="openstack/barbican-api-bf9dd66-4rnjv" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.708475 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6116ea0-1d69-4c2c-b3d1-20480d785187-combined-ca-bundle\") pod \"barbican-api-bf9dd66-4rnjv\" (UID: \"a6116ea0-1d69-4c2c-b3d1-20480d785187\") " pod="openstack/barbican-api-bf9dd66-4rnjv" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.709715 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6116ea0-1d69-4c2c-b3d1-20480d785187-logs\") pod \"barbican-api-bf9dd66-4rnjv\" (UID: \"a6116ea0-1d69-4c2c-b3d1-20480d785187\") " pod="openstack/barbican-api-bf9dd66-4rnjv" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.713388 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/a6116ea0-1d69-4c2c-b3d1-20480d785187-config-data-custom\") pod \"barbican-api-bf9dd66-4rnjv\" (UID: \"a6116ea0-1d69-4c2c-b3d1-20480d785187\") " pod="openstack/barbican-api-bf9dd66-4rnjv" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.718885 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6116ea0-1d69-4c2c-b3d1-20480d785187-config-data\") pod \"barbican-api-bf9dd66-4rnjv\" (UID: \"a6116ea0-1d69-4c2c-b3d1-20480d785187\") " pod="openstack/barbican-api-bf9dd66-4rnjv" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.724406 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6116ea0-1d69-4c2c-b3d1-20480d785187-combined-ca-bundle\") pod \"barbican-api-bf9dd66-4rnjv\" (UID: \"a6116ea0-1d69-4c2c-b3d1-20480d785187\") " pod="openstack/barbican-api-bf9dd66-4rnjv" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.741343 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2txnj\" (UniqueName: \"kubernetes.io/projected/a6116ea0-1d69-4c2c-b3d1-20480d785187-kube-api-access-2txnj\") pod \"barbican-api-bf9dd66-4rnjv\" (UID: \"a6116ea0-1d69-4c2c-b3d1-20480d785187\") " pod="openstack/barbican-api-bf9dd66-4rnjv" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.796844 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d5d9f965c-c4r24" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.916779 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-bf9dd66-4rnjv" Jan 30 14:33:49 crc kubenswrapper[5039]: I0130 14:33:49.094348 5039 scope.go:117] "RemoveContainer" containerID="33707bf9f6c082f37a2c677d559a1772be55398c970c4d16a90343a477a0fad4" Jan 30 14:33:49 crc kubenswrapper[5039]: E0130 14:33:49.094568 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:33:49 crc kubenswrapper[5039]: I0130 14:33:49.182028 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-54c6556cc4-gwjwr"] Jan 30 14:33:49 crc kubenswrapper[5039]: I0130 14:33:49.242911 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5c47676b89-c2bdw"] Jan 30 14:33:49 crc kubenswrapper[5039]: W0130 14:33:49.249097 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2dedf26_e8a7_43d7_9113_844ed4ace24f.slice/crio-fcec094617404d309f87a0f70abb476eb752555c5881b62a075851911386597d WatchSource:0}: Error finding container fcec094617404d309f87a0f70abb476eb752555c5881b62a075851911386597d: Status 404 returned error can't find the container with id fcec094617404d309f87a0f70abb476eb752555c5881b62a075851911386597d Jan 30 14:33:49 crc kubenswrapper[5039]: I0130 14:33:49.309209 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d5d9f965c-c4r24"] Jan 30 14:33:49 crc kubenswrapper[5039]: I0130 14:33:49.456463 5039 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/barbican-api-bf9dd66-4rnjv"] Jan 30 14:33:50 crc kubenswrapper[5039]: I0130 14:33:50.156413 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-bf9dd66-4rnjv" event={"ID":"a6116ea0-1d69-4c2c-b3d1-20480d785187","Type":"ContainerStarted","Data":"ad7eb1266f2a4bb2d57aca81452baa397286ba603fd9d91ac0282354e8e373ff"} Jan 30 14:33:50 crc kubenswrapper[5039]: I0130 14:33:50.156910 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-bf9dd66-4rnjv" event={"ID":"a6116ea0-1d69-4c2c-b3d1-20480d785187","Type":"ContainerStarted","Data":"2bc0f4b83c16c9cd8b701ca64b36cfa85de1b32d2f7bbf992f1539632044db3b"} Jan 30 14:33:50 crc kubenswrapper[5039]: I0130 14:33:50.156938 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-bf9dd66-4rnjv" Jan 30 14:33:50 crc kubenswrapper[5039]: I0130 14:33:50.156952 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-bf9dd66-4rnjv" event={"ID":"a6116ea0-1d69-4c2c-b3d1-20480d785187","Type":"ContainerStarted","Data":"df7488ce4e16755cac70e5b2358f2721d0985917211dcb34e6e2b3d93aad74f4"} Jan 30 14:33:50 crc kubenswrapper[5039]: I0130 14:33:50.156973 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-bf9dd66-4rnjv" Jan 30 14:33:50 crc kubenswrapper[5039]: I0130 14:33:50.191156 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-bf9dd66-4rnjv" podStartSLOduration=2.19112953 podStartE2EDuration="2.19112953s" podCreationTimestamp="2026-01-30 14:33:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:33:50.177898591 +0000 UTC m=+5394.838579838" watchObservedRunningTime="2026-01-30 14:33:50.19112953 +0000 UTC m=+5394.851810757" Jan 30 14:33:50 crc kubenswrapper[5039]: I0130 14:33:50.194349 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-54c6556cc4-gwjwr" event={"ID":"94903821-743c-4c2b-913c-27ef1467fe0a","Type":"ContainerStarted","Data":"8375dcaf3a5cf261caf52fbdcf8d4933f79ab5a1a673a2a890e24ee6d5035b7f"} Jan 30 14:33:50 crc kubenswrapper[5039]: I0130 14:33:50.194407 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-54c6556cc4-gwjwr" event={"ID":"94903821-743c-4c2b-913c-27ef1467fe0a","Type":"ContainerStarted","Data":"2b9ee82cacf343d23bcce9983bdeb243f47217850e5e2b112cc1f950980428bb"} Jan 30 14:33:50 crc kubenswrapper[5039]: I0130 14:33:50.194420 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-54c6556cc4-gwjwr" event={"ID":"94903821-743c-4c2b-913c-27ef1467fe0a","Type":"ContainerStarted","Data":"d67adb38a06163edeb276fd458a1d4adde327252e9ad4e658680b99926c59078"} Jan 30 14:33:50 crc kubenswrapper[5039]: I0130 14:33:50.197486 5039 generic.go:334] "Generic (PLEG): container finished" podID="eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6" containerID="6ba7a48fc215713e4b35d302dadf32a9bf446fb0cb88a74da705a78b50d67793" exitCode=0 Jan 30 14:33:50 crc kubenswrapper[5039]: I0130 14:33:50.197557 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d5d9f965c-c4r24" event={"ID":"eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6","Type":"ContainerDied","Data":"6ba7a48fc215713e4b35d302dadf32a9bf446fb0cb88a74da705a78b50d67793"} Jan 30 14:33:50 crc kubenswrapper[5039]: I0130 14:33:50.197585 5039 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d5d9f965c-c4r24" event={"ID":"eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6","Type":"ContainerStarted","Data":"3ed9cd47161eb6a4e4864f0a61a375ca3939a0cb5052a190025eb30804d3836e"} Jan 30 14:33:50 crc kubenswrapper[5039]: I0130 14:33:50.202071 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5c47676b89-c2bdw" event={"ID":"a2dedf26-e8a7-43d7-9113-844ed4ace24f","Type":"ContainerStarted","Data":"14a30bb3ec659c264a83df6780dc2ed0ec32eb51dbd802a38863bbd8285122b0"} Jan 30 14:33:50 crc kubenswrapper[5039]: I0130 14:33:50.202124 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5c47676b89-c2bdw" event={"ID":"a2dedf26-e8a7-43d7-9113-844ed4ace24f","Type":"ContainerStarted","Data":"fb110868a09e13bf784547be1c520feedb30a92e69311845a681003bdd40baf4"} Jan 30 14:33:50 crc kubenswrapper[5039]: I0130 14:33:50.202136 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5c47676b89-c2bdw" event={"ID":"a2dedf26-e8a7-43d7-9113-844ed4ace24f","Type":"ContainerStarted","Data":"fcec094617404d309f87a0f70abb476eb752555c5881b62a075851911386597d"} Jan 30 14:33:50 crc kubenswrapper[5039]: I0130 14:33:50.218787 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-54c6556cc4-gwjwr" podStartSLOduration=2.218773159 podStartE2EDuration="2.218773159s" podCreationTimestamp="2026-01-30 14:33:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:33:50.214975096 +0000 UTC m=+5394.875656333" watchObservedRunningTime="2026-01-30 14:33:50.218773159 +0000 UTC m=+5394.879454386" Jan 30 14:33:50 crc kubenswrapper[5039]: I0130 14:33:50.279682 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5c47676b89-c2bdw" podStartSLOduration=2.279659421 podStartE2EDuration="2.279659421s" podCreationTimestamp="2026-01-30 14:33:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:33:50.234848656 +0000 UTC m=+5394.895529903" watchObservedRunningTime="2026-01-30 14:33:50.279659421 +0000 UTC m=+5394.940340658" Jan 30 14:33:51 crc kubenswrapper[5039]: I0130 14:33:51.215589 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d5d9f965c-c4r24" event={"ID":"eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6","Type":"ContainerStarted","Data":"c7963b3b2e6687c3df67899f1a5772640bcbd9180d38f8e12ee9a8286dcafcb1"} Jan 30 14:33:51 crc kubenswrapper[5039]: I0130 14:33:51.217942 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d5d9f965c-c4r24" Jan 30 14:33:51 crc kubenswrapper[5039]: I0130 14:33:51.243363 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d5d9f965c-c4r24" podStartSLOduration=3.24334644 podStartE2EDuration="3.24334644s" podCreationTimestamp="2026-01-30 14:33:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:33:51.241799368 +0000 UTC m=+5395.902480595" watchObservedRunningTime="2026-01-30 14:33:51.24334644 +0000 UTC m=+5395.904027657" Jan 30 14:33:55 crc kubenswrapper[5039]: I0130 14:33:55.069637 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/root-account-create-update-c2gvh"] Jan 30 14:33:55 crc kubenswrapper[5039]: I0130 14:33:55.079488 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-c2gvh"] Jan 30 14:33:56 crc kubenswrapper[5039]: I0130 14:33:56.107824 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a71a921-7519-4576-8fa4-c4d16d4a1cde" path="/var/lib/kubelet/pods/5a71a921-7519-4576-8fa4-c4d16d4a1cde/volumes" Jan 30 14:33:58 crc kubenswrapper[5039]: I0130 14:33:58.799303 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d5d9f965c-c4r24" Jan 30 14:33:58 crc kubenswrapper[5039]: I0130 14:33:58.867390 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bddff6f79-74x55"] Jan 30 14:33:58 crc kubenswrapper[5039]: I0130 14:33:58.867953 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bddff6f79-74x55" podUID="1290eb86-72db-4605-82ed-5ce51d7bdd43" containerName="dnsmasq-dns" containerID="cri-o://3307255a2a999f1b51aeb2cf93352cf9a0845038d7ca8b3886a9388e1ff86b58" gracePeriod=10 Jan 30 14:33:59 crc kubenswrapper[5039]: I0130 14:33:59.287592 5039 generic.go:334] "Generic (PLEG): container finished" podID="1290eb86-72db-4605-82ed-5ce51d7bdd43" containerID="3307255a2a999f1b51aeb2cf93352cf9a0845038d7ca8b3886a9388e1ff86b58" exitCode=0 Jan 30 14:33:59 crc kubenswrapper[5039]: I0130 14:33:59.287649 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bddff6f79-74x55" event={"ID":"1290eb86-72db-4605-82ed-5ce51d7bdd43","Type":"ContainerDied","Data":"3307255a2a999f1b51aeb2cf93352cf9a0845038d7ca8b3886a9388e1ff86b58"} Jan 30 14:33:59 crc kubenswrapper[5039]: I0130 14:33:59.361190 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bddff6f79-74x55" Jan 30 14:33:59 crc kubenswrapper[5039]: I0130 14:33:59.398402 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpb6n\" (UniqueName: \"kubernetes.io/projected/1290eb86-72db-4605-82ed-5ce51d7bdd43-kube-api-access-fpb6n\") pod \"1290eb86-72db-4605-82ed-5ce51d7bdd43\" (UID: \"1290eb86-72db-4605-82ed-5ce51d7bdd43\") " Jan 30 14:33:59 crc kubenswrapper[5039]: I0130 14:33:59.398521 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1290eb86-72db-4605-82ed-5ce51d7bdd43-ovsdbserver-sb\") pod \"1290eb86-72db-4605-82ed-5ce51d7bdd43\" (UID: \"1290eb86-72db-4605-82ed-5ce51d7bdd43\") " Jan 30 14:33:59 crc kubenswrapper[5039]: I0130 14:33:59.398591 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1290eb86-72db-4605-82ed-5ce51d7bdd43-config\") pod \"1290eb86-72db-4605-82ed-5ce51d7bdd43\" (UID: \"1290eb86-72db-4605-82ed-5ce51d7bdd43\") " Jan 30 14:33:59 crc kubenswrapper[5039]: I0130 14:33:59.398689 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1290eb86-72db-4605-82ed-5ce51d7bdd43-ovsdbserver-nb\") pod \"1290eb86-72db-4605-82ed-5ce51d7bdd43\" (UID: \"1290eb86-72db-4605-82ed-5ce51d7bdd43\") " Jan 30 14:33:59 crc kubenswrapper[5039]: I0130 14:33:59.398713 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1290eb86-72db-4605-82ed-5ce51d7bdd43-dns-svc\") pod \"1290eb86-72db-4605-82ed-5ce51d7bdd43\" (UID: \"1290eb86-72db-4605-82ed-5ce51d7bdd43\") " Jan 30 14:33:59 crc kubenswrapper[5039]: I0130 14:33:59.428746 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1290eb86-72db-4605-82ed-5ce51d7bdd43-kube-api-access-fpb6n" (OuterVolumeSpecName: "kube-api-access-fpb6n") pod "1290eb86-72db-4605-82ed-5ce51d7bdd43" (UID: "1290eb86-72db-4605-82ed-5ce51d7bdd43"). InnerVolumeSpecName "kube-api-access-fpb6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:33:59 crc kubenswrapper[5039]: I0130 14:33:59.471821 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1290eb86-72db-4605-82ed-5ce51d7bdd43-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1290eb86-72db-4605-82ed-5ce51d7bdd43" (UID: "1290eb86-72db-4605-82ed-5ce51d7bdd43"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:33:59 crc kubenswrapper[5039]: I0130 14:33:59.477277 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1290eb86-72db-4605-82ed-5ce51d7bdd43-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1290eb86-72db-4605-82ed-5ce51d7bdd43" (UID: "1290eb86-72db-4605-82ed-5ce51d7bdd43"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:33:59 crc kubenswrapper[5039]: I0130 14:33:59.484614 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1290eb86-72db-4605-82ed-5ce51d7bdd43-config" (OuterVolumeSpecName: "config") pod "1290eb86-72db-4605-82ed-5ce51d7bdd43" (UID: "1290eb86-72db-4605-82ed-5ce51d7bdd43"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:33:59 crc kubenswrapper[5039]: I0130 14:33:59.501378 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1290eb86-72db-4605-82ed-5ce51d7bdd43-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 14:33:59 crc kubenswrapper[5039]: I0130 14:33:59.501420 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1290eb86-72db-4605-82ed-5ce51d7bdd43-config\") on node \"crc\" DevicePath \"\"" Jan 30 14:33:59 crc kubenswrapper[5039]: I0130 14:33:59.501429 5039 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1290eb86-72db-4605-82ed-5ce51d7bdd43-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 14:33:59 crc kubenswrapper[5039]: I0130 14:33:59.501440 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpb6n\" (UniqueName: \"kubernetes.io/projected/1290eb86-72db-4605-82ed-5ce51d7bdd43-kube-api-access-fpb6n\") on node \"crc\" DevicePath \"\"" Jan 30 14:33:59 crc kubenswrapper[5039]: I0130 14:33:59.519988 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1290eb86-72db-4605-82ed-5ce51d7bdd43-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1290eb86-72db-4605-82ed-5ce51d7bdd43" (UID: "1290eb86-72db-4605-82ed-5ce51d7bdd43"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:33:59 crc kubenswrapper[5039]: I0130 14:33:59.603176 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1290eb86-72db-4605-82ed-5ce51d7bdd43-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 14:34:00 crc kubenswrapper[5039]: I0130 14:34:00.093739 5039 scope.go:117] "RemoveContainer" containerID="33707bf9f6c082f37a2c677d559a1772be55398c970c4d16a90343a477a0fad4" Jan 30 14:34:00 crc kubenswrapper[5039]: E0130 14:34:00.094478 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:34:00 crc kubenswrapper[5039]: I0130 14:34:00.295707 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bddff6f79-74x55" event={"ID":"1290eb86-72db-4605-82ed-5ce51d7bdd43","Type":"ContainerDied","Data":"dfcdca5c53490bcdd0625159ea9428d29bb92ef9b23c54dc75dc33a5a85502f5"} Jan 30 14:34:00 crc kubenswrapper[5039]: I0130 14:34:00.295758 5039 scope.go:117] "RemoveContainer" containerID="3307255a2a999f1b51aeb2cf93352cf9a0845038d7ca8b3886a9388e1ff86b58" Jan 30 14:34:00 crc kubenswrapper[5039]: I0130 14:34:00.295875 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bddff6f79-74x55" Jan 30 14:34:00 crc kubenswrapper[5039]: I0130 14:34:00.323572 5039 scope.go:117] "RemoveContainer" containerID="c5dcab70897504fef82b13752b200ded69834d710632c81c994154de04442d0d" Jan 30 14:34:00 crc kubenswrapper[5039]: I0130 14:34:00.328238 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bddff6f79-74x55"] Jan 30 14:34:00 crc kubenswrapper[5039]: I0130 14:34:00.333421 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bddff6f79-74x55"] Jan 30 14:34:00 crc kubenswrapper[5039]: I0130 14:34:00.563217 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-bf9dd66-4rnjv" Jan 30 14:34:00 crc kubenswrapper[5039]: I0130 14:34:00.596838 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-bf9dd66-4rnjv" Jan 30 14:34:02 crc kubenswrapper[5039]: I0130 14:34:02.106365 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1290eb86-72db-4605-82ed-5ce51d7bdd43" path="/var/lib/kubelet/pods/1290eb86-72db-4605-82ed-5ce51d7bdd43/volumes" Jan 30 14:34:04 crc kubenswrapper[5039]: E0130 14:34:04.750297 5039 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.188:59444->38.102.83.188:34017: read tcp 38.102.83.188:59444->38.102.83.188:34017: read: connection reset by peer Jan 30 14:34:09 crc kubenswrapper[5039]: I0130 14:34:09.348101 5039 scope.go:117] "RemoveContainer" containerID="8a3a3be62caad1f329e4ff022b81d0e397bf38068ccbc4cc73edc4f119d23f95" Jan 30 14:34:11 crc kubenswrapper[5039]: I0130 14:34:11.275980 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-f8pgs"] Jan 30 14:34:11 crc kubenswrapper[5039]: E0130 14:34:11.277965 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1290eb86-72db-4605-82ed-5ce51d7bdd43" containerName="dnsmasq-dns" Jan 30 14:34:11 crc kubenswrapper[5039]: I0130 14:34:11.278170 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="1290eb86-72db-4605-82ed-5ce51d7bdd43" containerName="dnsmasq-dns" Jan 30 14:34:11 crc kubenswrapper[5039]: E0130 14:34:11.278266 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1290eb86-72db-4605-82ed-5ce51d7bdd43" containerName="init" Jan 30 14:34:11 crc kubenswrapper[5039]: I0130 14:34:11.278341 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="1290eb86-72db-4605-82ed-5ce51d7bdd43" containerName="init" Jan 30 14:34:11 crc kubenswrapper[5039]: I0130 14:34:11.278641 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="1290eb86-72db-4605-82ed-5ce51d7bdd43" containerName="dnsmasq-dns" Jan 30 14:34:11 crc kubenswrapper[5039]: I0130 14:34:11.279449 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-f8pgs" Jan 30 14:34:11 crc kubenswrapper[5039]: I0130 14:34:11.285401 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-f8pgs"] Jan 30 14:34:11 crc kubenswrapper[5039]: I0130 14:34:11.341968 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkx5f\" (UniqueName: \"kubernetes.io/projected/babc668e-cf9b-4d6c-8a45-f79e141cfc0e-kube-api-access-bkx5f\") pod \"neutron-db-create-f8pgs\" (UID: \"babc668e-cf9b-4d6c-8a45-f79e141cfc0e\") " pod="openstack/neutron-db-create-f8pgs" Jan 30 14:34:11 crc kubenswrapper[5039]: I0130 14:34:11.342106 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/babc668e-cf9b-4d6c-8a45-f79e141cfc0e-operator-scripts\") pod \"neutron-db-create-f8pgs\" (UID: \"babc668e-cf9b-4d6c-8a45-f79e141cfc0e\") " pod="openstack/neutron-db-create-f8pgs" Jan 30 14:34:11 crc kubenswrapper[5039]: I0130 14:34:11.379847 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-bb18-account-create-update-kkffq"] Jan 30 14:34:11 crc kubenswrapper[5039]: I0130 14:34:11.381677 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bb18-account-create-update-kkffq" Jan 30 14:34:11 crc kubenswrapper[5039]: I0130 14:34:11.384848 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 30 14:34:11 crc kubenswrapper[5039]: I0130 14:34:11.391151 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-bb18-account-create-update-kkffq"] Jan 30 14:34:11 crc kubenswrapper[5039]: I0130 14:34:11.443302 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkx5f\" (UniqueName: \"kubernetes.io/projected/babc668e-cf9b-4d6c-8a45-f79e141cfc0e-kube-api-access-bkx5f\") pod \"neutron-db-create-f8pgs\" (UID: \"babc668e-cf9b-4d6c-8a45-f79e141cfc0e\") " pod="openstack/neutron-db-create-f8pgs" Jan 30 14:34:11 crc kubenswrapper[5039]: I0130 14:34:11.443387 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c46ecdf-d569-4ebc-8963-909b6e460e18-operator-scripts\") pod \"neutron-bb18-account-create-update-kkffq\" (UID: \"9c46ecdf-d569-4ebc-8963-909b6e460e18\") " pod="openstack/neutron-bb18-account-create-update-kkffq" Jan 30 14:34:11 crc kubenswrapper[5039]: I0130 14:34:11.443426 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfdvc\" (UniqueName: \"kubernetes.io/projected/9c46ecdf-d569-4ebc-8963-909b6e460e18-kube-api-access-gfdvc\") pod \"neutron-bb18-account-create-update-kkffq\" (UID: \"9c46ecdf-d569-4ebc-8963-909b6e460e18\") " pod="openstack/neutron-bb18-account-create-update-kkffq" Jan 30 14:34:11 crc kubenswrapper[5039]: I0130 14:34:11.443470 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/babc668e-cf9b-4d6c-8a45-f79e141cfc0e-operator-scripts\") pod \"neutron-db-create-f8pgs\" (UID: \"babc668e-cf9b-4d6c-8a45-f79e141cfc0e\") " pod="openstack/neutron-db-create-f8pgs" Jan 30 14:34:11 crc kubenswrapper[5039]: I0130 14:34:11.444684 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/babc668e-cf9b-4d6c-8a45-f79e141cfc0e-operator-scripts\") pod \"neutron-db-create-f8pgs\" (UID: \"babc668e-cf9b-4d6c-8a45-f79e141cfc0e\") " pod="openstack/neutron-db-create-f8pgs" Jan 30 14:34:11 crc kubenswrapper[5039]: I0130 14:34:11.470995 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkx5f\" (UniqueName: \"kubernetes.io/projected/babc668e-cf9b-4d6c-8a45-f79e141cfc0e-kube-api-access-bkx5f\") pod \"neutron-db-create-f8pgs\" (UID: \"babc668e-cf9b-4d6c-8a45-f79e141cfc0e\") " pod="openstack/neutron-db-create-f8pgs" Jan 30 14:34:11 crc kubenswrapper[5039]: I0130 14:34:11.545047 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c46ecdf-d569-4ebc-8963-909b6e460e18-operator-scripts\") pod \"neutron-bb18-account-create-update-kkffq\" (UID: \"9c46ecdf-d569-4ebc-8963-909b6e460e18\") " pod="openstack/neutron-bb18-account-create-update-kkffq" Jan 30 14:34:11 crc kubenswrapper[5039]: I0130 14:34:11.545118 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfdvc\" (UniqueName: \"kubernetes.io/projected/9c46ecdf-d569-4ebc-8963-909b6e460e18-kube-api-access-gfdvc\") pod \"neutron-bb18-account-create-update-kkffq\" (UID: \"9c46ecdf-d569-4ebc-8963-909b6e460e18\") " pod="openstack/neutron-bb18-account-create-update-kkffq" Jan 30 14:34:11 crc kubenswrapper[5039]: I0130 14:34:11.545931 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c46ecdf-d569-4ebc-8963-909b6e460e18-operator-scripts\") pod \"neutron-bb18-account-create-update-kkffq\" (UID: \"9c46ecdf-d569-4ebc-8963-909b6e460e18\") " pod="openstack/neutron-bb18-account-create-update-kkffq" Jan 30 14:34:11 crc kubenswrapper[5039]: I0130 14:34:11.561728 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfdvc\" (UniqueName: \"kubernetes.io/projected/9c46ecdf-d569-4ebc-8963-909b6e460e18-kube-api-access-gfdvc\") pod \"neutron-bb18-account-create-update-kkffq\" (UID: \"9c46ecdf-d569-4ebc-8963-909b6e460e18\") " pod="openstack/neutron-bb18-account-create-update-kkffq" Jan 30 14:34:11 crc kubenswrapper[5039]: I0130 14:34:11.597530 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-f8pgs" Jan 30 14:34:11 crc kubenswrapper[5039]: I0130 14:34:11.700544 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-bb18-account-create-update-kkffq" Jan 30 14:34:12 crc kubenswrapper[5039]: W0130 14:34:12.035727 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbabc668e_cf9b_4d6c_8a45_f79e141cfc0e.slice/crio-13b1289e2465fbb5d55eea0822c9f9123b875667276227788effccd021ccabf3 WatchSource:0}: Error finding container 13b1289e2465fbb5d55eea0822c9f9123b875667276227788effccd021ccabf3: Status 404 returned error can't find the container with id 13b1289e2465fbb5d55eea0822c9f9123b875667276227788effccd021ccabf3 Jan 30 14:34:12 crc kubenswrapper[5039]: I0130 14:34:12.037684 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-f8pgs"] Jan 30 14:34:12 crc kubenswrapper[5039]: I0130 14:34:12.122701 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-bb18-account-create-update-kkffq"] Jan 30 14:34:12 crc kubenswrapper[5039]: W0130 14:34:12.124605 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c46ecdf_d569_4ebc_8963_909b6e460e18.slice/crio-5d70df4a4130567e45bf39bac3db12eb372ec1bc8503b6876c881f3905e93380 WatchSource:0}: Error finding container 5d70df4a4130567e45bf39bac3db12eb372ec1bc8503b6876c881f3905e93380: Status 404 returned error can't find the container with id 5d70df4a4130567e45bf39bac3db12eb372ec1bc8503b6876c881f3905e93380 Jan 30 14:34:12 crc kubenswrapper[5039]: I0130 14:34:12.412979 5039 generic.go:334] "Generic (PLEG): container finished" podID="babc668e-cf9b-4d6c-8a45-f79e141cfc0e" containerID="e6aa64a45910300b400b2b42ea5a2a8fe6a9aa53a2806fee64d57f71479788a5" exitCode=0 Jan 30 14:34:12 crc kubenswrapper[5039]: I0130 14:34:12.413083 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-f8pgs" event={"ID":"babc668e-cf9b-4d6c-8a45-f79e141cfc0e","Type":"ContainerDied","Data":"e6aa64a45910300b400b2b42ea5a2a8fe6a9aa53a2806fee64d57f71479788a5"} Jan 30 14:34:12 crc kubenswrapper[5039]: I0130 14:34:12.413173 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-f8pgs" event={"ID":"babc668e-cf9b-4d6c-8a45-f79e141cfc0e","Type":"ContainerStarted","Data":"13b1289e2465fbb5d55eea0822c9f9123b875667276227788effccd021ccabf3"} Jan 30 14:34:12 crc kubenswrapper[5039]: I0130 14:34:12.416404 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bb18-account-create-update-kkffq" event={"ID":"9c46ecdf-d569-4ebc-8963-909b6e460e18","Type":"ContainerStarted","Data":"31b575644d8ccaf89bfc5f1a6ba6542847798cbe608c2683dd18ed6afb21a53e"} Jan 30 14:34:12 crc kubenswrapper[5039]: I0130 14:34:12.416456 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bb18-account-create-update-kkffq" event={"ID":"9c46ecdf-d569-4ebc-8963-909b6e460e18","Type":"ContainerStarted","Data":"5d70df4a4130567e45bf39bac3db12eb372ec1bc8503b6876c881f3905e93380"} Jan 30 14:34:12 crc kubenswrapper[5039]: I0130 14:34:12.446817 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-bb18-account-create-update-kkffq" podStartSLOduration=1.446795719 podStartE2EDuration="1.446795719s" podCreationTimestamp="2026-01-30 14:34:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:34:12.442625905 +0000 UTC m=+5417.103307152" 
watchObservedRunningTime="2026-01-30 14:34:12.446795719 +0000 UTC m=+5417.107476956" Jan 30 14:34:13 crc kubenswrapper[5039]: I0130 14:34:13.093419 5039 scope.go:117] "RemoveContainer" containerID="33707bf9f6c082f37a2c677d559a1772be55398c970c4d16a90343a477a0fad4" Jan 30 14:34:13 crc kubenswrapper[5039]: E0130 14:34:13.093688 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:34:13 crc kubenswrapper[5039]: I0130 14:34:13.431820 5039 generic.go:334] "Generic (PLEG): container finished" podID="9c46ecdf-d569-4ebc-8963-909b6e460e18" containerID="31b575644d8ccaf89bfc5f1a6ba6542847798cbe608c2683dd18ed6afb21a53e" exitCode=0 Jan 30 14:34:13 crc kubenswrapper[5039]: I0130 14:34:13.431887 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bb18-account-create-update-kkffq" event={"ID":"9c46ecdf-d569-4ebc-8963-909b6e460e18","Type":"ContainerDied","Data":"31b575644d8ccaf89bfc5f1a6ba6542847798cbe608c2683dd18ed6afb21a53e"} Jan 30 14:34:13 crc kubenswrapper[5039]: I0130 14:34:13.868054 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-f8pgs" Jan 30 14:34:13 crc kubenswrapper[5039]: I0130 14:34:13.983797 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkx5f\" (UniqueName: \"kubernetes.io/projected/babc668e-cf9b-4d6c-8a45-f79e141cfc0e-kube-api-access-bkx5f\") pod \"babc668e-cf9b-4d6c-8a45-f79e141cfc0e\" (UID: \"babc668e-cf9b-4d6c-8a45-f79e141cfc0e\") " Jan 30 14:34:13 crc kubenswrapper[5039]: I0130 14:34:13.983861 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/babc668e-cf9b-4d6c-8a45-f79e141cfc0e-operator-scripts\") pod \"babc668e-cf9b-4d6c-8a45-f79e141cfc0e\" (UID: \"babc668e-cf9b-4d6c-8a45-f79e141cfc0e\") " Jan 30 14:34:13 crc kubenswrapper[5039]: I0130 14:34:13.984638 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/babc668e-cf9b-4d6c-8a45-f79e141cfc0e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "babc668e-cf9b-4d6c-8a45-f79e141cfc0e" (UID: "babc668e-cf9b-4d6c-8a45-f79e141cfc0e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:34:13 crc kubenswrapper[5039]: I0130 14:34:13.990313 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/babc668e-cf9b-4d6c-8a45-f79e141cfc0e-kube-api-access-bkx5f" (OuterVolumeSpecName: "kube-api-access-bkx5f") pod "babc668e-cf9b-4d6c-8a45-f79e141cfc0e" (UID: "babc668e-cf9b-4d6c-8a45-f79e141cfc0e"). InnerVolumeSpecName "kube-api-access-bkx5f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:34:14 crc kubenswrapper[5039]: I0130 14:34:14.085431 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkx5f\" (UniqueName: \"kubernetes.io/projected/babc668e-cf9b-4d6c-8a45-f79e141cfc0e-kube-api-access-bkx5f\") on node \"crc\" DevicePath \"\"" Jan 30 14:34:14 crc kubenswrapper[5039]: I0130 14:34:14.085466 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/babc668e-cf9b-4d6c-8a45-f79e141cfc0e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:34:14 crc kubenswrapper[5039]: I0130 14:34:14.442081 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-f8pgs" Jan 30 14:34:14 crc kubenswrapper[5039]: I0130 14:34:14.442097 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-f8pgs" event={"ID":"babc668e-cf9b-4d6c-8a45-f79e141cfc0e","Type":"ContainerDied","Data":"13b1289e2465fbb5d55eea0822c9f9123b875667276227788effccd021ccabf3"} Jan 30 14:34:14 crc kubenswrapper[5039]: I0130 14:34:14.444046 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13b1289e2465fbb5d55eea0822c9f9123b875667276227788effccd021ccabf3" Jan 30 14:34:14 crc kubenswrapper[5039]: I0130 14:34:14.738276 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bb18-account-create-update-kkffq" Jan 30 14:34:14 crc kubenswrapper[5039]: I0130 14:34:14.796376 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfdvc\" (UniqueName: \"kubernetes.io/projected/9c46ecdf-d569-4ebc-8963-909b6e460e18-kube-api-access-gfdvc\") pod \"9c46ecdf-d569-4ebc-8963-909b6e460e18\" (UID: \"9c46ecdf-d569-4ebc-8963-909b6e460e18\") " Jan 30 14:34:14 crc kubenswrapper[5039]: I0130 14:34:14.796493 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c46ecdf-d569-4ebc-8963-909b6e460e18-operator-scripts\") pod \"9c46ecdf-d569-4ebc-8963-909b6e460e18\" (UID: \"9c46ecdf-d569-4ebc-8963-909b6e460e18\") " Jan 30 14:34:14 crc kubenswrapper[5039]: I0130 14:34:14.796927 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c46ecdf-d569-4ebc-8963-909b6e460e18-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9c46ecdf-d569-4ebc-8963-909b6e460e18" (UID: "9c46ecdf-d569-4ebc-8963-909b6e460e18"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:34:14 crc kubenswrapper[5039]: I0130 14:34:14.803311 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c46ecdf-d569-4ebc-8963-909b6e460e18-kube-api-access-gfdvc" (OuterVolumeSpecName: "kube-api-access-gfdvc") pod "9c46ecdf-d569-4ebc-8963-909b6e460e18" (UID: "9c46ecdf-d569-4ebc-8963-909b6e460e18"). InnerVolumeSpecName "kube-api-access-gfdvc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:34:14 crc kubenswrapper[5039]: I0130 14:34:14.897831 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c46ecdf-d569-4ebc-8963-909b6e460e18-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:34:14 crc kubenswrapper[5039]: I0130 14:34:14.897874 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfdvc\" (UniqueName: \"kubernetes.io/projected/9c46ecdf-d569-4ebc-8963-909b6e460e18-kube-api-access-gfdvc\") on node \"crc\" DevicePath \"\"" Jan 30 14:34:15 crc kubenswrapper[5039]: I0130 14:34:15.454174 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bb18-account-create-update-kkffq" event={"ID":"9c46ecdf-d569-4ebc-8963-909b6e460e18","Type":"ContainerDied","Data":"5d70df4a4130567e45bf39bac3db12eb372ec1bc8503b6876c881f3905e93380"} Jan 30 14:34:15 crc kubenswrapper[5039]: I0130 14:34:15.455379 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d70df4a4130567e45bf39bac3db12eb372ec1bc8503b6876c881f3905e93380" Jan 30 14:34:15 crc kubenswrapper[5039]: I0130 14:34:15.454219 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bb18-account-create-update-kkffq" Jan 30 14:34:16 crc kubenswrapper[5039]: I0130 14:34:16.636709 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-8bsx9"] Jan 30 14:34:16 crc kubenswrapper[5039]: E0130 14:34:16.637175 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="babc668e-cf9b-4d6c-8a45-f79e141cfc0e" containerName="mariadb-database-create" Jan 30 14:34:16 crc kubenswrapper[5039]: I0130 14:34:16.637193 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="babc668e-cf9b-4d6c-8a45-f79e141cfc0e" containerName="mariadb-database-create" Jan 30 14:34:16 crc kubenswrapper[5039]: E0130 14:34:16.637203 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c46ecdf-d569-4ebc-8963-909b6e460e18" containerName="mariadb-account-create-update" Jan 30 14:34:16 crc kubenswrapper[5039]: I0130 14:34:16.637210 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c46ecdf-d569-4ebc-8963-909b6e460e18" containerName="mariadb-account-create-update" Jan 30 14:34:16 crc kubenswrapper[5039]: I0130 14:34:16.637413 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c46ecdf-d569-4ebc-8963-909b6e460e18" containerName="mariadb-account-create-update" Jan 30 14:34:16 crc kubenswrapper[5039]: I0130 14:34:16.637433 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="babc668e-cf9b-4d6c-8a45-f79e141cfc0e" containerName="mariadb-database-create" Jan 30 14:34:16 crc kubenswrapper[5039]: I0130 14:34:16.640694 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-8bsx9" Jan 30 14:34:16 crc kubenswrapper[5039]: I0130 14:34:16.645941 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 30 14:34:16 crc kubenswrapper[5039]: I0130 14:34:16.646164 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-r5g8q" Jan 30 14:34:16 crc kubenswrapper[5039]: I0130 14:34:16.646115 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 30 14:34:16 crc kubenswrapper[5039]: I0130 14:34:16.647864 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-8bsx9"] Jan 30 14:34:16 crc kubenswrapper[5039]: I0130 14:34:16.829948 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ca210a91-180c-4a6a-8334-1d294092b8a3-config\") pod \"neutron-db-sync-8bsx9\" (UID: \"ca210a91-180c-4a6a-8334-1d294092b8a3\") " pod="openstack/neutron-db-sync-8bsx9" Jan 30 14:34:16 crc kubenswrapper[5039]: I0130 14:34:16.830650 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca210a91-180c-4a6a-8334-1d294092b8a3-combined-ca-bundle\") pod \"neutron-db-sync-8bsx9\" (UID: \"ca210a91-180c-4a6a-8334-1d294092b8a3\") " pod="openstack/neutron-db-sync-8bsx9" Jan 30 14:34:16 crc kubenswrapper[5039]: I0130 14:34:16.830906 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz2kx\" (UniqueName: \"kubernetes.io/projected/ca210a91-180c-4a6a-8334-1d294092b8a3-kube-api-access-sz2kx\") pod \"neutron-db-sync-8bsx9\" (UID: \"ca210a91-180c-4a6a-8334-1d294092b8a3\") " pod="openstack/neutron-db-sync-8bsx9" Jan 30 14:34:16 crc kubenswrapper[5039]: I0130 14:34:16.932782 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca210a91-180c-4a6a-8334-1d294092b8a3-combined-ca-bundle\") pod \"neutron-db-sync-8bsx9\" (UID: \"ca210a91-180c-4a6a-8334-1d294092b8a3\") " pod="openstack/neutron-db-sync-8bsx9" Jan 30 14:34:16 crc kubenswrapper[5039]: I0130 14:34:16.933243 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz2kx\" (UniqueName: \"kubernetes.io/projected/ca210a91-180c-4a6a-8334-1d294092b8a3-kube-api-access-sz2kx\") pod \"neutron-db-sync-8bsx9\" (UID: \"ca210a91-180c-4a6a-8334-1d294092b8a3\") " pod="openstack/neutron-db-sync-8bsx9" Jan 30 14:34:16 crc kubenswrapper[5039]: I0130 14:34:16.933338 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ca210a91-180c-4a6a-8334-1d294092b8a3-config\") pod \"neutron-db-sync-8bsx9\" (UID: \"ca210a91-180c-4a6a-8334-1d294092b8a3\") " pod="openstack/neutron-db-sync-8bsx9" Jan 30 14:34:16 crc kubenswrapper[5039]: I0130 14:34:16.939137 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca210a91-180c-4a6a-8334-1d294092b8a3-combined-ca-bundle\") pod \"neutron-db-sync-8bsx9\" (UID: \"ca210a91-180c-4a6a-8334-1d294092b8a3\") " pod="openstack/neutron-db-sync-8bsx9" Jan 30 14:34:16 crc kubenswrapper[5039]: I0130 14:34:16.942068 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/secret/ca210a91-180c-4a6a-8334-1d294092b8a3-config\") pod \"neutron-db-sync-8bsx9\" (UID: \"ca210a91-180c-4a6a-8334-1d294092b8a3\") " pod="openstack/neutron-db-sync-8bsx9" Jan 30 14:34:16 crc kubenswrapper[5039]: I0130 14:34:16.950997 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sz2kx\" (UniqueName: \"kubernetes.io/projected/ca210a91-180c-4a6a-8334-1d294092b8a3-kube-api-access-sz2kx\") pod \"neutron-db-sync-8bsx9\" (UID: \"ca210a91-180c-4a6a-8334-1d294092b8a3\") " pod="openstack/neutron-db-sync-8bsx9" Jan 30 14:34:16 crc kubenswrapper[5039]: I0130 14:34:16.967187 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-8bsx9" Jan 30 14:34:17 crc kubenswrapper[5039]: I0130 14:34:17.396334 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-8bsx9"] Jan 30 14:34:17 crc kubenswrapper[5039]: I0130 14:34:17.470082 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-8bsx9" event={"ID":"ca210a91-180c-4a6a-8334-1d294092b8a3","Type":"ContainerStarted","Data":"b083353169f79f3c46611983e96b326fb9bf24b066a703798c3b186f18fee8e8"} Jan 30 14:34:18 crc kubenswrapper[5039]: I0130 14:34:18.478497 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-8bsx9" event={"ID":"ca210a91-180c-4a6a-8334-1d294092b8a3","Type":"ContainerStarted","Data":"1be0d119a9975ed6d81568161c282acbfd97aa3e9d513fcb6bd6d1e8567b126b"} Jan 30 14:34:22 crc kubenswrapper[5039]: I0130 14:34:22.507759 5039 generic.go:334] "Generic (PLEG): container finished" podID="ca210a91-180c-4a6a-8334-1d294092b8a3" containerID="1be0d119a9975ed6d81568161c282acbfd97aa3e9d513fcb6bd6d1e8567b126b" exitCode=0 Jan 30 14:34:22 crc kubenswrapper[5039]: I0130 14:34:22.507847 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-8bsx9" event={"ID":"ca210a91-180c-4a6a-8334-1d294092b8a3","Type":"ContainerDied","Data":"1be0d119a9975ed6d81568161c282acbfd97aa3e9d513fcb6bd6d1e8567b126b"} Jan 30 14:34:23 crc kubenswrapper[5039]: I0130 14:34:23.885863 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-8bsx9" Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.061616 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca210a91-180c-4a6a-8334-1d294092b8a3-combined-ca-bundle\") pod \"ca210a91-180c-4a6a-8334-1d294092b8a3\" (UID: \"ca210a91-180c-4a6a-8334-1d294092b8a3\") " Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.061729 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sz2kx\" (UniqueName: \"kubernetes.io/projected/ca210a91-180c-4a6a-8334-1d294092b8a3-kube-api-access-sz2kx\") pod \"ca210a91-180c-4a6a-8334-1d294092b8a3\" (UID: \"ca210a91-180c-4a6a-8334-1d294092b8a3\") " Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.061890 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ca210a91-180c-4a6a-8334-1d294092b8a3-config\") pod \"ca210a91-180c-4a6a-8334-1d294092b8a3\" (UID: \"ca210a91-180c-4a6a-8334-1d294092b8a3\") " Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.068342 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca210a91-180c-4a6a-8334-1d294092b8a3-kube-api-access-sz2kx" (OuterVolumeSpecName: "kube-api-access-sz2kx") pod "ca210a91-180c-4a6a-8334-1d294092b8a3" (UID: "ca210a91-180c-4a6a-8334-1d294092b8a3"). InnerVolumeSpecName "kube-api-access-sz2kx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.086050 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca210a91-180c-4a6a-8334-1d294092b8a3-config" (OuterVolumeSpecName: "config") pod "ca210a91-180c-4a6a-8334-1d294092b8a3" (UID: "ca210a91-180c-4a6a-8334-1d294092b8a3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.088035 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca210a91-180c-4a6a-8334-1d294092b8a3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ca210a91-180c-4a6a-8334-1d294092b8a3" (UID: "ca210a91-180c-4a6a-8334-1d294092b8a3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.163531 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/ca210a91-180c-4a6a-8334-1d294092b8a3-config\") on node \"crc\" DevicePath \"\"" Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.163562 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca210a91-180c-4a6a-8334-1d294092b8a3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.163574 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sz2kx\" (UniqueName: \"kubernetes.io/projected/ca210a91-180c-4a6a-8334-1d294092b8a3-kube-api-access-sz2kx\") on node \"crc\" DevicePath \"\"" Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.527742 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-8bsx9" event={"ID":"ca210a91-180c-4a6a-8334-1d294092b8a3","Type":"ContainerDied","Data":"b083353169f79f3c46611983e96b326fb9bf24b066a703798c3b186f18fee8e8"} Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.527789 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b083353169f79f3c46611983e96b326fb9bf24b066a703798c3b186f18fee8e8" Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.527819 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-8bsx9" Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.700980 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-664bfc8dd9-jlc52"] Jan 30 14:34:24 crc kubenswrapper[5039]: E0130 14:34:24.701374 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca210a91-180c-4a6a-8334-1d294092b8a3" containerName="neutron-db-sync" Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.701386 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca210a91-180c-4a6a-8334-1d294092b8a3" containerName="neutron-db-sync" Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.701549 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca210a91-180c-4a6a-8334-1d294092b8a3" containerName="neutron-db-sync" Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.702418 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-664bfc8dd9-jlc52" Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.750986 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-664bfc8dd9-jlc52"] Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.790181 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-55d685cc65-wskfp"] Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.791812 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-55d685cc65-wskfp"
Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.794893 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-55d685cc65-wskfp"]
Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.799129 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.800853 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.802205 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-r5g8q"
Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.875113 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03bff807-c195-4e08-8858-545f15d0b179-combined-ca-bundle\") pod \"neutron-55d685cc65-wskfp\" (UID: \"03bff807-c195-4e08-8858-545f15d0b179\") " pod="openstack/neutron-55d685cc65-wskfp"
Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.875167 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/03bff807-c195-4e08-8858-545f15d0b179-config\") pod \"neutron-55d685cc65-wskfp\" (UID: \"03bff807-c195-4e08-8858-545f15d0b179\") " pod="openstack/neutron-55d685cc65-wskfp"
Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.875226 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3b27add-74bb-40a6-a6ba-f2b2b1d23606-dns-svc\") pod \"dnsmasq-dns-664bfc8dd9-jlc52\" (UID: \"c3b27add-74bb-40a6-a6ba-f2b2b1d23606\") " pod="openstack/dnsmasq-dns-664bfc8dd9-jlc52"
Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.875288 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfw87\" (UniqueName: \"kubernetes.io/projected/03bff807-c195-4e08-8858-545f15d0b179-kube-api-access-nfw87\") pod \"neutron-55d685cc65-wskfp\" (UID: \"03bff807-c195-4e08-8858-545f15d0b179\") " pod="openstack/neutron-55d685cc65-wskfp"
Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.875348 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3b27add-74bb-40a6-a6ba-f2b2b1d23606-ovsdbserver-sb\") pod \"dnsmasq-dns-664bfc8dd9-jlc52\" (UID: \"c3b27add-74bb-40a6-a6ba-f2b2b1d23606\") " pod="openstack/dnsmasq-dns-664bfc8dd9-jlc52"
Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.875371 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlzmd\" (UniqueName: \"kubernetes.io/projected/c3b27add-74bb-40a6-a6ba-f2b2b1d23606-kube-api-access-dlzmd\") pod \"dnsmasq-dns-664bfc8dd9-jlc52\" (UID: \"c3b27add-74bb-40a6-a6ba-f2b2b1d23606\") " pod="openstack/dnsmasq-dns-664bfc8dd9-jlc52"
Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.875415 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3b27add-74bb-40a6-a6ba-f2b2b1d23606-ovsdbserver-nb\") pod \"dnsmasq-dns-664bfc8dd9-jlc52\" (UID: \"c3b27add-74bb-40a6-a6ba-f2b2b1d23606\") " pod="openstack/dnsmasq-dns-664bfc8dd9-jlc52"
Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.875451 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/03bff807-c195-4e08-8858-545f15d0b179-httpd-config\") pod \"neutron-55d685cc65-wskfp\" (UID: \"03bff807-c195-4e08-8858-545f15d0b179\") " pod="openstack/neutron-55d685cc65-wskfp"
Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.875542 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3b27add-74bb-40a6-a6ba-f2b2b1d23606-config\") pod \"dnsmasq-dns-664bfc8dd9-jlc52\" (UID: \"c3b27add-74bb-40a6-a6ba-f2b2b1d23606\") " pod="openstack/dnsmasq-dns-664bfc8dd9-jlc52"
Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.976509 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3b27add-74bb-40a6-a6ba-f2b2b1d23606-ovsdbserver-sb\") pod \"dnsmasq-dns-664bfc8dd9-jlc52\" (UID: \"c3b27add-74bb-40a6-a6ba-f2b2b1d23606\") " pod="openstack/dnsmasq-dns-664bfc8dd9-jlc52"
Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.976552 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlzmd\" (UniqueName: \"kubernetes.io/projected/c3b27add-74bb-40a6-a6ba-f2b2b1d23606-kube-api-access-dlzmd\") pod \"dnsmasq-dns-664bfc8dd9-jlc52\" (UID: \"c3b27add-74bb-40a6-a6ba-f2b2b1d23606\") " pod="openstack/dnsmasq-dns-664bfc8dd9-jlc52"
Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.976589 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3b27add-74bb-40a6-a6ba-f2b2b1d23606-ovsdbserver-nb\") pod \"dnsmasq-dns-664bfc8dd9-jlc52\" (UID: \"c3b27add-74bb-40a6-a6ba-f2b2b1d23606\") " pod="openstack/dnsmasq-dns-664bfc8dd9-jlc52"
Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.976628 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/03bff807-c195-4e08-8858-545f15d0b179-httpd-config\") pod \"neutron-55d685cc65-wskfp\" (UID: \"03bff807-c195-4e08-8858-545f15d0b179\") " pod="openstack/neutron-55d685cc65-wskfp"
Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.976662 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3b27add-74bb-40a6-a6ba-f2b2b1d23606-config\") pod \"dnsmasq-dns-664bfc8dd9-jlc52\" (UID: \"c3b27add-74bb-40a6-a6ba-f2b2b1d23606\") " pod="openstack/dnsmasq-dns-664bfc8dd9-jlc52"
Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.976693 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03bff807-c195-4e08-8858-545f15d0b179-combined-ca-bundle\") pod \"neutron-55d685cc65-wskfp\" (UID: \"03bff807-c195-4e08-8858-545f15d0b179\") " pod="openstack/neutron-55d685cc65-wskfp"
Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.976712 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/03bff807-c195-4e08-8858-545f15d0b179-config\") pod \"neutron-55d685cc65-wskfp\" (UID: \"03bff807-c195-4e08-8858-545f15d0b179\") " pod="openstack/neutron-55d685cc65-wskfp"
Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.976734 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3b27add-74bb-40a6-a6ba-f2b2b1d23606-dns-svc\") pod \"dnsmasq-dns-664bfc8dd9-jlc52\" (UID: \"c3b27add-74bb-40a6-a6ba-f2b2b1d23606\") " pod="openstack/dnsmasq-dns-664bfc8dd9-jlc52"
Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.976767 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfw87\" (UniqueName: \"kubernetes.io/projected/03bff807-c195-4e08-8858-545f15d0b179-kube-api-access-nfw87\") pod \"neutron-55d685cc65-wskfp\" (UID: \"03bff807-c195-4e08-8858-545f15d0b179\") " pod="openstack/neutron-55d685cc65-wskfp"
Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.977567 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3b27add-74bb-40a6-a6ba-f2b2b1d23606-ovsdbserver-sb\") pod \"dnsmasq-dns-664bfc8dd9-jlc52\" (UID: \"c3b27add-74bb-40a6-a6ba-f2b2b1d23606\") " pod="openstack/dnsmasq-dns-664bfc8dd9-jlc52"
Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.977921 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3b27add-74bb-40a6-a6ba-f2b2b1d23606-ovsdbserver-nb\") pod \"dnsmasq-dns-664bfc8dd9-jlc52\" (UID: \"c3b27add-74bb-40a6-a6ba-f2b2b1d23606\") " pod="openstack/dnsmasq-dns-664bfc8dd9-jlc52"
Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.978859 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3b27add-74bb-40a6-a6ba-f2b2b1d23606-dns-svc\") pod \"dnsmasq-dns-664bfc8dd9-jlc52\" (UID: \"c3b27add-74bb-40a6-a6ba-f2b2b1d23606\") " pod="openstack/dnsmasq-dns-664bfc8dd9-jlc52"
Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.979556 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3b27add-74bb-40a6-a6ba-f2b2b1d23606-config\") pod \"dnsmasq-dns-664bfc8dd9-jlc52\" (UID: \"c3b27add-74bb-40a6-a6ba-f2b2b1d23606\") " pod="openstack/dnsmasq-dns-664bfc8dd9-jlc52"
Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.991335 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/03bff807-c195-4e08-8858-545f15d0b179-httpd-config\") pod \"neutron-55d685cc65-wskfp\" (UID: \"03bff807-c195-4e08-8858-545f15d0b179\") " pod="openstack/neutron-55d685cc65-wskfp"
Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.991826 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/03bff807-c195-4e08-8858-545f15d0b179-config\") pod \"neutron-55d685cc65-wskfp\" (UID: \"03bff807-c195-4e08-8858-545f15d0b179\") " pod="openstack/neutron-55d685cc65-wskfp"
Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.992108 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03bff807-c195-4e08-8858-545f15d0b179-combined-ca-bundle\") pod \"neutron-55d685cc65-wskfp\" (UID: \"03bff807-c195-4e08-8858-545f15d0b179\") " pod="openstack/neutron-55d685cc65-wskfp"
Jan 30 14:34:25 crc kubenswrapper[5039]: I0130 14:34:25.002052 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlzmd\" (UniqueName: \"kubernetes.io/projected/c3b27add-74bb-40a6-a6ba-f2b2b1d23606-kube-api-access-dlzmd\") pod \"dnsmasq-dns-664bfc8dd9-jlc52\" (UID: \"c3b27add-74bb-40a6-a6ba-f2b2b1d23606\") " pod="openstack/dnsmasq-dns-664bfc8dd9-jlc52"
Jan 30 14:34:25 crc kubenswrapper[5039]: I0130 14:34:25.003370 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfw87\" (UniqueName: \"kubernetes.io/projected/03bff807-c195-4e08-8858-545f15d0b179-kube-api-access-nfw87\") pod \"neutron-55d685cc65-wskfp\" (UID: \"03bff807-c195-4e08-8858-545f15d0b179\") " pod="openstack/neutron-55d685cc65-wskfp"
Jan 30 14:34:25 crc kubenswrapper[5039]: I0130 14:34:25.028695 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-664bfc8dd9-jlc52"
Jan 30 14:34:25 crc kubenswrapper[5039]: I0130 14:34:25.126321 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-55d685cc65-wskfp"
Jan 30 14:34:25 crc kubenswrapper[5039]: I0130 14:34:25.527131 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-664bfc8dd9-jlc52"]
Jan 30 14:34:25 crc kubenswrapper[5039]: W0130 14:34:25.533634 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3b27add_74bb_40a6_a6ba_f2b2b1d23606.slice/crio-e4c66676fa83b8d5733755c06a92e126d1e856453159abb3420871c68a71c972 WatchSource:0}: Error finding container e4c66676fa83b8d5733755c06a92e126d1e856453159abb3420871c68a71c972: Status 404 returned error can't find the container with id e4c66676fa83b8d5733755c06a92e126d1e856453159abb3420871c68a71c972
Jan 30 14:34:25 crc kubenswrapper[5039]: I0130 14:34:25.727967 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-55d685cc65-wskfp"]
Jan 30 14:34:25 crc kubenswrapper[5039]: W0130 14:34:25.735630 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod03bff807_c195_4e08_8858_545f15d0b179.slice/crio-f172dc0446005d2cbc32ad03a196f84ec18bc747490037ecf3550f2c76119a4d WatchSource:0}: Error finding container f172dc0446005d2cbc32ad03a196f84ec18bc747490037ecf3550f2c76119a4d: Status 404 returned error can't find the container with id f172dc0446005d2cbc32ad03a196f84ec18bc747490037ecf3550f2c76119a4d
Jan 30 14:34:26 crc kubenswrapper[5039]: I0130 14:34:26.099998 5039 scope.go:117] "RemoveContainer" containerID="33707bf9f6c082f37a2c677d559a1772be55398c970c4d16a90343a477a0fad4"
Jan 30 14:34:26 crc kubenswrapper[5039]: E0130 14:34:26.100333 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5"
Jan 30 14:34:26 crc kubenswrapper[5039]: I0130 14:34:26.545670 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-55d685cc65-wskfp" event={"ID":"03bff807-c195-4e08-8858-545f15d0b179","Type":"ContainerStarted","Data":"daf80a1d554aa4ae24260e3471e4afc1e4c77e1b57aeb5b18e9b2299353f5760"}
Jan 30 14:34:26 crc kubenswrapper[5039]: I0130 14:34:26.546094 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-55d685cc65-wskfp" event={"ID":"03bff807-c195-4e08-8858-545f15d0b179","Type":"ContainerStarted","Data":"46f7cd001186c5a3bc408805ccbd8e2376046e0c43d4a67263cc9137a479d43d"}
Jan 30 14:34:26 crc kubenswrapper[5039]: I0130 14:34:26.546115 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-55d685cc65-wskfp"
Jan 30 14:34:26 crc kubenswrapper[5039]: I0130 14:34:26.546128 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-55d685cc65-wskfp" event={"ID":"03bff807-c195-4e08-8858-545f15d0b179","Type":"ContainerStarted","Data":"f172dc0446005d2cbc32ad03a196f84ec18bc747490037ecf3550f2c76119a4d"}
Jan 30 14:34:26 crc kubenswrapper[5039]: I0130 14:34:26.547866 5039 generic.go:334] "Generic (PLEG): container finished" podID="c3b27add-74bb-40a6-a6ba-f2b2b1d23606" containerID="f67401eadb09676777bf53323c7f5e7c9b31dbccb1cb792dccf98a9796999970" exitCode=0
Jan 30 14:34:26 crc kubenswrapper[5039]: I0130 14:34:26.547918 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-664bfc8dd9-jlc52" event={"ID":"c3b27add-74bb-40a6-a6ba-f2b2b1d23606","Type":"ContainerDied","Data":"f67401eadb09676777bf53323c7f5e7c9b31dbccb1cb792dccf98a9796999970"}
Jan 30 14:34:26 crc kubenswrapper[5039]: I0130 14:34:26.547946 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-664bfc8dd9-jlc52" event={"ID":"c3b27add-74bb-40a6-a6ba-f2b2b1d23606","Type":"ContainerStarted","Data":"e4c66676fa83b8d5733755c06a92e126d1e856453159abb3420871c68a71c972"}
Jan 30 14:34:26 crc kubenswrapper[5039]: I0130 14:34:26.571918 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-55d685cc65-wskfp" podStartSLOduration=2.571896329 podStartE2EDuration="2.571896329s" podCreationTimestamp="2026-01-30 14:34:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:34:26.567772697 +0000 UTC m=+5431.228453944" watchObservedRunningTime="2026-01-30 14:34:26.571896329 +0000 UTC m=+5431.232577556"
Jan 30 14:34:27 crc kubenswrapper[5039]: I0130 14:34:27.555901 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-664bfc8dd9-jlc52" event={"ID":"c3b27add-74bb-40a6-a6ba-f2b2b1d23606","Type":"ContainerStarted","Data":"b29dec4f1b260b0d0e8dab576e794a6ae169d14b9c50b349630715242704acd0"}
Jan 30 14:34:27 crc kubenswrapper[5039]: I0130 14:34:27.575287 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-664bfc8dd9-jlc52" podStartSLOduration=3.575267594 podStartE2EDuration="3.575267594s" podCreationTimestamp="2026-01-30 14:34:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:34:27.572686444 +0000 UTC m=+5432.233367701" watchObservedRunningTime="2026-01-30 14:34:27.575267594 +0000 UTC m=+5432.235948821"
Jan 30 14:34:28 crc kubenswrapper[5039]: I0130 14:34:28.562462 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-664bfc8dd9-jlc52"
Jan 30 14:34:35 crc kubenswrapper[5039]: I0130 14:34:35.030215 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-664bfc8dd9-jlc52"
Jan 30 14:34:35 crc kubenswrapper[5039]: I0130 14:34:35.086814 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d5d9f965c-c4r24"]
Jan 30 14:34:35 crc kubenswrapper[5039]: I0130 14:34:35.087141 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d5d9f965c-c4r24" podUID="eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6" containerName="dnsmasq-dns" containerID="cri-o://c7963b3b2e6687c3df67899f1a5772640bcbd9180d38f8e12ee9a8286dcafcb1" gracePeriod=10
Jan 30 14:34:35 crc kubenswrapper[5039]: I0130 14:34:35.621948 5039 generic.go:334] "Generic (PLEG): container finished" podID="eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6" containerID="c7963b3b2e6687c3df67899f1a5772640bcbd9180d38f8e12ee9a8286dcafcb1" exitCode=0
Jan 30 14:34:35 crc kubenswrapper[5039]: I0130 14:34:35.622032 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d5d9f965c-c4r24" event={"ID":"eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6","Type":"ContainerDied","Data":"c7963b3b2e6687c3df67899f1a5772640bcbd9180d38f8e12ee9a8286dcafcb1"}
Jan 30 14:34:35 crc kubenswrapper[5039]: I0130 14:34:35.622064 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d5d9f965c-c4r24" event={"ID":"eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6","Type":"ContainerDied","Data":"3ed9cd47161eb6a4e4864f0a61a375ca3939a0cb5052a190025eb30804d3836e"}
Jan 30 14:34:35 crc kubenswrapper[5039]: I0130 14:34:35.622076 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ed9cd47161eb6a4e4864f0a61a375ca3939a0cb5052a190025eb30804d3836e"
Jan 30 14:34:35 crc kubenswrapper[5039]: I0130 14:34:35.683230 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d5d9f965c-c4r24"
Jan 30 14:34:35 crc kubenswrapper[5039]: I0130 14:34:35.754329 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpj28\" (UniqueName: \"kubernetes.io/projected/eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6-kube-api-access-lpj28\") pod \"eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6\" (UID: \"eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6\") "
Jan 30 14:34:35 crc kubenswrapper[5039]: I0130 14:34:35.754471 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6-dns-svc\") pod \"eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6\" (UID: \"eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6\") "
Jan 30 14:34:35 crc kubenswrapper[5039]: I0130 14:34:35.754540 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6-config\") pod \"eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6\" (UID: \"eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6\") "
Jan 30 14:34:35 crc kubenswrapper[5039]: I0130 14:34:35.754594 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6-ovsdbserver-nb\") pod \"eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6\" (UID: \"eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6\") "
Jan 30 14:34:35 crc kubenswrapper[5039]: I0130 14:34:35.754630 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6-ovsdbserver-sb\") pod \"eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6\" (UID: \"eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6\") "
Jan 30 14:34:35 crc kubenswrapper[5039]: I0130 14:34:35.762300 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6-kube-api-access-lpj28" (OuterVolumeSpecName: "kube-api-access-lpj28") pod "eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6" (UID: "eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6"). InnerVolumeSpecName "kube-api-access-lpj28". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 14:34:35 crc kubenswrapper[5039]: I0130 14:34:35.793445 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6-config" (OuterVolumeSpecName: "config") pod "eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6" (UID: "eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 14:34:35 crc kubenswrapper[5039]: I0130 14:34:35.794684 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6" (UID: "eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 14:34:35 crc kubenswrapper[5039]: I0130 14:34:35.797499 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6" (UID: "eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 14:34:35 crc kubenswrapper[5039]: I0130 14:34:35.798101 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6" (UID: "eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 14:34:35 crc kubenswrapper[5039]: I0130 14:34:35.856473 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lpj28\" (UniqueName: \"kubernetes.io/projected/eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6-kube-api-access-lpj28\") on node \"crc\" DevicePath \"\""
Jan 30 14:34:35 crc kubenswrapper[5039]: I0130 14:34:35.856512 5039 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 30 14:34:35 crc kubenswrapper[5039]: I0130 14:34:35.856522 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6-config\") on node \"crc\" DevicePath \"\""
Jan 30 14:34:35 crc kubenswrapper[5039]: I0130 14:34:35.856533 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 30 14:34:35 crc kubenswrapper[5039]: I0130 14:34:35.856541 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 30 14:34:36 crc kubenswrapper[5039]: I0130 14:34:36.628672 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d5d9f965c-c4r24"
Jan 30 14:34:36 crc kubenswrapper[5039]: I0130 14:34:36.649850 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d5d9f965c-c4r24"]
Jan 30 14:34:36 crc kubenswrapper[5039]: I0130 14:34:36.657229 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d5d9f965c-c4r24"]
Jan 30 14:34:38 crc kubenswrapper[5039]: I0130 14:34:38.106339 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6" path="/var/lib/kubelet/pods/eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6/volumes"
Jan 30 14:34:39 crc kubenswrapper[5039]: I0130 14:34:39.093679 5039 scope.go:117] "RemoveContainer" containerID="33707bf9f6c082f37a2c677d559a1772be55398c970c4d16a90343a477a0fad4"
Jan 30 14:34:39 crc kubenswrapper[5039]: E0130 14:34:39.094083 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5"
Jan 30 14:34:54 crc kubenswrapper[5039]: I0130 14:34:54.093496 5039 scope.go:117] "RemoveContainer" containerID="33707bf9f6c082f37a2c677d559a1772be55398c970c4d16a90343a477a0fad4"
Jan 30 14:34:54 crc kubenswrapper[5039]: E0130 14:34:54.094289 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5"
Jan 30 14:34:55 crc kubenswrapper[5039]: I0130 14:34:55.137424 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-55d685cc65-wskfp"
Jan 30 14:35:02 crc kubenswrapper[5039]: I0130 14:35:02.307326 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-5d2vz"]
Jan 30 14:35:02 crc kubenswrapper[5039]: E0130 14:35:02.308301 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6" containerName="dnsmasq-dns"
Jan 30 14:35:02 crc kubenswrapper[5039]: I0130 14:35:02.308319 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6" containerName="dnsmasq-dns"
Jan 30 14:35:02 crc kubenswrapper[5039]: E0130 14:35:02.308335 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6" containerName="init"
Jan 30 14:35:02 crc kubenswrapper[5039]: I0130 14:35:02.308343 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6" containerName="init"
Jan 30 14:35:02 crc kubenswrapper[5039]: I0130 14:35:02.308516 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6" containerName="dnsmasq-dns"
Jan 30 14:35:02 crc kubenswrapper[5039]: I0130 14:35:02.309133 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-5d2vz"
Jan 30 14:35:02 crc kubenswrapper[5039]: I0130 14:35:02.319635 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-5d2vz"]
Jan 30 14:35:02 crc kubenswrapper[5039]: I0130 14:35:02.325129 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcht6\" (UniqueName: \"kubernetes.io/projected/de9c141b-39af-4717-91c7-32de6df6ca1d-kube-api-access-wcht6\") pod \"glance-db-create-5d2vz\" (UID: \"de9c141b-39af-4717-91c7-32de6df6ca1d\") " pod="openstack/glance-db-create-5d2vz"
Jan 30 14:35:02 crc kubenswrapper[5039]: I0130 14:35:02.325385 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de9c141b-39af-4717-91c7-32de6df6ca1d-operator-scripts\") pod \"glance-db-create-5d2vz\" (UID: \"de9c141b-39af-4717-91c7-32de6df6ca1d\") " pod="openstack/glance-db-create-5d2vz"
Jan 30 14:35:02 crc kubenswrapper[5039]: I0130 14:35:02.400371 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-200a-account-create-update-8xkrb"]
Jan 30 14:35:02 crc kubenswrapper[5039]: I0130 14:35:02.401631 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-200a-account-create-update-8xkrb"
Jan 30 14:35:02 crc kubenswrapper[5039]: I0130 14:35:02.405583 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Jan 30 14:35:02 crc kubenswrapper[5039]: I0130 14:35:02.413315 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-200a-account-create-update-8xkrb"]
Jan 30 14:35:02 crc kubenswrapper[5039]: I0130 14:35:02.426444 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f58690d3-b736-4e20-973e-dc1a555592a1-operator-scripts\") pod \"glance-200a-account-create-update-8xkrb\" (UID: \"f58690d3-b736-4e20-973e-dc1a555592a1\") " pod="openstack/glance-200a-account-create-update-8xkrb"
Jan 30 14:35:02 crc kubenswrapper[5039]: I0130 14:35:02.426495 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcht6\" (UniqueName: \"kubernetes.io/projected/de9c141b-39af-4717-91c7-32de6df6ca1d-kube-api-access-wcht6\") pod \"glance-db-create-5d2vz\" (UID: \"de9c141b-39af-4717-91c7-32de6df6ca1d\") " pod="openstack/glance-db-create-5d2vz"
Jan 30 14:35:02 crc kubenswrapper[5039]: I0130 14:35:02.426555 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de9c141b-39af-4717-91c7-32de6df6ca1d-operator-scripts\") pod \"glance-db-create-5d2vz\" (UID: \"de9c141b-39af-4717-91c7-32de6df6ca1d\") " pod="openstack/glance-db-create-5d2vz"
Jan 30 14:35:02 crc kubenswrapper[5039]: I0130 14:35:02.426583 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l62r\" (UniqueName: \"kubernetes.io/projected/f58690d3-b736-4e20-973e-dc1a555592a1-kube-api-access-4l62r\") pod \"glance-200a-account-create-update-8xkrb\" (UID: \"f58690d3-b736-4e20-973e-dc1a555592a1\") " pod="openstack/glance-200a-account-create-update-8xkrb"
Jan 30 14:35:02 crc kubenswrapper[5039]: I0130 14:35:02.427425 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de9c141b-39af-4717-91c7-32de6df6ca1d-operator-scripts\") pod \"glance-db-create-5d2vz\" (UID: \"de9c141b-39af-4717-91c7-32de6df6ca1d\") " pod="openstack/glance-db-create-5d2vz"
Jan 30 14:35:02 crc kubenswrapper[5039]: I0130 14:35:02.444256 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcht6\" (UniqueName: \"kubernetes.io/projected/de9c141b-39af-4717-91c7-32de6df6ca1d-kube-api-access-wcht6\") pod \"glance-db-create-5d2vz\" (UID: \"de9c141b-39af-4717-91c7-32de6df6ca1d\") " pod="openstack/glance-db-create-5d2vz"
Jan 30 14:35:02 crc kubenswrapper[5039]: I0130 14:35:02.529065 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4l62r\" (UniqueName: \"kubernetes.io/projected/f58690d3-b736-4e20-973e-dc1a555592a1-kube-api-access-4l62r\") pod \"glance-200a-account-create-update-8xkrb\" (UID: \"f58690d3-b736-4e20-973e-dc1a555592a1\") " pod="openstack/glance-200a-account-create-update-8xkrb"
Jan 30 14:35:02 crc kubenswrapper[5039]: I0130 14:35:02.529217 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f58690d3-b736-4e20-973e-dc1a555592a1-operator-scripts\") pod \"glance-200a-account-create-update-8xkrb\" (UID: \"f58690d3-b736-4e20-973e-dc1a555592a1\") " pod="openstack/glance-200a-account-create-update-8xkrb"
Jan 30 14:35:02 crc kubenswrapper[5039]: I0130 14:35:02.530774 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f58690d3-b736-4e20-973e-dc1a555592a1-operator-scripts\") pod \"glance-200a-account-create-update-8xkrb\" (UID: \"f58690d3-b736-4e20-973e-dc1a555592a1\") " pod="openstack/glance-200a-account-create-update-8xkrb"
Jan 30 14:35:02 crc kubenswrapper[5039]: I0130 14:35:02.549412 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4l62r\" (UniqueName: \"kubernetes.io/projected/f58690d3-b736-4e20-973e-dc1a555592a1-kube-api-access-4l62r\") pod \"glance-200a-account-create-update-8xkrb\" (UID: \"f58690d3-b736-4e20-973e-dc1a555592a1\") " pod="openstack/glance-200a-account-create-update-8xkrb"
Jan 30 14:35:02 crc kubenswrapper[5039]: I0130 14:35:02.637685 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-5d2vz"
Jan 30 14:35:02 crc kubenswrapper[5039]: I0130 14:35:02.724194 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-200a-account-create-update-8xkrb"
Jan 30 14:35:03 crc kubenswrapper[5039]: I0130 14:35:03.255189 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-5d2vz"]
Jan 30 14:35:03 crc kubenswrapper[5039]: I0130 14:35:03.308212 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-200a-account-create-update-8xkrb"]
Jan 30 14:35:03 crc kubenswrapper[5039]: W0130 14:35:03.319705 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf58690d3_b736_4e20_973e_dc1a555592a1.slice/crio-95c1b3f3645d5382b9ba2309dc889e0d53f33273dc8e30255658caa59295dcd3 WatchSource:0}: Error finding container 95c1b3f3645d5382b9ba2309dc889e0d53f33273dc8e30255658caa59295dcd3: Status 404 returned error can't find the container with id 95c1b3f3645d5382b9ba2309dc889e0d53f33273dc8e30255658caa59295dcd3
Jan 30 14:35:03 crc kubenswrapper[5039]: I0130 14:35:03.867052 5039 generic.go:334] "Generic (PLEG): container finished" podID="f58690d3-b736-4e20-973e-dc1a555592a1" containerID="7945a5bed6462dd67a2c3f80669fd6928f7d90566b57cf2e307de071698b9515" exitCode=0
Jan 30 14:35:03 crc kubenswrapper[5039]: I0130 14:35:03.867155 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-200a-account-create-update-8xkrb" event={"ID":"f58690d3-b736-4e20-973e-dc1a555592a1","Type":"ContainerDied","Data":"7945a5bed6462dd67a2c3f80669fd6928f7d90566b57cf2e307de071698b9515"}
Jan 30 14:35:03 crc kubenswrapper[5039]: I0130 14:35:03.867192 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-200a-account-create-update-8xkrb" event={"ID":"f58690d3-b736-4e20-973e-dc1a555592a1","Type":"ContainerStarted","Data":"95c1b3f3645d5382b9ba2309dc889e0d53f33273dc8e30255658caa59295dcd3"}
Jan 30 14:35:03 crc kubenswrapper[5039]: I0130 14:35:03.870701 5039 generic.go:334] "Generic (PLEG): container finished" podID="de9c141b-39af-4717-91c7-32de6df6ca1d" containerID="f6c851267b6f51bd46dd6cb1323b4f96452480323d26b2a25fe0a136b252f695" exitCode=0
Jan 30 14:35:03 crc kubenswrapper[5039]: I0130 14:35:03.870760 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-5d2vz" event={"ID":"de9c141b-39af-4717-91c7-32de6df6ca1d","Type":"ContainerDied","Data":"f6c851267b6f51bd46dd6cb1323b4f96452480323d26b2a25fe0a136b252f695"}
Jan 30 14:35:03 crc kubenswrapper[5039]: I0130 14:35:03.870795 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-5d2vz" event={"ID":"de9c141b-39af-4717-91c7-32de6df6ca1d","Type":"ContainerStarted","Data":"d8b2ccb2eab2a0ee3fde09cfe13f4e8ec82ecaad119eb3d8edb5535146bc71cf"}
Jan 30 14:35:05 crc kubenswrapper[5039]: I0130 14:35:05.234657 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-200a-account-create-update-8xkrb"
Jan 30 14:35:05 crc kubenswrapper[5039]: I0130 14:35:05.240452 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-5d2vz"
Jan 30 14:35:05 crc kubenswrapper[5039]: I0130 14:35:05.273106 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcht6\" (UniqueName: \"kubernetes.io/projected/de9c141b-39af-4717-91c7-32de6df6ca1d-kube-api-access-wcht6\") pod \"de9c141b-39af-4717-91c7-32de6df6ca1d\" (UID: \"de9c141b-39af-4717-91c7-32de6df6ca1d\") "
Jan 30 14:35:05 crc kubenswrapper[5039]: I0130 14:35:05.273164 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f58690d3-b736-4e20-973e-dc1a555592a1-operator-scripts\") pod \"f58690d3-b736-4e20-973e-dc1a555592a1\" (UID: \"f58690d3-b736-4e20-973e-dc1a555592a1\") "
Jan 30 14:35:05 crc kubenswrapper[5039]: I0130 14:35:05.273277 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4l62r\" (UniqueName: \"kubernetes.io/projected/f58690d3-b736-4e20-973e-dc1a555592a1-kube-api-access-4l62r\") pod \"f58690d3-b736-4e20-973e-dc1a555592a1\" (UID: \"f58690d3-b736-4e20-973e-dc1a555592a1\") "
Jan 30 14:35:05 crc kubenswrapper[5039]: I0130 14:35:05.273452 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de9c141b-39af-4717-91c7-32de6df6ca1d-operator-scripts\") pod \"de9c141b-39af-4717-91c7-32de6df6ca1d\" (UID: \"de9c141b-39af-4717-91c7-32de6df6ca1d\") "
Jan 30 14:35:05 crc kubenswrapper[5039]: I0130 14:35:05.274378 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f58690d3-b736-4e20-973e-dc1a555592a1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f58690d3-b736-4e20-973e-dc1a555592a1" (UID: "f58690d3-b736-4e20-973e-dc1a555592a1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 14:35:05 crc kubenswrapper[5039]: I0130 14:35:05.274531 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de9c141b-39af-4717-91c7-32de6df6ca1d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "de9c141b-39af-4717-91c7-32de6df6ca1d" (UID: "de9c141b-39af-4717-91c7-32de6df6ca1d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 14:35:05 crc kubenswrapper[5039]: I0130 14:35:05.279494 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f58690d3-b736-4e20-973e-dc1a555592a1-kube-api-access-4l62r" (OuterVolumeSpecName: "kube-api-access-4l62r") pod "f58690d3-b736-4e20-973e-dc1a555592a1" (UID: "f58690d3-b736-4e20-973e-dc1a555592a1"). InnerVolumeSpecName "kube-api-access-4l62r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 14:35:05 crc kubenswrapper[5039]: I0130 14:35:05.280049 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de9c141b-39af-4717-91c7-32de6df6ca1d-kube-api-access-wcht6" (OuterVolumeSpecName: "kube-api-access-wcht6") pod "de9c141b-39af-4717-91c7-32de6df6ca1d" (UID: "de9c141b-39af-4717-91c7-32de6df6ca1d"). InnerVolumeSpecName "kube-api-access-wcht6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 14:35:05 crc kubenswrapper[5039]: I0130 14:35:05.376066 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de9c141b-39af-4717-91c7-32de6df6ca1d-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 14:35:05 crc kubenswrapper[5039]: I0130 14:35:05.376104 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcht6\" (UniqueName: \"kubernetes.io/projected/de9c141b-39af-4717-91c7-32de6df6ca1d-kube-api-access-wcht6\") on node \"crc\" DevicePath \"\""
Jan 30 14:35:05 crc kubenswrapper[5039]: I0130 14:35:05.376119 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f58690d3-b736-4e20-973e-dc1a555592a1-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 14:35:05 crc kubenswrapper[5039]: I0130 14:35:05.376130 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4l62r\" (UniqueName: \"kubernetes.io/projected/f58690d3-b736-4e20-973e-dc1a555592a1-kube-api-access-4l62r\") on node \"crc\" DevicePath \"\""
Jan 30 14:35:05 crc kubenswrapper[5039]: I0130 14:35:05.888411 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-200a-account-create-update-8xkrb" event={"ID":"f58690d3-b736-4e20-973e-dc1a555592a1","Type":"ContainerDied","Data":"95c1b3f3645d5382b9ba2309dc889e0d53f33273dc8e30255658caa59295dcd3"}
Jan 30 14:35:05 crc kubenswrapper[5039]: I0130 14:35:05.888476 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95c1b3f3645d5382b9ba2309dc889e0d53f33273dc8e30255658caa59295dcd3"
Jan 30 14:35:05 crc kubenswrapper[5039]: I0130 14:35:05.888529 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-200a-account-create-update-8xkrb"
Jan 30 14:35:05 crc kubenswrapper[5039]: I0130 14:35:05.890445 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-5d2vz" event={"ID":"de9c141b-39af-4717-91c7-32de6df6ca1d","Type":"ContainerDied","Data":"d8b2ccb2eab2a0ee3fde09cfe13f4e8ec82ecaad119eb3d8edb5535146bc71cf"}
Jan 30 14:35:05 crc kubenswrapper[5039]: I0130 14:35:05.890620 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8b2ccb2eab2a0ee3fde09cfe13f4e8ec82ecaad119eb3d8edb5535146bc71cf"
Jan 30 14:35:05 crc kubenswrapper[5039]: I0130 14:35:05.890523 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-5d2vz"
Jan 30 14:35:07 crc kubenswrapper[5039]: I0130 14:35:07.106440 5039 scope.go:117] "RemoveContainer" containerID="33707bf9f6c082f37a2c677d559a1772be55398c970c4d16a90343a477a0fad4"
Jan 30 14:35:07 crc kubenswrapper[5039]: E0130 14:35:07.107398 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5"
Jan 30 14:35:07 crc kubenswrapper[5039]: I0130 14:35:07.558973 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-cl4vn"]
Jan 30 14:35:07 crc kubenswrapper[5039]: E0130 14:35:07.559708 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f58690d3-b736-4e20-973e-dc1a555592a1" containerName="mariadb-account-create-update"
Jan 30 14:35:07 crc kubenswrapper[5039]: I0130 14:35:07.559735 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="f58690d3-b736-4e20-973e-dc1a555592a1" containerName="mariadb-account-create-update"
Jan 30 14:35:07 crc kubenswrapper[5039]: E0130 14:35:07.559751 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de9c141b-39af-4717-91c7-32de6df6ca1d" containerName="mariadb-database-create"
Jan 30 14:35:07 crc kubenswrapper[5039]: I0130 14:35:07.559760 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="de9c141b-39af-4717-91c7-32de6df6ca1d" containerName="mariadb-database-create"
Jan 30 14:35:07 crc kubenswrapper[5039]: I0130 14:35:07.559964 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="f58690d3-b736-4e20-973e-dc1a555592a1" containerName="mariadb-account-create-update"
Jan 30 14:35:07 crc kubenswrapper[5039]: I0130 14:35:07.559987 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="de9c141b-39af-4717-91c7-32de6df6ca1d" containerName="mariadb-database-create"
Jan 30 14:35:07 crc kubenswrapper[5039]: I0130 14:35:07.560984 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-cl4vn"
Jan 30 14:35:07 crc kubenswrapper[5039]: I0130 14:35:07.566876 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-f5l5t"
Jan 30 14:35:07 crc kubenswrapper[5039]: I0130 14:35:07.566887 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data"
Jan 30 14:35:07 crc kubenswrapper[5039]: I0130 14:35:07.572862 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-cl4vn"]
Jan 30 14:35:07 crc kubenswrapper[5039]: I0130 14:35:07.617077 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/00da7584-6573-4dac-bfd1-ea7c53ad5b93-db-sync-config-data\") pod \"glance-db-sync-cl4vn\" (UID: \"00da7584-6573-4dac-bfd1-ea7c53ad5b93\") " pod="openstack/glance-db-sync-cl4vn"
Jan 30 14:35:07 crc kubenswrapper[5039]: I0130 14:35:07.617190 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlzd9\" (UniqueName: \"kubernetes.io/projected/00da7584-6573-4dac-bfd1-ea7c53ad5b93-kube-api-access-qlzd9\") pod \"glance-db-sync-cl4vn\" (UID: \"00da7584-6573-4dac-bfd1-ea7c53ad5b93\") " pod="openstack/glance-db-sync-cl4vn"
Jan 30 14:35:07 crc kubenswrapper[5039]: I0130 14:35:07.617275 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00da7584-6573-4dac-bfd1-ea7c53ad5b93-combined-ca-bundle\") pod \"glance-db-sync-cl4vn\" (UID: \"00da7584-6573-4dac-bfd1-ea7c53ad5b93\") " pod="openstack/glance-db-sync-cl4vn"
Jan 30 14:35:07 crc kubenswrapper[5039]: I0130 14:35:07.617333 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00da7584-6573-4dac-bfd1-ea7c53ad5b93-config-data\") pod \"glance-db-sync-cl4vn\" (UID: \"00da7584-6573-4dac-bfd1-ea7c53ad5b93\") " pod="openstack/glance-db-sync-cl4vn"
Jan 30 14:35:07 crc kubenswrapper[5039]: I0130 14:35:07.719477 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/00da7584-6573-4dac-bfd1-ea7c53ad5b93-db-sync-config-data\") pod \"glance-db-sync-cl4vn\" (UID: \"00da7584-6573-4dac-bfd1-ea7c53ad5b93\") " pod="openstack/glance-db-sync-cl4vn"
Jan 30 14:35:07 crc kubenswrapper[5039]: I0130 14:35:07.719588 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlzd9\" (UniqueName: \"kubernetes.io/projected/00da7584-6573-4dac-bfd1-ea7c53ad5b93-kube-api-access-qlzd9\") pod \"glance-db-sync-cl4vn\" (UID: \"00da7584-6573-4dac-bfd1-ea7c53ad5b93\") " pod="openstack/glance-db-sync-cl4vn"
Jan 30 14:35:07 crc kubenswrapper[5039]: I0130 14:35:07.719688 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00da7584-6573-4dac-bfd1-ea7c53ad5b93-combined-ca-bundle\") pod \"glance-db-sync-cl4vn\" (UID: \"00da7584-6573-4dac-bfd1-ea7c53ad5b93\") " pod="openstack/glance-db-sync-cl4vn"
Jan 30 14:35:07 crc kubenswrapper[5039]: I0130 14:35:07.719724 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00da7584-6573-4dac-bfd1-ea7c53ad5b93-config-data\") pod \"glance-db-sync-cl4vn\" (UID: \"00da7584-6573-4dac-bfd1-ea7c53ad5b93\") " pod="openstack/glance-db-sync-cl4vn"
Jan 30 14:35:07 crc kubenswrapper[5039]: I0130 14:35:07.725529 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/00da7584-6573-4dac-bfd1-ea7c53ad5b93-db-sync-config-data\") pod \"glance-db-sync-cl4vn\" (UID: \"00da7584-6573-4dac-bfd1-ea7c53ad5b93\") " pod="openstack/glance-db-sync-cl4vn"
Jan 30 14:35:07 crc kubenswrapper[5039]: I0130 14:35:07.726594 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00da7584-6573-4dac-bfd1-ea7c53ad5b93-config-data\") pod \"glance-db-sync-cl4vn\" (UID: \"00da7584-6573-4dac-bfd1-ea7c53ad5b93\") " pod="openstack/glance-db-sync-cl4vn"
Jan 30 14:35:07 crc kubenswrapper[5039]: I0130 14:35:07.727778 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00da7584-6573-4dac-bfd1-ea7c53ad5b93-combined-ca-bundle\") pod \"glance-db-sync-cl4vn\" (UID: \"00da7584-6573-4dac-bfd1-ea7c53ad5b93\") " pod="openstack/glance-db-sync-cl4vn"
Jan 30 14:35:07 crc kubenswrapper[5039]: I0130 14:35:07.739189 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlzd9\" (UniqueName: \"kubernetes.io/projected/00da7584-6573-4dac-bfd1-ea7c53ad5b93-kube-api-access-qlzd9\") pod \"glance-db-sync-cl4vn\" (UID: \"00da7584-6573-4dac-bfd1-ea7c53ad5b93\") " pod="openstack/glance-db-sync-cl4vn"
Jan 30 14:35:07 crc kubenswrapper[5039]: I0130 14:35:07.877195 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-cl4vn"
Jan 30 14:35:08 crc kubenswrapper[5039]: I0130 14:35:08.424964 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-cl4vn"]
Jan 30 14:35:08 crc kubenswrapper[5039]: I0130 14:35:08.928383 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-cl4vn" event={"ID":"00da7584-6573-4dac-bfd1-ea7c53ad5b93","Type":"ContainerStarted","Data":"ef1af579bde1f9d8709ea5fe0f75a9ecf3b7260e40ef8e696d324bb0770d4895"}
Jan 30 14:35:09 crc kubenswrapper[5039]: I0130 14:35:09.937764 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-cl4vn" event={"ID":"00da7584-6573-4dac-bfd1-ea7c53ad5b93","Type":"ContainerStarted","Data":"3680cc77fb37bbf67c3aedf69a3869d5ef16072515989c8a6a9ed7a341c9249e"}
Jan 30 14:35:09 crc kubenswrapper[5039]: I0130 14:35:09.958923 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-cl4vn" podStartSLOduration=2.958907539 podStartE2EDuration="2.958907539s" podCreationTimestamp="2026-01-30 14:35:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:35:09.956500814 +0000 UTC m=+5474.617182041" watchObservedRunningTime="2026-01-30 14:35:09.958907539 +0000 UTC m=+5474.619588767"
Jan 30 14:35:12 crc kubenswrapper[5039]: I0130 14:35:12.963379 5039 generic.go:334] "Generic (PLEG): container finished" podID="00da7584-6573-4dac-bfd1-ea7c53ad5b93" containerID="3680cc77fb37bbf67c3aedf69a3869d5ef16072515989c8a6a9ed7a341c9249e" exitCode=0
Jan 30 14:35:12 crc kubenswrapper[5039]: I0130 14:35:12.963471 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-cl4vn" event={"ID":"00da7584-6573-4dac-bfd1-ea7c53ad5b93","Type":"ContainerDied","Data":"3680cc77fb37bbf67c3aedf69a3869d5ef16072515989c8a6a9ed7a341c9249e"}
Jan 30 14:35:14 crc kubenswrapper[5039]: I0130 14:35:14.347933 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-cl4vn"
Jan 30 14:35:14 crc kubenswrapper[5039]: I0130 14:35:14.432598 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00da7584-6573-4dac-bfd1-ea7c53ad5b93-config-data\") pod \"00da7584-6573-4dac-bfd1-ea7c53ad5b93\" (UID: \"00da7584-6573-4dac-bfd1-ea7c53ad5b93\") "
Jan 30 14:35:14 crc kubenswrapper[5039]: I0130 14:35:14.433102 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/00da7584-6573-4dac-bfd1-ea7c53ad5b93-db-sync-config-data\") pod \"00da7584-6573-4dac-bfd1-ea7c53ad5b93\" (UID: \"00da7584-6573-4dac-bfd1-ea7c53ad5b93\") "
Jan 30 14:35:14 crc kubenswrapper[5039]: I0130 14:35:14.433222 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlzd9\" (UniqueName: \"kubernetes.io/projected/00da7584-6573-4dac-bfd1-ea7c53ad5b93-kube-api-access-qlzd9\") pod \"00da7584-6573-4dac-bfd1-ea7c53ad5b93\" (UID: \"00da7584-6573-4dac-bfd1-ea7c53ad5b93\") "
Jan 30 14:35:14 crc kubenswrapper[5039]: I0130 14:35:14.433311 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00da7584-6573-4dac-bfd1-ea7c53ad5b93-combined-ca-bundle\") pod \"00da7584-6573-4dac-bfd1-ea7c53ad5b93\" (UID: \"00da7584-6573-4dac-bfd1-ea7c53ad5b93\") "
Jan 30 14:35:14 crc kubenswrapper[5039]: I0130 14:35:14.438538 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00da7584-6573-4dac-bfd1-ea7c53ad5b93-kube-api-access-qlzd9" (OuterVolumeSpecName: "kube-api-access-qlzd9") pod "00da7584-6573-4dac-bfd1-ea7c53ad5b93" (UID: "00da7584-6573-4dac-bfd1-ea7c53ad5b93"). InnerVolumeSpecName "kube-api-access-qlzd9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 14:35:14 crc kubenswrapper[5039]: I0130 14:35:14.439109 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00da7584-6573-4dac-bfd1-ea7c53ad5b93-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "00da7584-6573-4dac-bfd1-ea7c53ad5b93" (UID: "00da7584-6573-4dac-bfd1-ea7c53ad5b93"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:35:14 crc kubenswrapper[5039]: I0130 14:35:14.463862 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00da7584-6573-4dac-bfd1-ea7c53ad5b93-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "00da7584-6573-4dac-bfd1-ea7c53ad5b93" (UID: "00da7584-6573-4dac-bfd1-ea7c53ad5b93"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:35:14 crc kubenswrapper[5039]: I0130 14:35:14.501702 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00da7584-6573-4dac-bfd1-ea7c53ad5b93-config-data" (OuterVolumeSpecName: "config-data") pod "00da7584-6573-4dac-bfd1-ea7c53ad5b93" (UID: "00da7584-6573-4dac-bfd1-ea7c53ad5b93"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:35:14 crc kubenswrapper[5039]: I0130 14:35:14.535174 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00da7584-6573-4dac-bfd1-ea7c53ad5b93-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 14:35:14 crc kubenswrapper[5039]: I0130 14:35:14.535242 5039 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/00da7584-6573-4dac-bfd1-ea7c53ad5b93-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 14:35:14 crc kubenswrapper[5039]: I0130 14:35:14.535259 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qlzd9\" (UniqueName: \"kubernetes.io/projected/00da7584-6573-4dac-bfd1-ea7c53ad5b93-kube-api-access-qlzd9\") on node \"crc\" DevicePath \"\""
Jan 30 14:35:14 crc kubenswrapper[5039]: I0130 14:35:14.535274 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00da7584-6573-4dac-bfd1-ea7c53ad5b93-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 14:35:14 crc kubenswrapper[5039]: I0130 14:35:14.982734 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-cl4vn" event={"ID":"00da7584-6573-4dac-bfd1-ea7c53ad5b93","Type":"ContainerDied","Data":"ef1af579bde1f9d8709ea5fe0f75a9ecf3b7260e40ef8e696d324bb0770d4895"}
Jan 30 14:35:14 crc kubenswrapper[5039]: I0130 14:35:14.983153 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef1af579bde1f9d8709ea5fe0f75a9ecf3b7260e40ef8e696d324bb0770d4895"
Jan 30 14:35:14 crc kubenswrapper[5039]: I0130 14:35:14.982806 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-cl4vn"
Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.267425 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 30 14:35:15 crc kubenswrapper[5039]: E0130 14:35:15.287664 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00da7584-6573-4dac-bfd1-ea7c53ad5b93" containerName="glance-db-sync"
Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.287707 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="00da7584-6573-4dac-bfd1-ea7c53ad5b93" containerName="glance-db-sync"
Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.287931 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="00da7584-6573-4dac-bfd1-ea7c53ad5b93" containerName="glance-db-sync"
Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.288968 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.289142 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.291444 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.291635 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.293080 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.310887 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-f5l5t"
Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.348614 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/522d2104-ef65-44b7-9b68-5e7f9ae771d4-logs\") pod \"glance-default-external-api-0\" (UID: \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.348694 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/522d2104-ef65-44b7-9b68-5e7f9ae771d4-config-data\") pod \"glance-default-external-api-0\" (UID: \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.348735 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/522d2104-ef65-44b7-9b68-5e7f9ae771d4-scripts\") pod \"glance-default-external-api-0\" (UID: \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.348761 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8xmn\" (UniqueName: \"kubernetes.io/projected/522d2104-ef65-44b7-9b68-5e7f9ae771d4-kube-api-access-c8xmn\") pod \"glance-default-external-api-0\" (UID: \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.348986 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/522d2104-ef65-44b7-9b68-5e7f9ae771d4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.349270 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/522d2104-ef65-44b7-9b68-5e7f9ae771d4-ceph\") pod \"glance-default-external-api-0\" (UID: \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.349328 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/522d2104-ef65-44b7-9b68-5e7f9ae771d4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.363564 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7674b98d57-zbz7k"]
Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.365335 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7674b98d57-zbz7k"
Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.387531 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7674b98d57-zbz7k"]
Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.443078 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.444409 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.447872 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.450944 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.451394 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n74jg\" (UniqueName: \"kubernetes.io/projected/48aca6bb-748d-4aca-acbf-77a53fe8bfa6-kube-api-access-n74jg\") pod \"dnsmasq-dns-7674b98d57-zbz7k\" (UID: \"48aca6bb-748d-4aca-acbf-77a53fe8bfa6\") " pod="openstack/dnsmasq-dns-7674b98d57-zbz7k"
Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.451451 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/522d2104-ef65-44b7-9b68-5e7f9ae771d4-logs\") pod \"glance-default-external-api-0\" (UID: \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.451485 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48aca6bb-748d-4aca-acbf-77a53fe8bfa6-dns-svc\") pod \"dnsmasq-dns-7674b98d57-zbz7k\" (UID: \"48aca6bb-748d-4aca-acbf-77a53fe8bfa6\") " pod="openstack/dnsmasq-dns-7674b98d57-zbz7k"
Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.451504 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/522d2104-ef65-44b7-9b68-5e7f9ae771d4-config-data\") pod \"glance-default-external-api-0\" (UID: \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.451527 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/48aca6bb-748d-4aca-acbf-77a53fe8bfa6-ovsdbserver-sb\") pod \"dnsmasq-dns-7674b98d57-zbz7k\" (UID: \"48aca6bb-748d-4aca-acbf-77a53fe8bfa6\") " pod="openstack/dnsmasq-dns-7674b98d57-zbz7k"
Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.451549 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/522d2104-ef65-44b7-9b68-5e7f9ae771d4-scripts\") pod \"glance-default-external-api-0\" (UID: \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.451564 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48aca6bb-748d-4aca-acbf-77a53fe8bfa6-config\") pod \"dnsmasq-dns-7674b98d57-zbz7k\" (UID: \"48aca6bb-748d-4aca-acbf-77a53fe8bfa6\") " pod="openstack/dnsmasq-dns-7674b98d57-zbz7k"
Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.451583 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8xmn\" (UniqueName: \"kubernetes.io/projected/522d2104-ef65-44b7-9b68-5e7f9ae771d4-kube-api-access-c8xmn\") pod \"glance-default-external-api-0\" (UID: \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.451603 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/522d2104-ef65-44b7-9b68-5e7f9ae771d4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.451647 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48aca6bb-748d-4aca-acbf-77a53fe8bfa6-ovsdbserver-nb\") pod \"dnsmasq-dns-7674b98d57-zbz7k\" (UID: \"48aca6bb-748d-4aca-acbf-77a53fe8bfa6\") " pod="openstack/dnsmasq-dns-7674b98d57-zbz7k"
Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.451680 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/522d2104-ef65-44b7-9b68-5e7f9ae771d4-ceph\") pod \"glance-default-external-api-0\" (UID: \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.451703 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/522d2104-ef65-44b7-9b68-5e7f9ae771d4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.451975 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/522d2104-ef65-44b7-9b68-5e7f9ae771d4-logs\") pod \"glance-default-external-api-0\" (UID: \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.453557 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/522d2104-ef65-44b7-9b68-5e7f9ae771d4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.457168 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/522d2104-ef65-44b7-9b68-5e7f9ae771d4-scripts\") pod \"glance-default-external-api-0\" (UID: \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.458954 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/522d2104-ef65-44b7-9b68-5e7f9ae771d4-config-data\") pod \"glance-default-external-api-0\" (UID: \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.459506 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/522d2104-ef65-44b7-9b68-5e7f9ae771d4-ceph\") pod \"glance-default-external-api-0\" (UID: \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.470865 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/522d2104-ef65-44b7-9b68-5e7f9ae771d4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.475224 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8xmn\" (UniqueName: \"kubernetes.io/projected/522d2104-ef65-44b7-9b68-5e7f9ae771d4-kube-api-access-c8xmn\") pod \"glance-default-external-api-0\" (UID: \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\") " pod="openstack/glance-default-external-api-0"
Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.552608 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adef1bb6-0564-4002-ad8a-512c2c2736b2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"adef1bb6-0564-4002-ad8a-512c2c2736b2\") " pod="openstack/glance-default-internal-api-0"
Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.552661 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvzk9\" (UniqueName: \"kubernetes.io/projected/adef1bb6-0564-4002-ad8a-512c2c2736b2-kube-api-access-kvzk9\") pod \"glance-default-internal-api-0\" (UID: \"adef1bb6-0564-4002-ad8a-512c2c2736b2\") " pod="openstack/glance-default-internal-api-0"
Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.552716 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48aca6bb-748d-4aca-acbf-77a53fe8bfa6-ovsdbserver-nb\") pod \"dnsmasq-dns-7674b98d57-zbz7k\" (UID: \"48aca6bb-748d-4aca-acbf-77a53fe8bfa6\") " pod="openstack/dnsmasq-dns-7674b98d57-zbz7k"
Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.552737 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/adef1bb6-0564-4002-ad8a-512c2c2736b2-logs\") pod \"glance-default-internal-api-0\" (UID: \"adef1bb6-0564-4002-ad8a-512c2c2736b2\") " pod="openstack/glance-default-internal-api-0"
Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.552756 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adef1bb6-0564-4002-ad8a-512c2c2736b2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"adef1bb6-0564-4002-ad8a-512c2c2736b2\") " pod="openstack/glance-default-internal-api-0"
Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.552792 5039
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adef1bb6-0564-4002-ad8a-512c2c2736b2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"adef1bb6-0564-4002-ad8a-512c2c2736b2\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.552862 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/adef1bb6-0564-4002-ad8a-512c2c2736b2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"adef1bb6-0564-4002-ad8a-512c2c2736b2\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.552933 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n74jg\" (UniqueName: \"kubernetes.io/projected/48aca6bb-748d-4aca-acbf-77a53fe8bfa6-kube-api-access-n74jg\") pod \"dnsmasq-dns-7674b98d57-zbz7k\" (UID: \"48aca6bb-748d-4aca-acbf-77a53fe8bfa6\") " pod="openstack/dnsmasq-dns-7674b98d57-zbz7k" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.552969 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/adef1bb6-0564-4002-ad8a-512c2c2736b2-ceph\") pod \"glance-default-internal-api-0\" (UID: \"adef1bb6-0564-4002-ad8a-512c2c2736b2\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.552994 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48aca6bb-748d-4aca-acbf-77a53fe8bfa6-dns-svc\") pod \"dnsmasq-dns-7674b98d57-zbz7k\" (UID: \"48aca6bb-748d-4aca-acbf-77a53fe8bfa6\") " pod="openstack/dnsmasq-dns-7674b98d57-zbz7k" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.553031 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/48aca6bb-748d-4aca-acbf-77a53fe8bfa6-ovsdbserver-sb\") pod \"dnsmasq-dns-7674b98d57-zbz7k\" (UID: \"48aca6bb-748d-4aca-acbf-77a53fe8bfa6\") " pod="openstack/dnsmasq-dns-7674b98d57-zbz7k" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.553055 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48aca6bb-748d-4aca-acbf-77a53fe8bfa6-config\") pod \"dnsmasq-dns-7674b98d57-zbz7k\" (UID: \"48aca6bb-748d-4aca-acbf-77a53fe8bfa6\") " pod="openstack/dnsmasq-dns-7674b98d57-zbz7k" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.553554 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48aca6bb-748d-4aca-acbf-77a53fe8bfa6-ovsdbserver-nb\") pod \"dnsmasq-dns-7674b98d57-zbz7k\" (UID: \"48aca6bb-748d-4aca-acbf-77a53fe8bfa6\") " pod="openstack/dnsmasq-dns-7674b98d57-zbz7k" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.553708 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48aca6bb-748d-4aca-acbf-77a53fe8bfa6-dns-svc\") pod \"dnsmasq-dns-7674b98d57-zbz7k\" (UID: \"48aca6bb-748d-4aca-acbf-77a53fe8bfa6\") " pod="openstack/dnsmasq-dns-7674b98d57-zbz7k" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.553750 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/48aca6bb-748d-4aca-acbf-77a53fe8bfa6-ovsdbserver-sb\") pod \"dnsmasq-dns-7674b98d57-zbz7k\" (UID: \"48aca6bb-748d-4aca-acbf-77a53fe8bfa6\") " pod="openstack/dnsmasq-dns-7674b98d57-zbz7k" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.554182 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48aca6bb-748d-4aca-acbf-77a53fe8bfa6-config\") pod \"dnsmasq-dns-7674b98d57-zbz7k\" (UID: \"48aca6bb-748d-4aca-acbf-77a53fe8bfa6\") " pod="openstack/dnsmasq-dns-7674b98d57-zbz7k" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.571611 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n74jg\" (UniqueName: \"kubernetes.io/projected/48aca6bb-748d-4aca-acbf-77a53fe8bfa6-kube-api-access-n74jg\") pod \"dnsmasq-dns-7674b98d57-zbz7k\" (UID: \"48aca6bb-748d-4aca-acbf-77a53fe8bfa6\") " pod="openstack/dnsmasq-dns-7674b98d57-zbz7k" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.605677 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.654748 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvzk9\" (UniqueName: \"kubernetes.io/projected/adef1bb6-0564-4002-ad8a-512c2c2736b2-kube-api-access-kvzk9\") pod \"glance-default-internal-api-0\" (UID: \"adef1bb6-0564-4002-ad8a-512c2c2736b2\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.655198 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/adef1bb6-0564-4002-ad8a-512c2c2736b2-logs\") pod \"glance-default-internal-api-0\" (UID: \"adef1bb6-0564-4002-ad8a-512c2c2736b2\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.655220 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adef1bb6-0564-4002-ad8a-512c2c2736b2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"adef1bb6-0564-4002-ad8a-512c2c2736b2\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.655257 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adef1bb6-0564-4002-ad8a-512c2c2736b2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"adef1bb6-0564-4002-ad8a-512c2c2736b2\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.655298 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/adef1bb6-0564-4002-ad8a-512c2c2736b2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"adef1bb6-0564-4002-ad8a-512c2c2736b2\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.655356 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/adef1bb6-0564-4002-ad8a-512c2c2736b2-ceph\") pod \"glance-default-internal-api-0\" (UID: \"adef1bb6-0564-4002-ad8a-512c2c2736b2\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 
14:35:15.655442 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adef1bb6-0564-4002-ad8a-512c2c2736b2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"adef1bb6-0564-4002-ad8a-512c2c2736b2\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.656425 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/adef1bb6-0564-4002-ad8a-512c2c2736b2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"adef1bb6-0564-4002-ad8a-512c2c2736b2\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.656940 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/adef1bb6-0564-4002-ad8a-512c2c2736b2-logs\") pod \"glance-default-internal-api-0\" (UID: \"adef1bb6-0564-4002-ad8a-512c2c2736b2\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.659178 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adef1bb6-0564-4002-ad8a-512c2c2736b2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"adef1bb6-0564-4002-ad8a-512c2c2736b2\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.659425 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adef1bb6-0564-4002-ad8a-512c2c2736b2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"adef1bb6-0564-4002-ad8a-512c2c2736b2\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.660139 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/adef1bb6-0564-4002-ad8a-512c2c2736b2-ceph\") pod \"glance-default-internal-api-0\" (UID: \"adef1bb6-0564-4002-ad8a-512c2c2736b2\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.660623 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adef1bb6-0564-4002-ad8a-512c2c2736b2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"adef1bb6-0564-4002-ad8a-512c2c2736b2\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.675563 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvzk9\" (UniqueName: \"kubernetes.io/projected/adef1bb6-0564-4002-ad8a-512c2c2736b2-kube-api-access-kvzk9\") pod \"glance-default-internal-api-0\" (UID: \"adef1bb6-0564-4002-ad8a-512c2c2736b2\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.681204 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7674b98d57-zbz7k" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.838852 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 14:35:16 crc kubenswrapper[5039]: I0130 14:35:16.200811 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7674b98d57-zbz7k"] Jan 30 14:35:16 crc kubenswrapper[5039]: W0130 14:35:16.202159 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48aca6bb_748d_4aca_acbf_77a53fe8bfa6.slice/crio-4aec4a62fd46375d22af26652efc5e45aa8b53de0320c7051886743907643bd3 WatchSource:0}: Error finding container 4aec4a62fd46375d22af26652efc5e45aa8b53de0320c7051886743907643bd3: Status 404 returned error can't find the container with id 4aec4a62fd46375d22af26652efc5e45aa8b53de0320c7051886743907643bd3 Jan 30 14:35:16 crc kubenswrapper[5039]: I0130 14:35:16.213628 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 14:35:16 crc kubenswrapper[5039]: W0130 14:35:16.222511 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod522d2104_ef65_44b7_9b68_5e7f9ae771d4.slice/crio-9918f839000585d16173546edc2b9b5ffabaab6ee6fbd28a85440058ff21a6ea WatchSource:0}: Error finding container 9918f839000585d16173546edc2b9b5ffabaab6ee6fbd28a85440058ff21a6ea: Status 404 returned error can't find the container with id 9918f839000585d16173546edc2b9b5ffabaab6ee6fbd28a85440058ff21a6ea Jan 30 14:35:16 crc kubenswrapper[5039]: I0130 14:35:16.233526 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 14:35:16 crc kubenswrapper[5039]: I0130 14:35:16.396032 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 14:35:16 crc kubenswrapper[5039]: W0130 14:35:16.407937 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podadef1bb6_0564_4002_ad8a_512c2c2736b2.slice/crio-f3948f6c3761e343928caf6ce757066653dd849b4b1a3dfcad414c1392193647 WatchSource:0}: Error finding container f3948f6c3761e343928caf6ce757066653dd849b4b1a3dfcad414c1392193647: Status 404 returned error can't find the container with id f3948f6c3761e343928caf6ce757066653dd849b4b1a3dfcad414c1392193647 Jan 30 14:35:17 crc kubenswrapper[5039]: I0130 14:35:17.010137 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"522d2104-ef65-44b7-9b68-5e7f9ae771d4","Type":"ContainerStarted","Data":"2ac0edcc102c8017b9745ff80d8cce73bf5e99a889d3c1791d464fe6e52cfba1"} Jan 30 14:35:17 crc kubenswrapper[5039]: I0130 14:35:17.010505 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"522d2104-ef65-44b7-9b68-5e7f9ae771d4","Type":"ContainerStarted","Data":"9918f839000585d16173546edc2b9b5ffabaab6ee6fbd28a85440058ff21a6ea"} Jan 30 14:35:17 crc kubenswrapper[5039]: I0130 14:35:17.013892 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"adef1bb6-0564-4002-ad8a-512c2c2736b2","Type":"ContainerStarted","Data":"f3948f6c3761e343928caf6ce757066653dd849b4b1a3dfcad414c1392193647"} Jan 30 14:35:17 crc kubenswrapper[5039]: I0130 14:35:17.016370 5039 generic.go:334] "Generic (PLEG): container finished" podID="48aca6bb-748d-4aca-acbf-77a53fe8bfa6" containerID="5c3e91cd1eefc38b9a6a949dadc03d3fcbd57d5da67d30e2933ddbeda92ffe6f" 
exitCode=0 Jan 30 14:35:17 crc kubenswrapper[5039]: I0130 14:35:17.016401 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7674b98d57-zbz7k" event={"ID":"48aca6bb-748d-4aca-acbf-77a53fe8bfa6","Type":"ContainerDied","Data":"5c3e91cd1eefc38b9a6a949dadc03d3fcbd57d5da67d30e2933ddbeda92ffe6f"} Jan 30 14:35:17 crc kubenswrapper[5039]: I0130 14:35:17.016422 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7674b98d57-zbz7k" event={"ID":"48aca6bb-748d-4aca-acbf-77a53fe8bfa6","Type":"ContainerStarted","Data":"4aec4a62fd46375d22af26652efc5e45aa8b53de0320c7051886743907643bd3"} Jan 30 14:35:18 crc kubenswrapper[5039]: I0130 14:35:18.027233 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"adef1bb6-0564-4002-ad8a-512c2c2736b2","Type":"ContainerStarted","Data":"b47005ab18d514b62647dba5967bfb07586ff56dfaac573fd63e2fed384162e6"} Jan 30 14:35:18 crc kubenswrapper[5039]: I0130 14:35:18.027631 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"adef1bb6-0564-4002-ad8a-512c2c2736b2","Type":"ContainerStarted","Data":"d305dbaf212f7f6108b7b8002eb1e477e2efb9e90cc063455252685c0d6928be"} Jan 30 14:35:18 crc kubenswrapper[5039]: I0130 14:35:18.029461 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7674b98d57-zbz7k" event={"ID":"48aca6bb-748d-4aca-acbf-77a53fe8bfa6","Type":"ContainerStarted","Data":"63816daf2d92ffb0ab9f7ce5d9069aeec1905c7b9cfe66dd6307a6341e2f27c0"} Jan 30 14:35:18 crc kubenswrapper[5039]: I0130 14:35:18.029647 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7674b98d57-zbz7k" Jan 30 14:35:18 crc kubenswrapper[5039]: I0130 14:35:18.032043 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"522d2104-ef65-44b7-9b68-5e7f9ae771d4","Type":"ContainerStarted","Data":"6ffc307374dd536836db6d5dd14c1fee9c4f1b34004c3572904b2d2292dce292"} Jan 30 14:35:18 crc kubenswrapper[5039]: I0130 14:35:18.032158 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="522d2104-ef65-44b7-9b68-5e7f9ae771d4" containerName="glance-log" containerID="cri-o://2ac0edcc102c8017b9745ff80d8cce73bf5e99a889d3c1791d464fe6e52cfba1" gracePeriod=30 Jan 30 14:35:18 crc kubenswrapper[5039]: I0130 14:35:18.032228 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="522d2104-ef65-44b7-9b68-5e7f9ae771d4" containerName="glance-httpd" containerID="cri-o://6ffc307374dd536836db6d5dd14c1fee9c4f1b34004c3572904b2d2292dce292" gracePeriod=30 Jan 30 14:35:18 crc kubenswrapper[5039]: I0130 14:35:18.060667 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.060640236 podStartE2EDuration="3.060640236s" podCreationTimestamp="2026-01-30 14:35:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:35:18.05304522 +0000 UTC m=+5482.713726447" watchObservedRunningTime="2026-01-30 14:35:18.060640236 +0000 UTC m=+5482.721321483" Jan 30 14:35:18 crc kubenswrapper[5039]: I0130 14:35:18.080719 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" 
podStartSLOduration=3.080698319 podStartE2EDuration="3.080698319s" podCreationTimestamp="2026-01-30 14:35:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:35:18.075496748 +0000 UTC m=+5482.736177975" watchObservedRunningTime="2026-01-30 14:35:18.080698319 +0000 UTC m=+5482.741379566" Jan 30 14:35:18 crc kubenswrapper[5039]: I0130 14:35:18.100205 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7674b98d57-zbz7k" podStartSLOduration=3.100188098 podStartE2EDuration="3.100188098s" podCreationTimestamp="2026-01-30 14:35:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:35:18.092053317 +0000 UTC m=+5482.752734544" watchObservedRunningTime="2026-01-30 14:35:18.100188098 +0000 UTC m=+5482.760869325" Jan 30 14:35:18 crc kubenswrapper[5039]: I0130 14:35:18.296133 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 14:35:18 crc kubenswrapper[5039]: I0130 14:35:18.681783 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 14:35:18 crc kubenswrapper[5039]: I0130 14:35:18.813280 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8xmn\" (UniqueName: \"kubernetes.io/projected/522d2104-ef65-44b7-9b68-5e7f9ae771d4-kube-api-access-c8xmn\") pod \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\" (UID: \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\") " Jan 30 14:35:18 crc kubenswrapper[5039]: I0130 14:35:18.813408 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/522d2104-ef65-44b7-9b68-5e7f9ae771d4-config-data\") pod \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\" (UID: \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\") " Jan 30 14:35:18 crc kubenswrapper[5039]: I0130 14:35:18.813439 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/522d2104-ef65-44b7-9b68-5e7f9ae771d4-combined-ca-bundle\") pod \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\" (UID: \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\") " Jan 30 14:35:18 crc kubenswrapper[5039]: I0130 14:35:18.813468 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/522d2104-ef65-44b7-9b68-5e7f9ae771d4-scripts\") pod \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\" (UID: \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\") " Jan 30 14:35:18 crc kubenswrapper[5039]: I0130 14:35:18.813502 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/522d2104-ef65-44b7-9b68-5e7f9ae771d4-logs\") pod \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\" (UID: \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\") " Jan 30 14:35:18 crc kubenswrapper[5039]: I0130 14:35:18.813538 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/522d2104-ef65-44b7-9b68-5e7f9ae771d4-httpd-run\") pod \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\" (UID: \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\") " Jan 30 14:35:18 crc kubenswrapper[5039]: I0130 14:35:18.813643 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ceph\" (UniqueName: \"kubernetes.io/projected/522d2104-ef65-44b7-9b68-5e7f9ae771d4-ceph\") pod \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\" (UID: \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\") " Jan 30 14:35:18 crc kubenswrapper[5039]: I0130 14:35:18.813880 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/522d2104-ef65-44b7-9b68-5e7f9ae771d4-logs" (OuterVolumeSpecName: "logs") pod "522d2104-ef65-44b7-9b68-5e7f9ae771d4" (UID: "522d2104-ef65-44b7-9b68-5e7f9ae771d4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:35:18 crc kubenswrapper[5039]: I0130 14:35:18.813990 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/522d2104-ef65-44b7-9b68-5e7f9ae771d4-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "522d2104-ef65-44b7-9b68-5e7f9ae771d4" (UID: "522d2104-ef65-44b7-9b68-5e7f9ae771d4"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:35:18 crc kubenswrapper[5039]: I0130 14:35:18.814046 5039 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/522d2104-ef65-44b7-9b68-5e7f9ae771d4-logs\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:18 crc kubenswrapper[5039]: I0130 14:35:18.818880 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/522d2104-ef65-44b7-9b68-5e7f9ae771d4-ceph" (OuterVolumeSpecName: "ceph") pod "522d2104-ef65-44b7-9b68-5e7f9ae771d4" (UID: "522d2104-ef65-44b7-9b68-5e7f9ae771d4"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:35:18 crc kubenswrapper[5039]: I0130 14:35:18.818944 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/522d2104-ef65-44b7-9b68-5e7f9ae771d4-kube-api-access-c8xmn" (OuterVolumeSpecName: "kube-api-access-c8xmn") pod "522d2104-ef65-44b7-9b68-5e7f9ae771d4" (UID: "522d2104-ef65-44b7-9b68-5e7f9ae771d4"). InnerVolumeSpecName "kube-api-access-c8xmn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:35:18 crc kubenswrapper[5039]: I0130 14:35:18.825090 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/522d2104-ef65-44b7-9b68-5e7f9ae771d4-scripts" (OuterVolumeSpecName: "scripts") pod "522d2104-ef65-44b7-9b68-5e7f9ae771d4" (UID: "522d2104-ef65-44b7-9b68-5e7f9ae771d4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:35:18 crc kubenswrapper[5039]: I0130 14:35:18.837764 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/522d2104-ef65-44b7-9b68-5e7f9ae771d4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "522d2104-ef65-44b7-9b68-5e7f9ae771d4" (UID: "522d2104-ef65-44b7-9b68-5e7f9ae771d4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:35:18 crc kubenswrapper[5039]: I0130 14:35:18.873451 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/522d2104-ef65-44b7-9b68-5e7f9ae771d4-config-data" (OuterVolumeSpecName: "config-data") pod "522d2104-ef65-44b7-9b68-5e7f9ae771d4" (UID: "522d2104-ef65-44b7-9b68-5e7f9ae771d4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:35:18 crc kubenswrapper[5039]: I0130 14:35:18.915356 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/522d2104-ef65-44b7-9b68-5e7f9ae771d4-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:18 crc kubenswrapper[5039]: I0130 14:35:18.915396 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/522d2104-ef65-44b7-9b68-5e7f9ae771d4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:18 crc kubenswrapper[5039]: I0130 14:35:18.915408 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/522d2104-ef65-44b7-9b68-5e7f9ae771d4-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:18 crc kubenswrapper[5039]: I0130 14:35:18.915417 5039 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/522d2104-ef65-44b7-9b68-5e7f9ae771d4-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:18 crc kubenswrapper[5039]: I0130 14:35:18.915429 5039 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/522d2104-ef65-44b7-9b68-5e7f9ae771d4-ceph\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:18 crc kubenswrapper[5039]: I0130 14:35:18.915438 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8xmn\" (UniqueName: \"kubernetes.io/projected/522d2104-ef65-44b7-9b68-5e7f9ae771d4-kube-api-access-c8xmn\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.044358 5039 generic.go:334] "Generic (PLEG): container finished" podID="522d2104-ef65-44b7-9b68-5e7f9ae771d4" containerID="6ffc307374dd536836db6d5dd14c1fee9c4f1b34004c3572904b2d2292dce292" exitCode=0 Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.045314 5039 generic.go:334] "Generic (PLEG): container finished" podID="522d2104-ef65-44b7-9b68-5e7f9ae771d4" containerID="2ac0edcc102c8017b9745ff80d8cce73bf5e99a889d3c1791d464fe6e52cfba1" exitCode=143 Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.044463 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.044403 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"522d2104-ef65-44b7-9b68-5e7f9ae771d4","Type":"ContainerDied","Data":"6ffc307374dd536836db6d5dd14c1fee9c4f1b34004c3572904b2d2292dce292"} Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.045525 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"522d2104-ef65-44b7-9b68-5e7f9ae771d4","Type":"ContainerDied","Data":"2ac0edcc102c8017b9745ff80d8cce73bf5e99a889d3c1791d464fe6e52cfba1"} Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.045542 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"522d2104-ef65-44b7-9b68-5e7f9ae771d4","Type":"ContainerDied","Data":"9918f839000585d16173546edc2b9b5ffabaab6ee6fbd28a85440058ff21a6ea"} Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.045559 5039 scope.go:117] "RemoveContainer" containerID="6ffc307374dd536836db6d5dd14c1fee9c4f1b34004c3572904b2d2292dce292" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.078244 5039 scope.go:117] "RemoveContainer" containerID="2ac0edcc102c8017b9745ff80d8cce73bf5e99a889d3c1791d464fe6e52cfba1" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.083913 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.089714 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.118795 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 14:35:19 crc kubenswrapper[5039]: E0130 14:35:19.127689 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="522d2104-ef65-44b7-9b68-5e7f9ae771d4" containerName="glance-httpd" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.127771 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="522d2104-ef65-44b7-9b68-5e7f9ae771d4" containerName="glance-httpd" Jan 30 14:35:19 crc kubenswrapper[5039]: E0130 14:35:19.127833 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="522d2104-ef65-44b7-9b68-5e7f9ae771d4" containerName="glance-log" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.127898 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="522d2104-ef65-44b7-9b68-5e7f9ae771d4" containerName="glance-log" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.128129 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="522d2104-ef65-44b7-9b68-5e7f9ae771d4" containerName="glance-log" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.128195 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="522d2104-ef65-44b7-9b68-5e7f9ae771d4" containerName="glance-httpd" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.129210 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.130735 5039 scope.go:117] "RemoveContainer" containerID="6ffc307374dd536836db6d5dd14c1fee9c4f1b34004c3572904b2d2292dce292" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.133772 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 14:35:19 crc kubenswrapper[5039]: E0130 14:35:19.138471 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ffc307374dd536836db6d5dd14c1fee9c4f1b34004c3572904b2d2292dce292\": container with ID starting with 6ffc307374dd536836db6d5dd14c1fee9c4f1b34004c3572904b2d2292dce292 not found: ID does not exist" containerID="6ffc307374dd536836db6d5dd14c1fee9c4f1b34004c3572904b2d2292dce292" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.138551 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ffc307374dd536836db6d5dd14c1fee9c4f1b34004c3572904b2d2292dce292"} err="failed to get container status \"6ffc307374dd536836db6d5dd14c1fee9c4f1b34004c3572904b2d2292dce292\": rpc error: code = NotFound desc = could not find container \"6ffc307374dd536836db6d5dd14c1fee9c4f1b34004c3572904b2d2292dce292\": container with ID starting with 6ffc307374dd536836db6d5dd14c1fee9c4f1b34004c3572904b2d2292dce292 not found: ID does not exist" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.138583 5039 scope.go:117] "RemoveContainer" containerID="2ac0edcc102c8017b9745ff80d8cce73bf5e99a889d3c1791d464fe6e52cfba1" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.138823 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 30 14:35:19 crc kubenswrapper[5039]: E0130 14:35:19.138952 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ac0edcc102c8017b9745ff80d8cce73bf5e99a889d3c1791d464fe6e52cfba1\": container with ID starting with 2ac0edcc102c8017b9745ff80d8cce73bf5e99a889d3c1791d464fe6e52cfba1 not found: ID does not exist" containerID="2ac0edcc102c8017b9745ff80d8cce73bf5e99a889d3c1791d464fe6e52cfba1" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.139066 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ac0edcc102c8017b9745ff80d8cce73bf5e99a889d3c1791d464fe6e52cfba1"} err="failed to get container status \"2ac0edcc102c8017b9745ff80d8cce73bf5e99a889d3c1791d464fe6e52cfba1\": rpc error: code = NotFound desc = could not find container \"2ac0edcc102c8017b9745ff80d8cce73bf5e99a889d3c1791d464fe6e52cfba1\": container with ID starting with 2ac0edcc102c8017b9745ff80d8cce73bf5e99a889d3c1791d464fe6e52cfba1 not found: ID does not exist" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.139082 5039 scope.go:117] "RemoveContainer" containerID="6ffc307374dd536836db6d5dd14c1fee9c4f1b34004c3572904b2d2292dce292" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.139656 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ffc307374dd536836db6d5dd14c1fee9c4f1b34004c3572904b2d2292dce292"} err="failed to get container status \"6ffc307374dd536836db6d5dd14c1fee9c4f1b34004c3572904b2d2292dce292\": rpc error: code = NotFound desc = could not find container \"6ffc307374dd536836db6d5dd14c1fee9c4f1b34004c3572904b2d2292dce292\": container with ID 
starting with 6ffc307374dd536836db6d5dd14c1fee9c4f1b34004c3572904b2d2292dce292 not found: ID does not exist" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.139810 5039 scope.go:117] "RemoveContainer" containerID="2ac0edcc102c8017b9745ff80d8cce73bf5e99a889d3c1791d464fe6e52cfba1" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.140172 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ac0edcc102c8017b9745ff80d8cce73bf5e99a889d3c1791d464fe6e52cfba1"} err="failed to get container status \"2ac0edcc102c8017b9745ff80d8cce73bf5e99a889d3c1791d464fe6e52cfba1\": rpc error: code = NotFound desc = could not find container \"2ac0edcc102c8017b9745ff80d8cce73bf5e99a889d3c1791d464fe6e52cfba1\": container with ID starting with 2ac0edcc102c8017b9745ff80d8cce73bf5e99a889d3c1791d464fe6e52cfba1 not found: ID does not exist" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.225944 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0e03c189-6d6b-4b11-8de3-0802c037a207-ceph\") pod \"glance-default-external-api-0\" (UID: \"0e03c189-6d6b-4b11-8de3-0802c037a207\") " pod="openstack/glance-default-external-api-0" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.226278 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e03c189-6d6b-4b11-8de3-0802c037a207-config-data\") pod \"glance-default-external-api-0\" (UID: \"0e03c189-6d6b-4b11-8de3-0802c037a207\") " pod="openstack/glance-default-external-api-0" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.226463 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e03c189-6d6b-4b11-8de3-0802c037a207-logs\") pod \"glance-default-external-api-0\" (UID: \"0e03c189-6d6b-4b11-8de3-0802c037a207\") " pod="openstack/glance-default-external-api-0" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.226569 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0e03c189-6d6b-4b11-8de3-0802c037a207-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0e03c189-6d6b-4b11-8de3-0802c037a207\") " pod="openstack/glance-default-external-api-0" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.226687 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e03c189-6d6b-4b11-8de3-0802c037a207-scripts\") pod \"glance-default-external-api-0\" (UID: \"0e03c189-6d6b-4b11-8de3-0802c037a207\") " pod="openstack/glance-default-external-api-0" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.226762 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e03c189-6d6b-4b11-8de3-0802c037a207-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0e03c189-6d6b-4b11-8de3-0802c037a207\") " pod="openstack/glance-default-external-api-0" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.226855 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcft9\" (UniqueName: 
\"kubernetes.io/projected/0e03c189-6d6b-4b11-8de3-0802c037a207-kube-api-access-bcft9\") pod \"glance-default-external-api-0\" (UID: \"0e03c189-6d6b-4b11-8de3-0802c037a207\") " pod="openstack/glance-default-external-api-0" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.328918 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e03c189-6d6b-4b11-8de3-0802c037a207-config-data\") pod \"glance-default-external-api-0\" (UID: \"0e03c189-6d6b-4b11-8de3-0802c037a207\") " pod="openstack/glance-default-external-api-0" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.329242 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e03c189-6d6b-4b11-8de3-0802c037a207-logs\") pod \"glance-default-external-api-0\" (UID: \"0e03c189-6d6b-4b11-8de3-0802c037a207\") " pod="openstack/glance-default-external-api-0" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.329354 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0e03c189-6d6b-4b11-8de3-0802c037a207-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0e03c189-6d6b-4b11-8de3-0802c037a207\") " pod="openstack/glance-default-external-api-0" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.329482 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e03c189-6d6b-4b11-8de3-0802c037a207-scripts\") pod \"glance-default-external-api-0\" (UID: \"0e03c189-6d6b-4b11-8de3-0802c037a207\") " pod="openstack/glance-default-external-api-0" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.329595 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e03c189-6d6b-4b11-8de3-0802c037a207-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0e03c189-6d6b-4b11-8de3-0802c037a207\") " pod="openstack/glance-default-external-api-0" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.329696 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e03c189-6d6b-4b11-8de3-0802c037a207-logs\") pod \"glance-default-external-api-0\" (UID: \"0e03c189-6d6b-4b11-8de3-0802c037a207\") " pod="openstack/glance-default-external-api-0" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.329700 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcft9\" (UniqueName: \"kubernetes.io/projected/0e03c189-6d6b-4b11-8de3-0802c037a207-kube-api-access-bcft9\") pod \"glance-default-external-api-0\" (UID: \"0e03c189-6d6b-4b11-8de3-0802c037a207\") " pod="openstack/glance-default-external-api-0" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.329769 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0e03c189-6d6b-4b11-8de3-0802c037a207-ceph\") pod \"glance-default-external-api-0\" (UID: \"0e03c189-6d6b-4b11-8de3-0802c037a207\") " pod="openstack/glance-default-external-api-0" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.329808 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0e03c189-6d6b-4b11-8de3-0802c037a207-httpd-run\") pod \"glance-default-external-api-0\" (UID: 
\"0e03c189-6d6b-4b11-8de3-0802c037a207\") " pod="openstack/glance-default-external-api-0" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.334621 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0e03c189-6d6b-4b11-8de3-0802c037a207-ceph\") pod \"glance-default-external-api-0\" (UID: \"0e03c189-6d6b-4b11-8de3-0802c037a207\") " pod="openstack/glance-default-external-api-0" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.336698 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e03c189-6d6b-4b11-8de3-0802c037a207-scripts\") pod \"glance-default-external-api-0\" (UID: \"0e03c189-6d6b-4b11-8de3-0802c037a207\") " pod="openstack/glance-default-external-api-0" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.337061 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e03c189-6d6b-4b11-8de3-0802c037a207-config-data\") pod \"glance-default-external-api-0\" (UID: \"0e03c189-6d6b-4b11-8de3-0802c037a207\") " pod="openstack/glance-default-external-api-0" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.337390 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e03c189-6d6b-4b11-8de3-0802c037a207-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0e03c189-6d6b-4b11-8de3-0802c037a207\") " pod="openstack/glance-default-external-api-0" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.357880 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcft9\" (UniqueName: \"kubernetes.io/projected/0e03c189-6d6b-4b11-8de3-0802c037a207-kube-api-access-bcft9\") pod \"glance-default-external-api-0\" (UID: \"0e03c189-6d6b-4b11-8de3-0802c037a207\") " pod="openstack/glance-default-external-api-0" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.511872 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 14:35:20 crc kubenswrapper[5039]: I0130 14:35:20.054580 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="adef1bb6-0564-4002-ad8a-512c2c2736b2" containerName="glance-log" containerID="cri-o://d305dbaf212f7f6108b7b8002eb1e477e2efb9e90cc063455252685c0d6928be" gracePeriod=30 Jan 30 14:35:20 crc kubenswrapper[5039]: I0130 14:35:20.054999 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="adef1bb6-0564-4002-ad8a-512c2c2736b2" containerName="glance-httpd" containerID="cri-o://b47005ab18d514b62647dba5967bfb07586ff56dfaac573fd63e2fed384162e6" gracePeriod=30 Jan 30 14:35:20 crc kubenswrapper[5039]: I0130 14:35:20.094134 5039 scope.go:117] "RemoveContainer" containerID="33707bf9f6c082f37a2c677d559a1772be55398c970c4d16a90343a477a0fad4" Jan 30 14:35:20 crc kubenswrapper[5039]: E0130 14:35:20.094640 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:35:20 crc kubenswrapper[5039]: I0130 14:35:20.104390 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="522d2104-ef65-44b7-9b68-5e7f9ae771d4" path="/var/lib/kubelet/pods/522d2104-ef65-44b7-9b68-5e7f9ae771d4/volumes" Jan 30 14:35:20 crc kubenswrapper[5039]: I0130 14:35:20.105248 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 14:35:20 crc kubenswrapper[5039]: I0130 14:35:20.657370 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 14:35:20 crc kubenswrapper[5039]: I0130 14:35:20.755690 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adef1bb6-0564-4002-ad8a-512c2c2736b2-scripts\") pod \"adef1bb6-0564-4002-ad8a-512c2c2736b2\" (UID: \"adef1bb6-0564-4002-ad8a-512c2c2736b2\") " Jan 30 14:35:20 crc kubenswrapper[5039]: I0130 14:35:20.755750 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/adef1bb6-0564-4002-ad8a-512c2c2736b2-httpd-run\") pod \"adef1bb6-0564-4002-ad8a-512c2c2736b2\" (UID: \"adef1bb6-0564-4002-ad8a-512c2c2736b2\") " Jan 30 14:35:20 crc kubenswrapper[5039]: I0130 14:35:20.755804 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adef1bb6-0564-4002-ad8a-512c2c2736b2-config-data\") pod \"adef1bb6-0564-4002-ad8a-512c2c2736b2\" (UID: \"adef1bb6-0564-4002-ad8a-512c2c2736b2\") " Jan 30 14:35:20 crc kubenswrapper[5039]: I0130 14:35:20.755850 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/adef1bb6-0564-4002-ad8a-512c2c2736b2-logs\") pod \"adef1bb6-0564-4002-ad8a-512c2c2736b2\" (UID: \"adef1bb6-0564-4002-ad8a-512c2c2736b2\") " Jan 30 14:35:20 crc kubenswrapper[5039]: I0130 14:35:20.756019 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvzk9\" (UniqueName: \"kubernetes.io/projected/adef1bb6-0564-4002-ad8a-512c2c2736b2-kube-api-access-kvzk9\") pod \"adef1bb6-0564-4002-ad8a-512c2c2736b2\" (UID: \"adef1bb6-0564-4002-ad8a-512c2c2736b2\") " Jan 30 14:35:20 crc kubenswrapper[5039]: I0130 14:35:20.756066 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/adef1bb6-0564-4002-ad8a-512c2c2736b2-ceph\") pod \"adef1bb6-0564-4002-ad8a-512c2c2736b2\" (UID: \"adef1bb6-0564-4002-ad8a-512c2c2736b2\") " Jan 30 14:35:20 crc kubenswrapper[5039]: I0130 14:35:20.756101 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adef1bb6-0564-4002-ad8a-512c2c2736b2-combined-ca-bundle\") pod \"adef1bb6-0564-4002-ad8a-512c2c2736b2\" (UID: \"adef1bb6-0564-4002-ad8a-512c2c2736b2\") " Jan 30 14:35:20 crc kubenswrapper[5039]: I0130 14:35:20.756721 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/adef1bb6-0564-4002-ad8a-512c2c2736b2-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "adef1bb6-0564-4002-ad8a-512c2c2736b2" (UID: "adef1bb6-0564-4002-ad8a-512c2c2736b2"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:35:20 crc kubenswrapper[5039]: I0130 14:35:20.756978 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/adef1bb6-0564-4002-ad8a-512c2c2736b2-logs" (OuterVolumeSpecName: "logs") pod "adef1bb6-0564-4002-ad8a-512c2c2736b2" (UID: "adef1bb6-0564-4002-ad8a-512c2c2736b2"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:35:20 crc kubenswrapper[5039]: I0130 14:35:20.762602 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adef1bb6-0564-4002-ad8a-512c2c2736b2-ceph" (OuterVolumeSpecName: "ceph") pod "adef1bb6-0564-4002-ad8a-512c2c2736b2" (UID: "adef1bb6-0564-4002-ad8a-512c2c2736b2"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:35:20 crc kubenswrapper[5039]: I0130 14:35:20.764486 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adef1bb6-0564-4002-ad8a-512c2c2736b2-kube-api-access-kvzk9" (OuterVolumeSpecName: "kube-api-access-kvzk9") pod "adef1bb6-0564-4002-ad8a-512c2c2736b2" (UID: "adef1bb6-0564-4002-ad8a-512c2c2736b2"). InnerVolumeSpecName "kube-api-access-kvzk9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:35:20 crc kubenswrapper[5039]: I0130 14:35:20.766726 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adef1bb6-0564-4002-ad8a-512c2c2736b2-scripts" (OuterVolumeSpecName: "scripts") pod "adef1bb6-0564-4002-ad8a-512c2c2736b2" (UID: "adef1bb6-0564-4002-ad8a-512c2c2736b2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:35:20 crc kubenswrapper[5039]: I0130 14:35:20.786179 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adef1bb6-0564-4002-ad8a-512c2c2736b2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "adef1bb6-0564-4002-ad8a-512c2c2736b2" (UID: "adef1bb6-0564-4002-ad8a-512c2c2736b2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:35:20 crc kubenswrapper[5039]: I0130 14:35:20.824481 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adef1bb6-0564-4002-ad8a-512c2c2736b2-config-data" (OuterVolumeSpecName: "config-data") pod "adef1bb6-0564-4002-ad8a-512c2c2736b2" (UID: "adef1bb6-0564-4002-ad8a-512c2c2736b2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:35:20 crc kubenswrapper[5039]: I0130 14:35:20.858113 5039 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/adef1bb6-0564-4002-ad8a-512c2c2736b2-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:20 crc kubenswrapper[5039]: I0130 14:35:20.858157 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adef1bb6-0564-4002-ad8a-512c2c2736b2-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:20 crc kubenswrapper[5039]: I0130 14:35:20.858170 5039 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/adef1bb6-0564-4002-ad8a-512c2c2736b2-logs\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:20 crc kubenswrapper[5039]: I0130 14:35:20.858185 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvzk9\" (UniqueName: \"kubernetes.io/projected/adef1bb6-0564-4002-ad8a-512c2c2736b2-kube-api-access-kvzk9\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:20 crc kubenswrapper[5039]: I0130 14:35:20.858195 5039 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/adef1bb6-0564-4002-ad8a-512c2c2736b2-ceph\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:20 crc kubenswrapper[5039]: I0130 14:35:20.858203 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adef1bb6-0564-4002-ad8a-512c2c2736b2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:20 crc kubenswrapper[5039]: I0130 14:35:20.858211 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adef1bb6-0564-4002-ad8a-512c2c2736b2-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.066463 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0e03c189-6d6b-4b11-8de3-0802c037a207","Type":"ContainerStarted","Data":"ce28ddeb988cc82924a9ba78d3444dc81bfe97b5796f1cb6d7868005df51743e"} Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.066520 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0e03c189-6d6b-4b11-8de3-0802c037a207","Type":"ContainerStarted","Data":"51a0b9d9814f664510feca7f841592bee5397f9f75d6f3cce004e60ccf873bc8"} Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.070123 5039 generic.go:334] "Generic (PLEG): container finished" podID="adef1bb6-0564-4002-ad8a-512c2c2736b2" containerID="b47005ab18d514b62647dba5967bfb07586ff56dfaac573fd63e2fed384162e6" exitCode=0 Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.070157 5039 generic.go:334] "Generic (PLEG): container finished" podID="adef1bb6-0564-4002-ad8a-512c2c2736b2" containerID="d305dbaf212f7f6108b7b8002eb1e477e2efb9e90cc063455252685c0d6928be" exitCode=143 Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.070177 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"adef1bb6-0564-4002-ad8a-512c2c2736b2","Type":"ContainerDied","Data":"b47005ab18d514b62647dba5967bfb07586ff56dfaac573fd63e2fed384162e6"} Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.070199 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"adef1bb6-0564-4002-ad8a-512c2c2736b2","Type":"ContainerDied","Data":"d305dbaf212f7f6108b7b8002eb1e477e2efb9e90cc063455252685c0d6928be"} Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.070202 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.070212 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"adef1bb6-0564-4002-ad8a-512c2c2736b2","Type":"ContainerDied","Data":"f3948f6c3761e343928caf6ce757066653dd849b4b1a3dfcad414c1392193647"} Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.070230 5039 scope.go:117] "RemoveContainer" containerID="b47005ab18d514b62647dba5967bfb07586ff56dfaac573fd63e2fed384162e6" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.107293 5039 scope.go:117] "RemoveContainer" containerID="d305dbaf212f7f6108b7b8002eb1e477e2efb9e90cc063455252685c0d6928be" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.111743 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.139691 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.148332 5039 scope.go:117] "RemoveContainer" containerID="b47005ab18d514b62647dba5967bfb07586ff56dfaac573fd63e2fed384162e6" Jan 30 14:35:21 crc kubenswrapper[5039]: E0130 14:35:21.153298 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b47005ab18d514b62647dba5967bfb07586ff56dfaac573fd63e2fed384162e6\": container with ID starting with b47005ab18d514b62647dba5967bfb07586ff56dfaac573fd63e2fed384162e6 not found: ID does not exist" containerID="b47005ab18d514b62647dba5967bfb07586ff56dfaac573fd63e2fed384162e6" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.153350 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b47005ab18d514b62647dba5967bfb07586ff56dfaac573fd63e2fed384162e6"} err="failed to get container status \"b47005ab18d514b62647dba5967bfb07586ff56dfaac573fd63e2fed384162e6\": rpc error: code = NotFound desc = could not find container \"b47005ab18d514b62647dba5967bfb07586ff56dfaac573fd63e2fed384162e6\": container with ID starting with b47005ab18d514b62647dba5967bfb07586ff56dfaac573fd63e2fed384162e6 not found: ID does not exist" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.153381 5039 scope.go:117] "RemoveContainer" containerID="d305dbaf212f7f6108b7b8002eb1e477e2efb9e90cc063455252685c0d6928be" Jan 30 14:35:21 crc kubenswrapper[5039]: E0130 14:35:21.154411 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d305dbaf212f7f6108b7b8002eb1e477e2efb9e90cc063455252685c0d6928be\": container with ID starting with d305dbaf212f7f6108b7b8002eb1e477e2efb9e90cc063455252685c0d6928be not found: ID does not exist" containerID="d305dbaf212f7f6108b7b8002eb1e477e2efb9e90cc063455252685c0d6928be" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.154457 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d305dbaf212f7f6108b7b8002eb1e477e2efb9e90cc063455252685c0d6928be"} err="failed to get container status 
\"d305dbaf212f7f6108b7b8002eb1e477e2efb9e90cc063455252685c0d6928be\": rpc error: code = NotFound desc = could not find container \"d305dbaf212f7f6108b7b8002eb1e477e2efb9e90cc063455252685c0d6928be\": container with ID starting with d305dbaf212f7f6108b7b8002eb1e477e2efb9e90cc063455252685c0d6928be not found: ID does not exist" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.154475 5039 scope.go:117] "RemoveContainer" containerID="b47005ab18d514b62647dba5967bfb07586ff56dfaac573fd63e2fed384162e6" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.157506 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b47005ab18d514b62647dba5967bfb07586ff56dfaac573fd63e2fed384162e6"} err="failed to get container status \"b47005ab18d514b62647dba5967bfb07586ff56dfaac573fd63e2fed384162e6\": rpc error: code = NotFound desc = could not find container \"b47005ab18d514b62647dba5967bfb07586ff56dfaac573fd63e2fed384162e6\": container with ID starting with b47005ab18d514b62647dba5967bfb07586ff56dfaac573fd63e2fed384162e6 not found: ID does not exist" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.157551 5039 scope.go:117] "RemoveContainer" containerID="d305dbaf212f7f6108b7b8002eb1e477e2efb9e90cc063455252685c0d6928be" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.157969 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.158303 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d305dbaf212f7f6108b7b8002eb1e477e2efb9e90cc063455252685c0d6928be"} err="failed to get container status \"d305dbaf212f7f6108b7b8002eb1e477e2efb9e90cc063455252685c0d6928be\": rpc error: code = NotFound desc = could not find container \"d305dbaf212f7f6108b7b8002eb1e477e2efb9e90cc063455252685c0d6928be\": container with ID starting with d305dbaf212f7f6108b7b8002eb1e477e2efb9e90cc063455252685c0d6928be not found: ID does not exist" Jan 30 14:35:21 crc kubenswrapper[5039]: E0130 14:35:21.158406 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adef1bb6-0564-4002-ad8a-512c2c2736b2" containerName="glance-log" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.158423 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="adef1bb6-0564-4002-ad8a-512c2c2736b2" containerName="glance-log" Jan 30 14:35:21 crc kubenswrapper[5039]: E0130 14:35:21.158440 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adef1bb6-0564-4002-ad8a-512c2c2736b2" containerName="glance-httpd" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.158448 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="adef1bb6-0564-4002-ad8a-512c2c2736b2" containerName="glance-httpd" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.158687 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="adef1bb6-0564-4002-ad8a-512c2c2736b2" containerName="glance-httpd" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.158710 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="adef1bb6-0564-4002-ad8a-512c2c2736b2" containerName="glance-log" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.159846 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.163721 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.166848 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.269175 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.269334 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.269470 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.269612 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f-ceph\") pod \"glance-default-internal-api-0\" (UID: \"f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.269741 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rvbb\" (UniqueName: \"kubernetes.io/projected/f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f-kube-api-access-6rvbb\") pod \"glance-default-internal-api-0\" (UID: \"f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.269784 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.270156 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f-logs\") pod \"glance-default-internal-api-0\" (UID: \"f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.371344 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f-config-data\") pod \"glance-default-internal-api-0\" (UID: 
\"f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.371397 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.371427 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.371458 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f-ceph\") pod \"glance-default-internal-api-0\" (UID: \"f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.371495 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rvbb\" (UniqueName: \"kubernetes.io/projected/f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f-kube-api-access-6rvbb\") pod \"glance-default-internal-api-0\" (UID: \"f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.371518 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.371594 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f-logs\") pod \"glance-default-internal-api-0\" (UID: \"f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.372053 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.372089 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f-logs\") pod \"glance-default-internal-api-0\" (UID: \"f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.375747 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.375786 
5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f-ceph\") pod \"glance-default-internal-api-0\" (UID: \"f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.375904 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.384972 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.393772 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rvbb\" (UniqueName: \"kubernetes.io/projected/f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f-kube-api-access-6rvbb\") pod \"glance-default-internal-api-0\" (UID: \"f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.511569 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 14:35:22 crc kubenswrapper[5039]: I0130 14:35:22.081472 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0e03c189-6d6b-4b11-8de3-0802c037a207","Type":"ContainerStarted","Data":"b3115a50e5d5f76a09bc526ed2eb9331586ea8777796016437900fd606dd76f1"} Jan 30 14:35:22 crc kubenswrapper[5039]: I0130 14:35:22.107597 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.107576591 podStartE2EDuration="3.107576591s" podCreationTimestamp="2026-01-30 14:35:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:35:22.099718648 +0000 UTC m=+5486.760399875" watchObservedRunningTime="2026-01-30 14:35:22.107576591 +0000 UTC m=+5486.768257818" Jan 30 14:35:22 crc kubenswrapper[5039]: I0130 14:35:22.112909 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adef1bb6-0564-4002-ad8a-512c2c2736b2" path="/var/lib/kubelet/pods/adef1bb6-0564-4002-ad8a-512c2c2736b2/volumes" Jan 30 14:35:22 crc kubenswrapper[5039]: I0130 14:35:22.119746 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 14:35:23 crc kubenswrapper[5039]: I0130 14:35:23.095572 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f","Type":"ContainerStarted","Data":"a02e4737d2f49533c103755f10e62c3a232cae48bc06e0523b6e4b60a85b02b9"} Jan 30 14:35:23 crc kubenswrapper[5039]: I0130 14:35:23.095905 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f","Type":"ContainerStarted","Data":"6680122401d9aedf26523f9401ec7a6392845e472245b3e7fd586347d080c273"} Jan 30 14:35:23 crc kubenswrapper[5039]: I0130 14:35:23.095925 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f","Type":"ContainerStarted","Data":"05d351d96331e7c5507485466fbb9fbc1eb327e17a03c70f687724fb253285d4"} Jan 30 14:35:23 crc kubenswrapper[5039]: I0130 14:35:23.122658 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=2.122620983 podStartE2EDuration="2.122620983s" podCreationTimestamp="2026-01-30 14:35:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:35:23.113792733 +0000 UTC m=+5487.774473970" watchObservedRunningTime="2026-01-30 14:35:23.122620983 +0000 UTC m=+5487.783302220" Jan 30 14:35:25 crc kubenswrapper[5039]: I0130 14:35:25.683794 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7674b98d57-zbz7k" Jan 30 14:35:25 crc kubenswrapper[5039]: I0130 14:35:25.751585 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-664bfc8dd9-jlc52"] Jan 30 14:35:25 crc kubenswrapper[5039]: I0130 14:35:25.752108 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-664bfc8dd9-jlc52" podUID="c3b27add-74bb-40a6-a6ba-f2b2b1d23606" containerName="dnsmasq-dns" containerID="cri-o://b29dec4f1b260b0d0e8dab576e794a6ae169d14b9c50b349630715242704acd0" gracePeriod=10 Jan 30 14:35:26 crc kubenswrapper[5039]: I0130 14:35:26.130565 5039 generic.go:334] "Generic (PLEG): container finished" podID="c3b27add-74bb-40a6-a6ba-f2b2b1d23606" containerID="b29dec4f1b260b0d0e8dab576e794a6ae169d14b9c50b349630715242704acd0" exitCode=0 Jan 30 14:35:26 crc kubenswrapper[5039]: I0130 14:35:26.130637 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-664bfc8dd9-jlc52" event={"ID":"c3b27add-74bb-40a6-a6ba-f2b2b1d23606","Type":"ContainerDied","Data":"b29dec4f1b260b0d0e8dab576e794a6ae169d14b9c50b349630715242704acd0"} Jan 30 14:35:26 crc kubenswrapper[5039]: I0130 14:35:26.248134 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-664bfc8dd9-jlc52" Jan 30 14:35:26 crc kubenswrapper[5039]: I0130 14:35:26.261298 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlzmd\" (UniqueName: \"kubernetes.io/projected/c3b27add-74bb-40a6-a6ba-f2b2b1d23606-kube-api-access-dlzmd\") pod \"c3b27add-74bb-40a6-a6ba-f2b2b1d23606\" (UID: \"c3b27add-74bb-40a6-a6ba-f2b2b1d23606\") " Jan 30 14:35:26 crc kubenswrapper[5039]: I0130 14:35:26.261350 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3b27add-74bb-40a6-a6ba-f2b2b1d23606-ovsdbserver-nb\") pod \"c3b27add-74bb-40a6-a6ba-f2b2b1d23606\" (UID: \"c3b27add-74bb-40a6-a6ba-f2b2b1d23606\") " Jan 30 14:35:26 crc kubenswrapper[5039]: I0130 14:35:26.261406 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3b27add-74bb-40a6-a6ba-f2b2b1d23606-ovsdbserver-sb\") pod \"c3b27add-74bb-40a6-a6ba-f2b2b1d23606\" (UID: \"c3b27add-74bb-40a6-a6ba-f2b2b1d23606\") " Jan 30 14:35:26 crc kubenswrapper[5039]: I0130 14:35:26.261441 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3b27add-74bb-40a6-a6ba-f2b2b1d23606-dns-svc\") pod \"c3b27add-74bb-40a6-a6ba-f2b2b1d23606\" (UID: \"c3b27add-74bb-40a6-a6ba-f2b2b1d23606\") " Jan 30 14:35:26 crc kubenswrapper[5039]: I0130 14:35:26.261514 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3b27add-74bb-40a6-a6ba-f2b2b1d23606-config\") pod \"c3b27add-74bb-40a6-a6ba-f2b2b1d23606\" (UID: \"c3b27add-74bb-40a6-a6ba-f2b2b1d23606\") " Jan 30 14:35:26 crc kubenswrapper[5039]: I0130 14:35:26.267421 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3b27add-74bb-40a6-a6ba-f2b2b1d23606-kube-api-access-dlzmd" (OuterVolumeSpecName: "kube-api-access-dlzmd") pod "c3b27add-74bb-40a6-a6ba-f2b2b1d23606" (UID: "c3b27add-74bb-40a6-a6ba-f2b2b1d23606"). InnerVolumeSpecName "kube-api-access-dlzmd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:35:26 crc kubenswrapper[5039]: I0130 14:35:26.320108 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3b27add-74bb-40a6-a6ba-f2b2b1d23606-config" (OuterVolumeSpecName: "config") pod "c3b27add-74bb-40a6-a6ba-f2b2b1d23606" (UID: "c3b27add-74bb-40a6-a6ba-f2b2b1d23606"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:35:26 crc kubenswrapper[5039]: I0130 14:35:26.320668 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3b27add-74bb-40a6-a6ba-f2b2b1d23606-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c3b27add-74bb-40a6-a6ba-f2b2b1d23606" (UID: "c3b27add-74bb-40a6-a6ba-f2b2b1d23606"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:35:26 crc kubenswrapper[5039]: I0130 14:35:26.321510 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3b27add-74bb-40a6-a6ba-f2b2b1d23606-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c3b27add-74bb-40a6-a6ba-f2b2b1d23606" (UID: "c3b27add-74bb-40a6-a6ba-f2b2b1d23606"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:35:26 crc kubenswrapper[5039]: I0130 14:35:26.328036 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3b27add-74bb-40a6-a6ba-f2b2b1d23606-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c3b27add-74bb-40a6-a6ba-f2b2b1d23606" (UID: "c3b27add-74bb-40a6-a6ba-f2b2b1d23606"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:35:26 crc kubenswrapper[5039]: I0130 14:35:26.363127 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3b27add-74bb-40a6-a6ba-f2b2b1d23606-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:26 crc kubenswrapper[5039]: I0130 14:35:26.363168 5039 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3b27add-74bb-40a6-a6ba-f2b2b1d23606-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:26 crc kubenswrapper[5039]: I0130 14:35:26.363182 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3b27add-74bb-40a6-a6ba-f2b2b1d23606-config\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:26 crc kubenswrapper[5039]: I0130 14:35:26.363195 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlzmd\" (UniqueName: \"kubernetes.io/projected/c3b27add-74bb-40a6-a6ba-f2b2b1d23606-kube-api-access-dlzmd\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:26 crc kubenswrapper[5039]: I0130 14:35:26.363212 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3b27add-74bb-40a6-a6ba-f2b2b1d23606-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:27 crc kubenswrapper[5039]: I0130 14:35:27.142755 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-664bfc8dd9-jlc52" event={"ID":"c3b27add-74bb-40a6-a6ba-f2b2b1d23606","Type":"ContainerDied","Data":"e4c66676fa83b8d5733755c06a92e126d1e856453159abb3420871c68a71c972"} Jan 30 14:35:27 crc kubenswrapper[5039]: I0130 14:35:27.142798 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-664bfc8dd9-jlc52" Jan 30 14:35:27 crc kubenswrapper[5039]: I0130 14:35:27.142854 5039 scope.go:117] "RemoveContainer" containerID="b29dec4f1b260b0d0e8dab576e794a6ae169d14b9c50b349630715242704acd0" Jan 30 14:35:27 crc kubenswrapper[5039]: I0130 14:35:27.188438 5039 scope.go:117] "RemoveContainer" containerID="f67401eadb09676777bf53323c7f5e7c9b31dbccb1cb792dccf98a9796999970" Jan 30 14:35:27 crc kubenswrapper[5039]: I0130 14:35:27.189394 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-664bfc8dd9-jlc52"] Jan 30 14:35:27 crc kubenswrapper[5039]: I0130 14:35:27.197562 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-664bfc8dd9-jlc52"] Jan 30 14:35:28 crc kubenswrapper[5039]: I0130 14:35:28.103890 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3b27add-74bb-40a6-a6ba-f2b2b1d23606" path="/var/lib/kubelet/pods/c3b27add-74bb-40a6-a6ba-f2b2b1d23606/volumes" Jan 30 14:35:29 crc kubenswrapper[5039]: I0130 14:35:29.512370 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 14:35:29 crc kubenswrapper[5039]: I0130 14:35:29.512434 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 14:35:29 crc kubenswrapper[5039]: I0130 14:35:29.541095 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 14:35:29 crc kubenswrapper[5039]: I0130 14:35:29.553866 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 14:35:30 crc kubenswrapper[5039]: I0130 14:35:30.168766 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 14:35:30 crc kubenswrapper[5039]: I0130 14:35:30.169098 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 14:35:31 crc kubenswrapper[5039]: I0130 14:35:31.512785 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 14:35:31 crc kubenswrapper[5039]: I0130 14:35:31.512889 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 14:35:31 crc kubenswrapper[5039]: I0130 14:35:31.539314 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 14:35:31 crc kubenswrapper[5039]: I0130 14:35:31.553253 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 14:35:32 crc kubenswrapper[5039]: I0130 14:35:32.194564 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 14:35:32 crc kubenswrapper[5039]: I0130 14:35:32.195218 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 14:35:32 crc kubenswrapper[5039]: I0130 14:35:32.263033 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 14:35:32 crc kubenswrapper[5039]: I0130 14:35:32.263396 5039 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 14:35:32 crc 
kubenswrapper[5039]: I0130 14:35:32.302101 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 14:35:33 crc kubenswrapper[5039]: I0130 14:35:33.094769 5039 scope.go:117] "RemoveContainer" containerID="33707bf9f6c082f37a2c677d559a1772be55398c970c4d16a90343a477a0fad4" Jan 30 14:35:33 crc kubenswrapper[5039]: E0130 14:35:33.095004 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:35:34 crc kubenswrapper[5039]: I0130 14:35:34.209850 5039 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 14:35:34 crc kubenswrapper[5039]: I0130 14:35:34.210162 5039 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 14:35:34 crc kubenswrapper[5039]: I0130 14:35:34.371757 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 14:35:34 crc kubenswrapper[5039]: I0130 14:35:34.483303 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 14:35:40 crc kubenswrapper[5039]: I0130 14:35:40.155761 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-665mk"] Jan 30 14:35:40 crc kubenswrapper[5039]: E0130 14:35:40.157606 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3b27add-74bb-40a6-a6ba-f2b2b1d23606" containerName="dnsmasq-dns" Jan 30 14:35:40 crc kubenswrapper[5039]: I0130 14:35:40.157722 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3b27add-74bb-40a6-a6ba-f2b2b1d23606" containerName="dnsmasq-dns" Jan 30 14:35:40 crc kubenswrapper[5039]: E0130 14:35:40.157808 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3b27add-74bb-40a6-a6ba-f2b2b1d23606" containerName="init" Jan 30 14:35:40 crc kubenswrapper[5039]: I0130 14:35:40.157886 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3b27add-74bb-40a6-a6ba-f2b2b1d23606" containerName="init" Jan 30 14:35:40 crc kubenswrapper[5039]: I0130 14:35:40.158336 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3b27add-74bb-40a6-a6ba-f2b2b1d23606" containerName="dnsmasq-dns" Jan 30 14:35:40 crc kubenswrapper[5039]: I0130 14:35:40.159093 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-665mk" Jan 30 14:35:40 crc kubenswrapper[5039]: I0130 14:35:40.169748 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-665mk"] Jan 30 14:35:40 crc kubenswrapper[5039]: I0130 14:35:40.247542 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37b01eba-76d8-483f-a005-d64c7ba4fdbf-operator-scripts\") pod \"placement-db-create-665mk\" (UID: \"37b01eba-76d8-483f-a005-d64c7ba4fdbf\") " pod="openstack/placement-db-create-665mk" Jan 30 14:35:40 crc kubenswrapper[5039]: I0130 14:35:40.247640 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgkz8\" (UniqueName: \"kubernetes.io/projected/37b01eba-76d8-483f-a005-d64c7ba4fdbf-kube-api-access-zgkz8\") pod \"placement-db-create-665mk\" (UID: \"37b01eba-76d8-483f-a005-d64c7ba4fdbf\") " pod="openstack/placement-db-create-665mk" Jan 30 14:35:40 crc kubenswrapper[5039]: I0130 14:35:40.249573 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-deef-account-create-update-pgfj6"] Jan 30 14:35:40 crc kubenswrapper[5039]: I0130 14:35:40.251079 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-deef-account-create-update-pgfj6" Jan 30 14:35:40 crc kubenswrapper[5039]: I0130 14:35:40.253502 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 30 14:35:40 crc kubenswrapper[5039]: I0130 14:35:40.278220 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-deef-account-create-update-pgfj6"] Jan 30 14:35:40 crc kubenswrapper[5039]: I0130 14:35:40.349701 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37b01eba-76d8-483f-a005-d64c7ba4fdbf-operator-scripts\") pod \"placement-db-create-665mk\" (UID: \"37b01eba-76d8-483f-a005-d64c7ba4fdbf\") " pod="openstack/placement-db-create-665mk" Jan 30 14:35:40 crc kubenswrapper[5039]: I0130 14:35:40.349813 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgkz8\" (UniqueName: \"kubernetes.io/projected/37b01eba-76d8-483f-a005-d64c7ba4fdbf-kube-api-access-zgkz8\") pod \"placement-db-create-665mk\" (UID: \"37b01eba-76d8-483f-a005-d64c7ba4fdbf\") " pod="openstack/placement-db-create-665mk" Jan 30 14:35:40 crc kubenswrapper[5039]: I0130 14:35:40.349882 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed011ca6-eae3-4be5-8f3c-49996a5c6d68-operator-scripts\") pod \"placement-deef-account-create-update-pgfj6\" (UID: \"ed011ca6-eae3-4be5-8f3c-49996a5c6d68\") " pod="openstack/placement-deef-account-create-update-pgfj6" Jan 30 14:35:40 crc kubenswrapper[5039]: I0130 14:35:40.349915 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hc6xb\" (UniqueName: \"kubernetes.io/projected/ed011ca6-eae3-4be5-8f3c-49996a5c6d68-kube-api-access-hc6xb\") pod \"placement-deef-account-create-update-pgfj6\" (UID: \"ed011ca6-eae3-4be5-8f3c-49996a5c6d68\") " pod="openstack/placement-deef-account-create-update-pgfj6" Jan 30 14:35:40 crc kubenswrapper[5039]: I0130 14:35:40.350722 5039 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37b01eba-76d8-483f-a005-d64c7ba4fdbf-operator-scripts\") pod \"placement-db-create-665mk\" (UID: \"37b01eba-76d8-483f-a005-d64c7ba4fdbf\") " pod="openstack/placement-db-create-665mk" Jan 30 14:35:40 crc kubenswrapper[5039]: I0130 14:35:40.370469 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgkz8\" (UniqueName: \"kubernetes.io/projected/37b01eba-76d8-483f-a005-d64c7ba4fdbf-kube-api-access-zgkz8\") pod \"placement-db-create-665mk\" (UID: \"37b01eba-76d8-483f-a005-d64c7ba4fdbf\") " pod="openstack/placement-db-create-665mk" Jan 30 14:35:40 crc kubenswrapper[5039]: I0130 14:35:40.451379 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed011ca6-eae3-4be5-8f3c-49996a5c6d68-operator-scripts\") pod \"placement-deef-account-create-update-pgfj6\" (UID: \"ed011ca6-eae3-4be5-8f3c-49996a5c6d68\") " pod="openstack/placement-deef-account-create-update-pgfj6" Jan 30 14:35:40 crc kubenswrapper[5039]: I0130 14:35:40.451433 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hc6xb\" (UniqueName: \"kubernetes.io/projected/ed011ca6-eae3-4be5-8f3c-49996a5c6d68-kube-api-access-hc6xb\") pod \"placement-deef-account-create-update-pgfj6\" (UID: \"ed011ca6-eae3-4be5-8f3c-49996a5c6d68\") " pod="openstack/placement-deef-account-create-update-pgfj6" Jan 30 14:35:40 crc kubenswrapper[5039]: I0130 14:35:40.452530 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed011ca6-eae3-4be5-8f3c-49996a5c6d68-operator-scripts\") pod \"placement-deef-account-create-update-pgfj6\" (UID: \"ed011ca6-eae3-4be5-8f3c-49996a5c6d68\") " pod="openstack/placement-deef-account-create-update-pgfj6" Jan 30 14:35:40 crc kubenswrapper[5039]: I0130 14:35:40.473466 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hc6xb\" (UniqueName: \"kubernetes.io/projected/ed011ca6-eae3-4be5-8f3c-49996a5c6d68-kube-api-access-hc6xb\") pod \"placement-deef-account-create-update-pgfj6\" (UID: \"ed011ca6-eae3-4be5-8f3c-49996a5c6d68\") " pod="openstack/placement-deef-account-create-update-pgfj6" Jan 30 14:35:40 crc kubenswrapper[5039]: I0130 14:35:40.486365 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-665mk" Jan 30 14:35:40 crc kubenswrapper[5039]: I0130 14:35:40.567368 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-deef-account-create-update-pgfj6" Jan 30 14:35:41 crc kubenswrapper[5039]: I0130 14:35:41.058479 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-deef-account-create-update-pgfj6"] Jan 30 14:35:41 crc kubenswrapper[5039]: W0130 14:35:41.061498 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded011ca6_eae3_4be5_8f3c_49996a5c6d68.slice/crio-5aa754ef8c0783b4373a7c08d6eaf4ca5721c72e768ea420992db1ddd61401a1 WatchSource:0}: Error finding container 5aa754ef8c0783b4373a7c08d6eaf4ca5721c72e768ea420992db1ddd61401a1: Status 404 returned error can't find the container with id 5aa754ef8c0783b4373a7c08d6eaf4ca5721c72e768ea420992db1ddd61401a1 Jan 30 14:35:41 crc kubenswrapper[5039]: W0130 14:35:41.067876 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37b01eba_76d8_483f_a005_d64c7ba4fdbf.slice/crio-c54090bfce732fb28dbc68dcf81b1bb4c2fd012e5cd22a67d1bfb6bf89a8a507 WatchSource:0}: Error finding container c54090bfce732fb28dbc68dcf81b1bb4c2fd012e5cd22a67d1bfb6bf89a8a507: Status 404 returned error can't find the container with id c54090bfce732fb28dbc68dcf81b1bb4c2fd012e5cd22a67d1bfb6bf89a8a507 Jan 30 14:35:41 crc kubenswrapper[5039]: I0130 14:35:41.070722 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-665mk"] Jan 30 14:35:41 crc kubenswrapper[5039]: I0130 14:35:41.282243 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-deef-account-create-update-pgfj6" event={"ID":"ed011ca6-eae3-4be5-8f3c-49996a5c6d68","Type":"ContainerStarted","Data":"c9229b39508be1f5a4ce1bdafa01cd58765db9d17af8ebe755c1a62ced508cdb"} Jan 30 14:35:41 crc kubenswrapper[5039]: I0130 14:35:41.282652 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-deef-account-create-update-pgfj6" event={"ID":"ed011ca6-eae3-4be5-8f3c-49996a5c6d68","Type":"ContainerStarted","Data":"5aa754ef8c0783b4373a7c08d6eaf4ca5721c72e768ea420992db1ddd61401a1"} Jan 30 14:35:41 crc kubenswrapper[5039]: I0130 14:35:41.287124 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-665mk" event={"ID":"37b01eba-76d8-483f-a005-d64c7ba4fdbf","Type":"ContainerStarted","Data":"7af31ceaf69b8bf4dea9f0f711178f16c26469acf40769dbe1732874093a93fc"} Jan 30 14:35:41 crc kubenswrapper[5039]: I0130 14:35:41.287174 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-665mk" event={"ID":"37b01eba-76d8-483f-a005-d64c7ba4fdbf","Type":"ContainerStarted","Data":"c54090bfce732fb28dbc68dcf81b1bb4c2fd012e5cd22a67d1bfb6bf89a8a507"} Jan 30 14:35:41 crc kubenswrapper[5039]: I0130 14:35:41.305724 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-deef-account-create-update-pgfj6" podStartSLOduration=1.30570097 podStartE2EDuration="1.30570097s" podCreationTimestamp="2026-01-30 14:35:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:35:41.296278164 +0000 UTC m=+5505.956959411" watchObservedRunningTime="2026-01-30 14:35:41.30570097 +0000 UTC m=+5505.966382197" Jan 30 14:35:41 crc kubenswrapper[5039]: I0130 14:35:41.314734 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-665mk" 
podStartSLOduration=1.314712904 podStartE2EDuration="1.314712904s" podCreationTimestamp="2026-01-30 14:35:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:35:41.312321219 +0000 UTC m=+5505.973002456" watchObservedRunningTime="2026-01-30 14:35:41.314712904 +0000 UTC m=+5505.975394131" Jan 30 14:35:42 crc kubenswrapper[5039]: I0130 14:35:42.296999 5039 generic.go:334] "Generic (PLEG): container finished" podID="ed011ca6-eae3-4be5-8f3c-49996a5c6d68" containerID="c9229b39508be1f5a4ce1bdafa01cd58765db9d17af8ebe755c1a62ced508cdb" exitCode=0 Jan 30 14:35:42 crc kubenswrapper[5039]: I0130 14:35:42.297158 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-deef-account-create-update-pgfj6" event={"ID":"ed011ca6-eae3-4be5-8f3c-49996a5c6d68","Type":"ContainerDied","Data":"c9229b39508be1f5a4ce1bdafa01cd58765db9d17af8ebe755c1a62ced508cdb"} Jan 30 14:35:42 crc kubenswrapper[5039]: I0130 14:35:42.300734 5039 generic.go:334] "Generic (PLEG): container finished" podID="37b01eba-76d8-483f-a005-d64c7ba4fdbf" containerID="7af31ceaf69b8bf4dea9f0f711178f16c26469acf40769dbe1732874093a93fc" exitCode=0 Jan 30 14:35:42 crc kubenswrapper[5039]: I0130 14:35:42.300778 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-665mk" event={"ID":"37b01eba-76d8-483f-a005-d64c7ba4fdbf","Type":"ContainerDied","Data":"7af31ceaf69b8bf4dea9f0f711178f16c26469acf40769dbe1732874093a93fc"} Jan 30 14:35:43 crc kubenswrapper[5039]: I0130 14:35:43.707277 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-665mk" Jan 30 14:35:43 crc kubenswrapper[5039]: I0130 14:35:43.717517 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-deef-account-create-update-pgfj6" Jan 30 14:35:43 crc kubenswrapper[5039]: I0130 14:35:43.808473 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hc6xb\" (UniqueName: \"kubernetes.io/projected/ed011ca6-eae3-4be5-8f3c-49996a5c6d68-kube-api-access-hc6xb\") pod \"ed011ca6-eae3-4be5-8f3c-49996a5c6d68\" (UID: \"ed011ca6-eae3-4be5-8f3c-49996a5c6d68\") " Jan 30 14:35:43 crc kubenswrapper[5039]: I0130 14:35:43.808620 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37b01eba-76d8-483f-a005-d64c7ba4fdbf-operator-scripts\") pod \"37b01eba-76d8-483f-a005-d64c7ba4fdbf\" (UID: \"37b01eba-76d8-483f-a005-d64c7ba4fdbf\") " Jan 30 14:35:43 crc kubenswrapper[5039]: I0130 14:35:43.808728 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgkz8\" (UniqueName: \"kubernetes.io/projected/37b01eba-76d8-483f-a005-d64c7ba4fdbf-kube-api-access-zgkz8\") pod \"37b01eba-76d8-483f-a005-d64c7ba4fdbf\" (UID: \"37b01eba-76d8-483f-a005-d64c7ba4fdbf\") " Jan 30 14:35:43 crc kubenswrapper[5039]: I0130 14:35:43.809392 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37b01eba-76d8-483f-a005-d64c7ba4fdbf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "37b01eba-76d8-483f-a005-d64c7ba4fdbf" (UID: "37b01eba-76d8-483f-a005-d64c7ba4fdbf"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:35:43 crc kubenswrapper[5039]: I0130 14:35:43.809572 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed011ca6-eae3-4be5-8f3c-49996a5c6d68-operator-scripts\") pod \"ed011ca6-eae3-4be5-8f3c-49996a5c6d68\" (UID: \"ed011ca6-eae3-4be5-8f3c-49996a5c6d68\") " Jan 30 14:35:43 crc kubenswrapper[5039]: I0130 14:35:43.809946 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37b01eba-76d8-483f-a005-d64c7ba4fdbf-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:43 crc kubenswrapper[5039]: I0130 14:35:43.810333 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed011ca6-eae3-4be5-8f3c-49996a5c6d68-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ed011ca6-eae3-4be5-8f3c-49996a5c6d68" (UID: "ed011ca6-eae3-4be5-8f3c-49996a5c6d68"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:35:43 crc kubenswrapper[5039]: I0130 14:35:43.813626 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37b01eba-76d8-483f-a005-d64c7ba4fdbf-kube-api-access-zgkz8" (OuterVolumeSpecName: "kube-api-access-zgkz8") pod "37b01eba-76d8-483f-a005-d64c7ba4fdbf" (UID: "37b01eba-76d8-483f-a005-d64c7ba4fdbf"). InnerVolumeSpecName "kube-api-access-zgkz8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:35:43 crc kubenswrapper[5039]: I0130 14:35:43.813663 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed011ca6-eae3-4be5-8f3c-49996a5c6d68-kube-api-access-hc6xb" (OuterVolumeSpecName: "kube-api-access-hc6xb") pod "ed011ca6-eae3-4be5-8f3c-49996a5c6d68" (UID: "ed011ca6-eae3-4be5-8f3c-49996a5c6d68"). InnerVolumeSpecName "kube-api-access-hc6xb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:35:43 crc kubenswrapper[5039]: I0130 14:35:43.911757 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgkz8\" (UniqueName: \"kubernetes.io/projected/37b01eba-76d8-483f-a005-d64c7ba4fdbf-kube-api-access-zgkz8\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:43 crc kubenswrapper[5039]: I0130 14:35:43.911807 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed011ca6-eae3-4be5-8f3c-49996a5c6d68-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:43 crc kubenswrapper[5039]: I0130 14:35:43.911816 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hc6xb\" (UniqueName: \"kubernetes.io/projected/ed011ca6-eae3-4be5-8f3c-49996a5c6d68-kube-api-access-hc6xb\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:44 crc kubenswrapper[5039]: I0130 14:35:44.317154 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-deef-account-create-update-pgfj6" event={"ID":"ed011ca6-eae3-4be5-8f3c-49996a5c6d68","Type":"ContainerDied","Data":"5aa754ef8c0783b4373a7c08d6eaf4ca5721c72e768ea420992db1ddd61401a1"} Jan 30 14:35:44 crc kubenswrapper[5039]: I0130 14:35:44.317192 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5aa754ef8c0783b4373a7c08d6eaf4ca5721c72e768ea420992db1ddd61401a1" Jan 30 14:35:44 crc kubenswrapper[5039]: I0130 14:35:44.317762 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-deef-account-create-update-pgfj6" Jan 30 14:35:44 crc kubenswrapper[5039]: I0130 14:35:44.319413 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-665mk" event={"ID":"37b01eba-76d8-483f-a005-d64c7ba4fdbf","Type":"ContainerDied","Data":"c54090bfce732fb28dbc68dcf81b1bb4c2fd012e5cd22a67d1bfb6bf89a8a507"} Jan 30 14:35:44 crc kubenswrapper[5039]: I0130 14:35:44.319450 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c54090bfce732fb28dbc68dcf81b1bb4c2fd012e5cd22a67d1bfb6bf89a8a507" Jan 30 14:35:44 crc kubenswrapper[5039]: I0130 14:35:44.319509 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-665mk" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.094188 5039 scope.go:117] "RemoveContainer" containerID="33707bf9f6c082f37a2c677d559a1772be55398c970c4d16a90343a477a0fad4" Jan 30 14:35:45 crc kubenswrapper[5039]: E0130 14:35:45.095634 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.567068 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-df5c4d669-gcsl9"] Jan 30 14:35:45 crc kubenswrapper[5039]: E0130 14:35:45.568005 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed011ca6-eae3-4be5-8f3c-49996a5c6d68" containerName="mariadb-account-create-update" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.568038 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed011ca6-eae3-4be5-8f3c-49996a5c6d68" containerName="mariadb-account-create-update" Jan 30 14:35:45 crc kubenswrapper[5039]: E0130 14:35:45.568061 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37b01eba-76d8-483f-a005-d64c7ba4fdbf" containerName="mariadb-database-create" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.568068 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="37b01eba-76d8-483f-a005-d64c7ba4fdbf" containerName="mariadb-database-create" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.568238 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed011ca6-eae3-4be5-8f3c-49996a5c6d68" containerName="mariadb-account-create-update" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.568259 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="37b01eba-76d8-483f-a005-d64c7ba4fdbf" containerName="mariadb-database-create" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.569202 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-df5c4d669-gcsl9" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.578890 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-8zmlz"] Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.580613 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-8zmlz" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.583425 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-d5vhk" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.583608 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.583665 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.592431 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-df5c4d669-gcsl9"] Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.609107 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-8zmlz"] Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.659370 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fac94945-eac9-4837-ad5a-71d9931c547d-dns-svc\") pod \"dnsmasq-dns-df5c4d669-gcsl9\" (UID: \"fac94945-eac9-4837-ad5a-71d9931c547d\") " pod="openstack/dnsmasq-dns-df5c4d669-gcsl9" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.659445 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bc0ac40-f14d-45cb-b7de-87599e7cce2c-config-data\") pod \"placement-db-sync-8zmlz\" (UID: \"5bc0ac40-f14d-45cb-b7de-87599e7cce2c\") " pod="openstack/placement-db-sync-8zmlz" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.659501 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fac94945-eac9-4837-ad5a-71d9931c547d-config\") pod \"dnsmasq-dns-df5c4d669-gcsl9\" (UID: \"fac94945-eac9-4837-ad5a-71d9931c547d\") " pod="openstack/dnsmasq-dns-df5c4d669-gcsl9" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.659524 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th2t7\" (UniqueName: \"kubernetes.io/projected/5bc0ac40-f14d-45cb-b7de-87599e7cce2c-kube-api-access-th2t7\") pod \"placement-db-sync-8zmlz\" (UID: \"5bc0ac40-f14d-45cb-b7de-87599e7cce2c\") " pod="openstack/placement-db-sync-8zmlz" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.659556 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fac94945-eac9-4837-ad5a-71d9931c547d-ovsdbserver-nb\") pod \"dnsmasq-dns-df5c4d669-gcsl9\" (UID: \"fac94945-eac9-4837-ad5a-71d9931c547d\") " pod="openstack/dnsmasq-dns-df5c4d669-gcsl9" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.659598 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bc0ac40-f14d-45cb-b7de-87599e7cce2c-combined-ca-bundle\") pod \"placement-db-sync-8zmlz\" (UID: \"5bc0ac40-f14d-45cb-b7de-87599e7cce2c\") " pod="openstack/placement-db-sync-8zmlz" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.659670 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssx9m\" (UniqueName: 
\"kubernetes.io/projected/fac94945-eac9-4837-ad5a-71d9931c547d-kube-api-access-ssx9m\") pod \"dnsmasq-dns-df5c4d669-gcsl9\" (UID: \"fac94945-eac9-4837-ad5a-71d9931c547d\") " pod="openstack/dnsmasq-dns-df5c4d669-gcsl9" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.659702 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fac94945-eac9-4837-ad5a-71d9931c547d-ovsdbserver-sb\") pod \"dnsmasq-dns-df5c4d669-gcsl9\" (UID: \"fac94945-eac9-4837-ad5a-71d9931c547d\") " pod="openstack/dnsmasq-dns-df5c4d669-gcsl9" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.659732 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bc0ac40-f14d-45cb-b7de-87599e7cce2c-logs\") pod \"placement-db-sync-8zmlz\" (UID: \"5bc0ac40-f14d-45cb-b7de-87599e7cce2c\") " pod="openstack/placement-db-sync-8zmlz" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.659760 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5bc0ac40-f14d-45cb-b7de-87599e7cce2c-scripts\") pod \"placement-db-sync-8zmlz\" (UID: \"5bc0ac40-f14d-45cb-b7de-87599e7cce2c\") " pod="openstack/placement-db-sync-8zmlz" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.761404 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5bc0ac40-f14d-45cb-b7de-87599e7cce2c-scripts\") pod \"placement-db-sync-8zmlz\" (UID: \"5bc0ac40-f14d-45cb-b7de-87599e7cce2c\") " pod="openstack/placement-db-sync-8zmlz" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.761479 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fac94945-eac9-4837-ad5a-71d9931c547d-dns-svc\") pod \"dnsmasq-dns-df5c4d669-gcsl9\" (UID: \"fac94945-eac9-4837-ad5a-71d9931c547d\") " pod="openstack/dnsmasq-dns-df5c4d669-gcsl9" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.761516 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bc0ac40-f14d-45cb-b7de-87599e7cce2c-config-data\") pod \"placement-db-sync-8zmlz\" (UID: \"5bc0ac40-f14d-45cb-b7de-87599e7cce2c\") " pod="openstack/placement-db-sync-8zmlz" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.761561 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fac94945-eac9-4837-ad5a-71d9931c547d-config\") pod \"dnsmasq-dns-df5c4d669-gcsl9\" (UID: \"fac94945-eac9-4837-ad5a-71d9931c547d\") " pod="openstack/dnsmasq-dns-df5c4d669-gcsl9" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.761584 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-th2t7\" (UniqueName: \"kubernetes.io/projected/5bc0ac40-f14d-45cb-b7de-87599e7cce2c-kube-api-access-th2t7\") pod \"placement-db-sync-8zmlz\" (UID: \"5bc0ac40-f14d-45cb-b7de-87599e7cce2c\") " pod="openstack/placement-db-sync-8zmlz" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.761604 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fac94945-eac9-4837-ad5a-71d9931c547d-ovsdbserver-nb\") pod \"dnsmasq-dns-df5c4d669-gcsl9\" (UID: 
\"fac94945-eac9-4837-ad5a-71d9931c547d\") " pod="openstack/dnsmasq-dns-df5c4d669-gcsl9" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.761633 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bc0ac40-f14d-45cb-b7de-87599e7cce2c-combined-ca-bundle\") pod \"placement-db-sync-8zmlz\" (UID: \"5bc0ac40-f14d-45cb-b7de-87599e7cce2c\") " pod="openstack/placement-db-sync-8zmlz" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.761686 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssx9m\" (UniqueName: \"kubernetes.io/projected/fac94945-eac9-4837-ad5a-71d9931c547d-kube-api-access-ssx9m\") pod \"dnsmasq-dns-df5c4d669-gcsl9\" (UID: \"fac94945-eac9-4837-ad5a-71d9931c547d\") " pod="openstack/dnsmasq-dns-df5c4d669-gcsl9" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.761718 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fac94945-eac9-4837-ad5a-71d9931c547d-ovsdbserver-sb\") pod \"dnsmasq-dns-df5c4d669-gcsl9\" (UID: \"fac94945-eac9-4837-ad5a-71d9931c547d\") " pod="openstack/dnsmasq-dns-df5c4d669-gcsl9" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.761752 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bc0ac40-f14d-45cb-b7de-87599e7cce2c-logs\") pod \"placement-db-sync-8zmlz\" (UID: \"5bc0ac40-f14d-45cb-b7de-87599e7cce2c\") " pod="openstack/placement-db-sync-8zmlz" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.762181 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bc0ac40-f14d-45cb-b7de-87599e7cce2c-logs\") pod \"placement-db-sync-8zmlz\" (UID: \"5bc0ac40-f14d-45cb-b7de-87599e7cce2c\") " pod="openstack/placement-db-sync-8zmlz" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.762757 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fac94945-eac9-4837-ad5a-71d9931c547d-config\") pod \"dnsmasq-dns-df5c4d669-gcsl9\" (UID: \"fac94945-eac9-4837-ad5a-71d9931c547d\") " pod="openstack/dnsmasq-dns-df5c4d669-gcsl9" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.762952 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fac94945-eac9-4837-ad5a-71d9931c547d-ovsdbserver-nb\") pod \"dnsmasq-dns-df5c4d669-gcsl9\" (UID: \"fac94945-eac9-4837-ad5a-71d9931c547d\") " pod="openstack/dnsmasq-dns-df5c4d669-gcsl9" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.762952 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fac94945-eac9-4837-ad5a-71d9931c547d-ovsdbserver-sb\") pod \"dnsmasq-dns-df5c4d669-gcsl9\" (UID: \"fac94945-eac9-4837-ad5a-71d9931c547d\") " pod="openstack/dnsmasq-dns-df5c4d669-gcsl9" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.763123 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fac94945-eac9-4837-ad5a-71d9931c547d-dns-svc\") pod \"dnsmasq-dns-df5c4d669-gcsl9\" (UID: \"fac94945-eac9-4837-ad5a-71d9931c547d\") " pod="openstack/dnsmasq-dns-df5c4d669-gcsl9" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.766318 5039 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5bc0ac40-f14d-45cb-b7de-87599e7cce2c-scripts\") pod \"placement-db-sync-8zmlz\" (UID: \"5bc0ac40-f14d-45cb-b7de-87599e7cce2c\") " pod="openstack/placement-db-sync-8zmlz" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.766653 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bc0ac40-f14d-45cb-b7de-87599e7cce2c-config-data\") pod \"placement-db-sync-8zmlz\" (UID: \"5bc0ac40-f14d-45cb-b7de-87599e7cce2c\") " pod="openstack/placement-db-sync-8zmlz" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.768336 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bc0ac40-f14d-45cb-b7de-87599e7cce2c-combined-ca-bundle\") pod \"placement-db-sync-8zmlz\" (UID: \"5bc0ac40-f14d-45cb-b7de-87599e7cce2c\") " pod="openstack/placement-db-sync-8zmlz" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.780702 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-th2t7\" (UniqueName: \"kubernetes.io/projected/5bc0ac40-f14d-45cb-b7de-87599e7cce2c-kube-api-access-th2t7\") pod \"placement-db-sync-8zmlz\" (UID: \"5bc0ac40-f14d-45cb-b7de-87599e7cce2c\") " pod="openstack/placement-db-sync-8zmlz" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.781177 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssx9m\" (UniqueName: \"kubernetes.io/projected/fac94945-eac9-4837-ad5a-71d9931c547d-kube-api-access-ssx9m\") pod \"dnsmasq-dns-df5c4d669-gcsl9\" (UID: \"fac94945-eac9-4837-ad5a-71d9931c547d\") " pod="openstack/dnsmasq-dns-df5c4d669-gcsl9" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.897055 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-df5c4d669-gcsl9" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.917841 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-8zmlz" Jan 30 14:35:46 crc kubenswrapper[5039]: I0130 14:35:46.254137 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-8zmlz"] Jan 30 14:35:46 crc kubenswrapper[5039]: I0130 14:35:46.346188 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8zmlz" event={"ID":"5bc0ac40-f14d-45cb-b7de-87599e7cce2c","Type":"ContainerStarted","Data":"3d5974d384e8ffee562976a84272d1fd85c24b5d7ea8fb86eaf4010b7687e005"} Jan 30 14:35:46 crc kubenswrapper[5039]: I0130 14:35:46.348298 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-df5c4d669-gcsl9"] Jan 30 14:35:46 crc kubenswrapper[5039]: W0130 14:35:46.350254 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfac94945_eac9_4837_ad5a_71d9931c547d.slice/crio-a0524145ff98a50860d28711a6dab663f6bba90928fe4c3dc537f7796fba1de3 WatchSource:0}: Error finding container a0524145ff98a50860d28711a6dab663f6bba90928fe4c3dc537f7796fba1de3: Status 404 returned error can't find the container with id a0524145ff98a50860d28711a6dab663f6bba90928fe4c3dc537f7796fba1de3 Jan 30 14:35:46 crc kubenswrapper[5039]: I0130 14:35:46.563581 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-s9hgl"] Jan 30 14:35:46 crc kubenswrapper[5039]: I0130 14:35:46.566318 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s9hgl" Jan 30 14:35:46 crc kubenswrapper[5039]: I0130 14:35:46.581063 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s9hgl"] Jan 30 14:35:46 crc kubenswrapper[5039]: I0130 14:35:46.674004 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9da44e6e-4dc4-4e63-98f5-fc5713234ea3-catalog-content\") pod \"certified-operators-s9hgl\" (UID: \"9da44e6e-4dc4-4e63-98f5-fc5713234ea3\") " pod="openshift-marketplace/certified-operators-s9hgl" Jan 30 14:35:46 crc kubenswrapper[5039]: I0130 14:35:46.674171 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfmdx\" (UniqueName: \"kubernetes.io/projected/9da44e6e-4dc4-4e63-98f5-fc5713234ea3-kube-api-access-hfmdx\") pod \"certified-operators-s9hgl\" (UID: \"9da44e6e-4dc4-4e63-98f5-fc5713234ea3\") " pod="openshift-marketplace/certified-operators-s9hgl" Jan 30 14:35:46 crc kubenswrapper[5039]: I0130 14:35:46.674271 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9da44e6e-4dc4-4e63-98f5-fc5713234ea3-utilities\") pod \"certified-operators-s9hgl\" (UID: \"9da44e6e-4dc4-4e63-98f5-fc5713234ea3\") " pod="openshift-marketplace/certified-operators-s9hgl" Jan 30 14:35:46 crc kubenswrapper[5039]: I0130 14:35:46.775915 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9da44e6e-4dc4-4e63-98f5-fc5713234ea3-utilities\") pod \"certified-operators-s9hgl\" (UID: \"9da44e6e-4dc4-4e63-98f5-fc5713234ea3\") " pod="openshift-marketplace/certified-operators-s9hgl" Jan 30 14:35:46 crc kubenswrapper[5039]: I0130 14:35:46.776030 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9da44e6e-4dc4-4e63-98f5-fc5713234ea3-catalog-content\") pod \"certified-operators-s9hgl\" (UID: \"9da44e6e-4dc4-4e63-98f5-fc5713234ea3\") " pod="openshift-marketplace/certified-operators-s9hgl" Jan 30 14:35:46 crc kubenswrapper[5039]: I0130 14:35:46.776177 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfmdx\" (UniqueName: \"kubernetes.io/projected/9da44e6e-4dc4-4e63-98f5-fc5713234ea3-kube-api-access-hfmdx\") pod \"certified-operators-s9hgl\" (UID: \"9da44e6e-4dc4-4e63-98f5-fc5713234ea3\") " pod="openshift-marketplace/certified-operators-s9hgl" Jan 30 14:35:46 crc kubenswrapper[5039]: I0130 14:35:46.776668 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9da44e6e-4dc4-4e63-98f5-fc5713234ea3-catalog-content\") pod \"certified-operators-s9hgl\" (UID: \"9da44e6e-4dc4-4e63-98f5-fc5713234ea3\") " pod="openshift-marketplace/certified-operators-s9hgl" Jan 30 14:35:46 crc kubenswrapper[5039]: I0130 14:35:46.776711 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9da44e6e-4dc4-4e63-98f5-fc5713234ea3-utilities\") pod \"certified-operators-s9hgl\" (UID: \"9da44e6e-4dc4-4e63-98f5-fc5713234ea3\") " pod="openshift-marketplace/certified-operators-s9hgl" Jan 30 14:35:46 crc kubenswrapper[5039]: I0130 14:35:46.799302 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfmdx\" (UniqueName: \"kubernetes.io/projected/9da44e6e-4dc4-4e63-98f5-fc5713234ea3-kube-api-access-hfmdx\") pod \"certified-operators-s9hgl\" (UID: \"9da44e6e-4dc4-4e63-98f5-fc5713234ea3\") " pod="openshift-marketplace/certified-operators-s9hgl" Jan 30 14:35:46 crc kubenswrapper[5039]: I0130 14:35:46.918653 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s9hgl" Jan 30 14:35:47 crc kubenswrapper[5039]: I0130 14:35:47.356479 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8zmlz" event={"ID":"5bc0ac40-f14d-45cb-b7de-87599e7cce2c","Type":"ContainerStarted","Data":"7a92c026171f41864eea868c4f1286ce326acf58ed1afa915c107dfaaa51644b"} Jan 30 14:35:47 crc kubenswrapper[5039]: I0130 14:35:47.366459 5039 generic.go:334] "Generic (PLEG): container finished" podID="fac94945-eac9-4837-ad5a-71d9931c547d" containerID="d37047349fcc9c3dadef9f110e35b34341e7f5374d78a74a158d6fe5c4943e0c" exitCode=0 Jan 30 14:35:47 crc kubenswrapper[5039]: I0130 14:35:47.366535 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-df5c4d669-gcsl9" event={"ID":"fac94945-eac9-4837-ad5a-71d9931c547d","Type":"ContainerDied","Data":"d37047349fcc9c3dadef9f110e35b34341e7f5374d78a74a158d6fe5c4943e0c"} Jan 30 14:35:47 crc kubenswrapper[5039]: I0130 14:35:47.366576 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-df5c4d669-gcsl9" event={"ID":"fac94945-eac9-4837-ad5a-71d9931c547d","Type":"ContainerStarted","Data":"a0524145ff98a50860d28711a6dab663f6bba90928fe4c3dc537f7796fba1de3"} Jan 30 14:35:47 crc kubenswrapper[5039]: I0130 14:35:47.417935 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-8zmlz" podStartSLOduration=2.417917223 podStartE2EDuration="2.417917223s" podCreationTimestamp="2026-01-30 14:35:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:35:47.391350993 +0000 UTC m=+5512.052032230" watchObservedRunningTime="2026-01-30 14:35:47.417917223 +0000 UTC m=+5512.078598470" Jan 30 14:35:47 crc kubenswrapper[5039]: I0130 14:35:47.605066 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s9hgl"] Jan 30 14:35:48 crc kubenswrapper[5039]: I0130 14:35:48.375418 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-df5c4d669-gcsl9" event={"ID":"fac94945-eac9-4837-ad5a-71d9931c547d","Type":"ContainerStarted","Data":"ec9b78e8553cb6ff167a2d9b6af2ca408d3eb381596a6ed505d75f13e003945b"} Jan 30 14:35:48 crc kubenswrapper[5039]: I0130 14:35:48.376527 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-df5c4d669-gcsl9" Jan 30 14:35:48 crc kubenswrapper[5039]: I0130 14:35:48.378649 5039 generic.go:334] "Generic (PLEG): container finished" podID="9da44e6e-4dc4-4e63-98f5-fc5713234ea3" containerID="2cf09bf5e9137604ff48844cdae9ac0f34eb465c070481ae46eb0d9c20f26c06" exitCode=0 Jan 30 14:35:48 crc kubenswrapper[5039]: I0130 14:35:48.378735 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s9hgl" event={"ID":"9da44e6e-4dc4-4e63-98f5-fc5713234ea3","Type":"ContainerDied","Data":"2cf09bf5e9137604ff48844cdae9ac0f34eb465c070481ae46eb0d9c20f26c06"} Jan 30 14:35:48 crc kubenswrapper[5039]: I0130 14:35:48.378761 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s9hgl" event={"ID":"9da44e6e-4dc4-4e63-98f5-fc5713234ea3","Type":"ContainerStarted","Data":"db69cef6ae201da7065091778067dd429c823e60ca994042ac18e757bc8d4222"} Jan 30 14:35:48 crc kubenswrapper[5039]: I0130 14:35:48.380723 5039 generic.go:334] "Generic (PLEG): container finished" 
podID="5bc0ac40-f14d-45cb-b7de-87599e7cce2c" containerID="7a92c026171f41864eea868c4f1286ce326acf58ed1afa915c107dfaaa51644b" exitCode=0 Jan 30 14:35:48 crc kubenswrapper[5039]: I0130 14:35:48.380788 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8zmlz" event={"ID":"5bc0ac40-f14d-45cb-b7de-87599e7cce2c","Type":"ContainerDied","Data":"7a92c026171f41864eea868c4f1286ce326acf58ed1afa915c107dfaaa51644b"} Jan 30 14:35:48 crc kubenswrapper[5039]: I0130 14:35:48.403818 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-df5c4d669-gcsl9" podStartSLOduration=3.403792134 podStartE2EDuration="3.403792134s" podCreationTimestamp="2026-01-30 14:35:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:35:48.396928198 +0000 UTC m=+5513.057609435" watchObservedRunningTime="2026-01-30 14:35:48.403792134 +0000 UTC m=+5513.064473351" Jan 30 14:35:49 crc kubenswrapper[5039]: I0130 14:35:49.722386 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-8zmlz" Jan 30 14:35:49 crc kubenswrapper[5039]: I0130 14:35:49.836438 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5bc0ac40-f14d-45cb-b7de-87599e7cce2c-scripts\") pod \"5bc0ac40-f14d-45cb-b7de-87599e7cce2c\" (UID: \"5bc0ac40-f14d-45cb-b7de-87599e7cce2c\") " Jan 30 14:35:49 crc kubenswrapper[5039]: I0130 14:35:49.836543 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bc0ac40-f14d-45cb-b7de-87599e7cce2c-combined-ca-bundle\") pod \"5bc0ac40-f14d-45cb-b7de-87599e7cce2c\" (UID: \"5bc0ac40-f14d-45cb-b7de-87599e7cce2c\") " Jan 30 14:35:49 crc kubenswrapper[5039]: I0130 14:35:49.836579 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-th2t7\" (UniqueName: \"kubernetes.io/projected/5bc0ac40-f14d-45cb-b7de-87599e7cce2c-kube-api-access-th2t7\") pod \"5bc0ac40-f14d-45cb-b7de-87599e7cce2c\" (UID: \"5bc0ac40-f14d-45cb-b7de-87599e7cce2c\") " Jan 30 14:35:49 crc kubenswrapper[5039]: I0130 14:35:49.836671 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bc0ac40-f14d-45cb-b7de-87599e7cce2c-logs\") pod \"5bc0ac40-f14d-45cb-b7de-87599e7cce2c\" (UID: \"5bc0ac40-f14d-45cb-b7de-87599e7cce2c\") " Jan 30 14:35:49 crc kubenswrapper[5039]: I0130 14:35:49.836736 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bc0ac40-f14d-45cb-b7de-87599e7cce2c-config-data\") pod \"5bc0ac40-f14d-45cb-b7de-87599e7cce2c\" (UID: \"5bc0ac40-f14d-45cb-b7de-87599e7cce2c\") " Jan 30 14:35:49 crc kubenswrapper[5039]: I0130 14:35:49.838234 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bc0ac40-f14d-45cb-b7de-87599e7cce2c-logs" (OuterVolumeSpecName: "logs") pod "5bc0ac40-f14d-45cb-b7de-87599e7cce2c" (UID: "5bc0ac40-f14d-45cb-b7de-87599e7cce2c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:35:49 crc kubenswrapper[5039]: I0130 14:35:49.842121 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bc0ac40-f14d-45cb-b7de-87599e7cce2c-scripts" (OuterVolumeSpecName: "scripts") pod "5bc0ac40-f14d-45cb-b7de-87599e7cce2c" (UID: "5bc0ac40-f14d-45cb-b7de-87599e7cce2c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:35:49 crc kubenswrapper[5039]: I0130 14:35:49.848309 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bc0ac40-f14d-45cb-b7de-87599e7cce2c-kube-api-access-th2t7" (OuterVolumeSpecName: "kube-api-access-th2t7") pod "5bc0ac40-f14d-45cb-b7de-87599e7cce2c" (UID: "5bc0ac40-f14d-45cb-b7de-87599e7cce2c"). InnerVolumeSpecName "kube-api-access-th2t7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:35:49 crc kubenswrapper[5039]: I0130 14:35:49.860713 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bc0ac40-f14d-45cb-b7de-87599e7cce2c-config-data" (OuterVolumeSpecName: "config-data") pod "5bc0ac40-f14d-45cb-b7de-87599e7cce2c" (UID: "5bc0ac40-f14d-45cb-b7de-87599e7cce2c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:35:49 crc kubenswrapper[5039]: I0130 14:35:49.862911 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bc0ac40-f14d-45cb-b7de-87599e7cce2c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5bc0ac40-f14d-45cb-b7de-87599e7cce2c" (UID: "5bc0ac40-f14d-45cb-b7de-87599e7cce2c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:35:49 crc kubenswrapper[5039]: I0130 14:35:49.937863 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5bc0ac40-f14d-45cb-b7de-87599e7cce2c-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:49 crc kubenswrapper[5039]: I0130 14:35:49.937893 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bc0ac40-f14d-45cb-b7de-87599e7cce2c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:49 crc kubenswrapper[5039]: I0130 14:35:49.937902 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-th2t7\" (UniqueName: \"kubernetes.io/projected/5bc0ac40-f14d-45cb-b7de-87599e7cce2c-kube-api-access-th2t7\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:49 crc kubenswrapper[5039]: I0130 14:35:49.937911 5039 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bc0ac40-f14d-45cb-b7de-87599e7cce2c-logs\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:49 crc kubenswrapper[5039]: I0130 14:35:49.937918 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bc0ac40-f14d-45cb-b7de-87599e7cce2c-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:50 crc kubenswrapper[5039]: I0130 14:35:50.397996 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8zmlz" event={"ID":"5bc0ac40-f14d-45cb-b7de-87599e7cce2c","Type":"ContainerDied","Data":"3d5974d384e8ffee562976a84272d1fd85c24b5d7ea8fb86eaf4010b7687e005"} Jan 30 14:35:50 crc kubenswrapper[5039]: I0130 14:35:50.398678 5039 pod_container_deletor.go:80] "Container not found in 
pod's containers" containerID="3d5974d384e8ffee562976a84272d1fd85c24b5d7ea8fb86eaf4010b7687e005" Jan 30 14:35:50 crc kubenswrapper[5039]: I0130 14:35:50.398042 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-8zmlz" Jan 30 14:35:50 crc kubenswrapper[5039]: I0130 14:35:50.401673 5039 generic.go:334] "Generic (PLEG): container finished" podID="9da44e6e-4dc4-4e63-98f5-fc5713234ea3" containerID="a4572ee24ad8a517bbb60f4b6c3421b722cd252eaf69d162c59bf35ea3bf1724" exitCode=0 Jan 30 14:35:50 crc kubenswrapper[5039]: I0130 14:35:50.401702 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s9hgl" event={"ID":"9da44e6e-4dc4-4e63-98f5-fc5713234ea3","Type":"ContainerDied","Data":"a4572ee24ad8a517bbb60f4b6c3421b722cd252eaf69d162c59bf35ea3bf1724"} Jan 30 14:35:50 crc kubenswrapper[5039]: I0130 14:35:50.487703 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5d5974d948-2v2hn"] Jan 30 14:35:50 crc kubenswrapper[5039]: E0130 14:35:50.488136 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bc0ac40-f14d-45cb-b7de-87599e7cce2c" containerName="placement-db-sync" Jan 30 14:35:50 crc kubenswrapper[5039]: I0130 14:35:50.488154 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bc0ac40-f14d-45cb-b7de-87599e7cce2c" containerName="placement-db-sync" Jan 30 14:35:50 crc kubenswrapper[5039]: I0130 14:35:50.488350 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bc0ac40-f14d-45cb-b7de-87599e7cce2c" containerName="placement-db-sync" Jan 30 14:35:50 crc kubenswrapper[5039]: I0130 14:35:50.489206 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5d5974d948-2v2hn" Jan 30 14:35:50 crc kubenswrapper[5039]: I0130 14:35:50.491183 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-d5vhk" Jan 30 14:35:50 crc kubenswrapper[5039]: I0130 14:35:50.491312 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 30 14:35:50 crc kubenswrapper[5039]: I0130 14:35:50.491648 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 30 14:35:50 crc kubenswrapper[5039]: I0130 14:35:50.504167 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5d5974d948-2v2hn"] Jan 30 14:35:50 crc kubenswrapper[5039]: I0130 14:35:50.651645 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4fa210a-8256-4fb5-9985-3d09a3495072-config-data\") pod \"placement-5d5974d948-2v2hn\" (UID: \"b4fa210a-8256-4fb5-9985-3d09a3495072\") " pod="openstack/placement-5d5974d948-2v2hn" Jan 30 14:35:50 crc kubenswrapper[5039]: I0130 14:35:50.651780 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4fa210a-8256-4fb5-9985-3d09a3495072-logs\") pod \"placement-5d5974d948-2v2hn\" (UID: \"b4fa210a-8256-4fb5-9985-3d09a3495072\") " pod="openstack/placement-5d5974d948-2v2hn" Jan 30 14:35:50 crc kubenswrapper[5039]: I0130 14:35:50.651803 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4fa210a-8256-4fb5-9985-3d09a3495072-scripts\") pod \"placement-5d5974d948-2v2hn\" (UID: 
\"b4fa210a-8256-4fb5-9985-3d09a3495072\") " pod="openstack/placement-5d5974d948-2v2hn" Jan 30 14:35:50 crc kubenswrapper[5039]: I0130 14:35:50.651836 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jqfv\" (UniqueName: \"kubernetes.io/projected/b4fa210a-8256-4fb5-9985-3d09a3495072-kube-api-access-6jqfv\") pod \"placement-5d5974d948-2v2hn\" (UID: \"b4fa210a-8256-4fb5-9985-3d09a3495072\") " pod="openstack/placement-5d5974d948-2v2hn" Jan 30 14:35:50 crc kubenswrapper[5039]: I0130 14:35:50.651943 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4fa210a-8256-4fb5-9985-3d09a3495072-combined-ca-bundle\") pod \"placement-5d5974d948-2v2hn\" (UID: \"b4fa210a-8256-4fb5-9985-3d09a3495072\") " pod="openstack/placement-5d5974d948-2v2hn" Jan 30 14:35:50 crc kubenswrapper[5039]: I0130 14:35:50.753254 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4fa210a-8256-4fb5-9985-3d09a3495072-combined-ca-bundle\") pod \"placement-5d5974d948-2v2hn\" (UID: \"b4fa210a-8256-4fb5-9985-3d09a3495072\") " pod="openstack/placement-5d5974d948-2v2hn" Jan 30 14:35:50 crc kubenswrapper[5039]: I0130 14:35:50.753343 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4fa210a-8256-4fb5-9985-3d09a3495072-config-data\") pod \"placement-5d5974d948-2v2hn\" (UID: \"b4fa210a-8256-4fb5-9985-3d09a3495072\") " pod="openstack/placement-5d5974d948-2v2hn" Jan 30 14:35:50 crc kubenswrapper[5039]: I0130 14:35:50.753402 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4fa210a-8256-4fb5-9985-3d09a3495072-logs\") pod \"placement-5d5974d948-2v2hn\" (UID: \"b4fa210a-8256-4fb5-9985-3d09a3495072\") " pod="openstack/placement-5d5974d948-2v2hn" Jan 30 14:35:50 crc kubenswrapper[5039]: I0130 14:35:50.753422 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4fa210a-8256-4fb5-9985-3d09a3495072-scripts\") pod \"placement-5d5974d948-2v2hn\" (UID: \"b4fa210a-8256-4fb5-9985-3d09a3495072\") " pod="openstack/placement-5d5974d948-2v2hn" Jan 30 14:35:50 crc kubenswrapper[5039]: I0130 14:35:50.753455 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jqfv\" (UniqueName: \"kubernetes.io/projected/b4fa210a-8256-4fb5-9985-3d09a3495072-kube-api-access-6jqfv\") pod \"placement-5d5974d948-2v2hn\" (UID: \"b4fa210a-8256-4fb5-9985-3d09a3495072\") " pod="openstack/placement-5d5974d948-2v2hn" Jan 30 14:35:50 crc kubenswrapper[5039]: I0130 14:35:50.753995 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4fa210a-8256-4fb5-9985-3d09a3495072-logs\") pod \"placement-5d5974d948-2v2hn\" (UID: \"b4fa210a-8256-4fb5-9985-3d09a3495072\") " pod="openstack/placement-5d5974d948-2v2hn" Jan 30 14:35:50 crc kubenswrapper[5039]: I0130 14:35:50.759071 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4fa210a-8256-4fb5-9985-3d09a3495072-config-data\") pod \"placement-5d5974d948-2v2hn\" (UID: \"b4fa210a-8256-4fb5-9985-3d09a3495072\") " pod="openstack/placement-5d5974d948-2v2hn" Jan 30 14:35:50 
crc kubenswrapper[5039]: I0130 14:35:50.759081 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4fa210a-8256-4fb5-9985-3d09a3495072-combined-ca-bundle\") pod \"placement-5d5974d948-2v2hn\" (UID: \"b4fa210a-8256-4fb5-9985-3d09a3495072\") " pod="openstack/placement-5d5974d948-2v2hn" Jan 30 14:35:50 crc kubenswrapper[5039]: I0130 14:35:50.763719 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4fa210a-8256-4fb5-9985-3d09a3495072-scripts\") pod \"placement-5d5974d948-2v2hn\" (UID: \"b4fa210a-8256-4fb5-9985-3d09a3495072\") " pod="openstack/placement-5d5974d948-2v2hn" Jan 30 14:35:50 crc kubenswrapper[5039]: I0130 14:35:50.776443 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jqfv\" (UniqueName: \"kubernetes.io/projected/b4fa210a-8256-4fb5-9985-3d09a3495072-kube-api-access-6jqfv\") pod \"placement-5d5974d948-2v2hn\" (UID: \"b4fa210a-8256-4fb5-9985-3d09a3495072\") " pod="openstack/placement-5d5974d948-2v2hn" Jan 30 14:35:50 crc kubenswrapper[5039]: I0130 14:35:50.836333 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5d5974d948-2v2hn" Jan 30 14:35:51 crc kubenswrapper[5039]: I0130 14:35:51.289898 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5d5974d948-2v2hn"] Jan 30 14:35:51 crc kubenswrapper[5039]: W0130 14:35:51.293155 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4fa210a_8256_4fb5_9985_3d09a3495072.slice/crio-74b6c87f186bb166542c9dc41699c7488ab2b7afe5a46c2b9400268ba16ada83 WatchSource:0}: Error finding container 74b6c87f186bb166542c9dc41699c7488ab2b7afe5a46c2b9400268ba16ada83: Status 404 returned error can't find the container with id 74b6c87f186bb166542c9dc41699c7488ab2b7afe5a46c2b9400268ba16ada83 Jan 30 14:35:51 crc kubenswrapper[5039]: I0130 14:35:51.414959 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5d5974d948-2v2hn" event={"ID":"b4fa210a-8256-4fb5-9985-3d09a3495072","Type":"ContainerStarted","Data":"74b6c87f186bb166542c9dc41699c7488ab2b7afe5a46c2b9400268ba16ada83"} Jan 30 14:35:51 crc kubenswrapper[5039]: I0130 14:35:51.421132 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s9hgl" event={"ID":"9da44e6e-4dc4-4e63-98f5-fc5713234ea3","Type":"ContainerStarted","Data":"4f95ec4ed1068c3746f908753adba6e31643f251f03792e9a359f51ed42917de"} Jan 30 14:35:51 crc kubenswrapper[5039]: I0130 14:35:51.455733 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-s9hgl" podStartSLOduration=3.05659625 podStartE2EDuration="5.455713353s" podCreationTimestamp="2026-01-30 14:35:46 +0000 UTC" firstStartedPulling="2026-01-30 14:35:48.380505872 +0000 UTC m=+5513.041187099" lastFinishedPulling="2026-01-30 14:35:50.779622975 +0000 UTC m=+5515.440304202" observedRunningTime="2026-01-30 14:35:51.443507292 +0000 UTC m=+5516.104188519" watchObservedRunningTime="2026-01-30 14:35:51.455713353 +0000 UTC m=+5516.116394570" Jan 30 14:35:52 crc kubenswrapper[5039]: I0130 14:35:52.434053 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5d5974d948-2v2hn" 
event={"ID":"b4fa210a-8256-4fb5-9985-3d09a3495072","Type":"ContainerStarted","Data":"23e3c367990deb90f4cd338f2e7402addfe9e9f9ac5aa0ea432bfa8875814c9c"} Jan 30 14:35:52 crc kubenswrapper[5039]: I0130 14:35:52.434424 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5d5974d948-2v2hn" Jan 30 14:35:52 crc kubenswrapper[5039]: I0130 14:35:52.434445 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5d5974d948-2v2hn" Jan 30 14:35:52 crc kubenswrapper[5039]: I0130 14:35:52.434458 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5d5974d948-2v2hn" event={"ID":"b4fa210a-8256-4fb5-9985-3d09a3495072","Type":"ContainerStarted","Data":"0841df6c9f44c746e39ecdef60cd1a88cab16d2ecab872ea9590377390d8f31a"} Jan 30 14:35:52 crc kubenswrapper[5039]: I0130 14:35:52.453356 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5d5974d948-2v2hn" podStartSLOduration=2.453336002 podStartE2EDuration="2.453336002s" podCreationTimestamp="2026-01-30 14:35:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:35:52.452008296 +0000 UTC m=+5517.112689523" watchObservedRunningTime="2026-01-30 14:35:52.453336002 +0000 UTC m=+5517.114017229" Jan 30 14:35:55 crc kubenswrapper[5039]: I0130 14:35:55.903993 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-df5c4d669-gcsl9" Jan 30 14:35:55 crc kubenswrapper[5039]: I0130 14:35:55.970161 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7674b98d57-zbz7k"] Jan 30 14:35:55 crc kubenswrapper[5039]: I0130 14:35:55.970639 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7674b98d57-zbz7k" podUID="48aca6bb-748d-4aca-acbf-77a53fe8bfa6" containerName="dnsmasq-dns" containerID="cri-o://63816daf2d92ffb0ab9f7ce5d9069aeec1905c7b9cfe66dd6307a6341e2f27c0" gracePeriod=10 Jan 30 14:35:56 crc kubenswrapper[5039]: I0130 14:35:56.469096 5039 generic.go:334] "Generic (PLEG): container finished" podID="48aca6bb-748d-4aca-acbf-77a53fe8bfa6" containerID="63816daf2d92ffb0ab9f7ce5d9069aeec1905c7b9cfe66dd6307a6341e2f27c0" exitCode=0 Jan 30 14:35:56 crc kubenswrapper[5039]: I0130 14:35:56.469180 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7674b98d57-zbz7k" event={"ID":"48aca6bb-748d-4aca-acbf-77a53fe8bfa6","Type":"ContainerDied","Data":"63816daf2d92ffb0ab9f7ce5d9069aeec1905c7b9cfe66dd6307a6341e2f27c0"} Jan 30 14:35:56 crc kubenswrapper[5039]: I0130 14:35:56.906089 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7674b98d57-zbz7k" Jan 30 14:35:56 crc kubenswrapper[5039]: I0130 14:35:56.920222 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-s9hgl" Jan 30 14:35:56 crc kubenswrapper[5039]: I0130 14:35:56.920269 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-s9hgl" Jan 30 14:35:56 crc kubenswrapper[5039]: I0130 14:35:56.971349 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-s9hgl" Jan 30 14:35:57 crc kubenswrapper[5039]: I0130 14:35:57.069217 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48aca6bb-748d-4aca-acbf-77a53fe8bfa6-ovsdbserver-nb\") pod \"48aca6bb-748d-4aca-acbf-77a53fe8bfa6\" (UID: \"48aca6bb-748d-4aca-acbf-77a53fe8bfa6\") " Jan 30 14:35:57 crc kubenswrapper[5039]: I0130 14:35:57.069276 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n74jg\" (UniqueName: \"kubernetes.io/projected/48aca6bb-748d-4aca-acbf-77a53fe8bfa6-kube-api-access-n74jg\") pod \"48aca6bb-748d-4aca-acbf-77a53fe8bfa6\" (UID: \"48aca6bb-748d-4aca-acbf-77a53fe8bfa6\") " Jan 30 14:35:57 crc kubenswrapper[5039]: I0130 14:35:57.069320 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/48aca6bb-748d-4aca-acbf-77a53fe8bfa6-ovsdbserver-sb\") pod \"48aca6bb-748d-4aca-acbf-77a53fe8bfa6\" (UID: \"48aca6bb-748d-4aca-acbf-77a53fe8bfa6\") " Jan 30 14:35:57 crc kubenswrapper[5039]: I0130 14:35:57.069507 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48aca6bb-748d-4aca-acbf-77a53fe8bfa6-dns-svc\") pod \"48aca6bb-748d-4aca-acbf-77a53fe8bfa6\" (UID: \"48aca6bb-748d-4aca-acbf-77a53fe8bfa6\") " Jan 30 14:35:57 crc kubenswrapper[5039]: I0130 14:35:57.069528 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48aca6bb-748d-4aca-acbf-77a53fe8bfa6-config\") pod \"48aca6bb-748d-4aca-acbf-77a53fe8bfa6\" (UID: \"48aca6bb-748d-4aca-acbf-77a53fe8bfa6\") " Jan 30 14:35:57 crc kubenswrapper[5039]: I0130 14:35:57.074631 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48aca6bb-748d-4aca-acbf-77a53fe8bfa6-kube-api-access-n74jg" (OuterVolumeSpecName: "kube-api-access-n74jg") pod "48aca6bb-748d-4aca-acbf-77a53fe8bfa6" (UID: "48aca6bb-748d-4aca-acbf-77a53fe8bfa6"). InnerVolumeSpecName "kube-api-access-n74jg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:35:57 crc kubenswrapper[5039]: I0130 14:35:57.110634 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48aca6bb-748d-4aca-acbf-77a53fe8bfa6-config" (OuterVolumeSpecName: "config") pod "48aca6bb-748d-4aca-acbf-77a53fe8bfa6" (UID: "48aca6bb-748d-4aca-acbf-77a53fe8bfa6"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:35:57 crc kubenswrapper[5039]: I0130 14:35:57.111150 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48aca6bb-748d-4aca-acbf-77a53fe8bfa6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "48aca6bb-748d-4aca-acbf-77a53fe8bfa6" (UID: "48aca6bb-748d-4aca-acbf-77a53fe8bfa6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:35:57 crc kubenswrapper[5039]: I0130 14:35:57.115684 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48aca6bb-748d-4aca-acbf-77a53fe8bfa6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "48aca6bb-748d-4aca-acbf-77a53fe8bfa6" (UID: "48aca6bb-748d-4aca-acbf-77a53fe8bfa6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:35:57 crc kubenswrapper[5039]: I0130 14:35:57.117743 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48aca6bb-748d-4aca-acbf-77a53fe8bfa6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "48aca6bb-748d-4aca-acbf-77a53fe8bfa6" (UID: "48aca6bb-748d-4aca-acbf-77a53fe8bfa6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:35:57 crc kubenswrapper[5039]: I0130 14:35:57.176899 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48aca6bb-748d-4aca-acbf-77a53fe8bfa6-config\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:57 crc kubenswrapper[5039]: I0130 14:35:57.176940 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48aca6bb-748d-4aca-acbf-77a53fe8bfa6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:57 crc kubenswrapper[5039]: I0130 14:35:57.176952 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n74jg\" (UniqueName: \"kubernetes.io/projected/48aca6bb-748d-4aca-acbf-77a53fe8bfa6-kube-api-access-n74jg\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:57 crc kubenswrapper[5039]: I0130 14:35:57.176969 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/48aca6bb-748d-4aca-acbf-77a53fe8bfa6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:57 crc kubenswrapper[5039]: I0130 14:35:57.176986 5039 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48aca6bb-748d-4aca-acbf-77a53fe8bfa6-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:57 crc kubenswrapper[5039]: I0130 14:35:57.479472 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7674b98d57-zbz7k" event={"ID":"48aca6bb-748d-4aca-acbf-77a53fe8bfa6","Type":"ContainerDied","Data":"4aec4a62fd46375d22af26652efc5e45aa8b53de0320c7051886743907643bd3"} Jan 30 14:35:57 crc kubenswrapper[5039]: I0130 14:35:57.479503 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7674b98d57-zbz7k" Jan 30 14:35:57 crc kubenswrapper[5039]: I0130 14:35:57.479546 5039 scope.go:117] "RemoveContainer" containerID="63816daf2d92ffb0ab9f7ce5d9069aeec1905c7b9cfe66dd6307a6341e2f27c0" Jan 30 14:35:57 crc kubenswrapper[5039]: I0130 14:35:57.500017 5039 scope.go:117] "RemoveContainer" containerID="5c3e91cd1eefc38b9a6a949dadc03d3fcbd57d5da67d30e2933ddbeda92ffe6f" Jan 30 14:35:57 crc kubenswrapper[5039]: I0130 14:35:57.516191 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7674b98d57-zbz7k"] Jan 30 14:35:57 crc kubenswrapper[5039]: I0130 14:35:57.524096 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7674b98d57-zbz7k"] Jan 30 14:35:57 crc kubenswrapper[5039]: I0130 14:35:57.527923 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-s9hgl" Jan 30 14:35:57 crc kubenswrapper[5039]: I0130 14:35:57.579407 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s9hgl"] Jan 30 14:35:58 crc kubenswrapper[5039]: I0130 14:35:58.103351 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48aca6bb-748d-4aca-acbf-77a53fe8bfa6" path="/var/lib/kubelet/pods/48aca6bb-748d-4aca-acbf-77a53fe8bfa6/volumes" Jan 30 14:35:59 crc kubenswrapper[5039]: I0130 14:35:59.501420 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-s9hgl" podUID="9da44e6e-4dc4-4e63-98f5-fc5713234ea3" containerName="registry-server" containerID="cri-o://4f95ec4ed1068c3746f908753adba6e31643f251f03792e9a359f51ed42917de" gracePeriod=2 Jan 30 14:36:00 crc kubenswrapper[5039]: I0130 14:36:00.011755 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s9hgl" Jan 30 14:36:00 crc kubenswrapper[5039]: I0130 14:36:00.093340 5039 scope.go:117] "RemoveContainer" containerID="33707bf9f6c082f37a2c677d559a1772be55398c970c4d16a90343a477a0fad4" Jan 30 14:36:00 crc kubenswrapper[5039]: E0130 14:36:00.093597 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:36:00 crc kubenswrapper[5039]: I0130 14:36:00.121484 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfmdx\" (UniqueName: \"kubernetes.io/projected/9da44e6e-4dc4-4e63-98f5-fc5713234ea3-kube-api-access-hfmdx\") pod \"9da44e6e-4dc4-4e63-98f5-fc5713234ea3\" (UID: \"9da44e6e-4dc4-4e63-98f5-fc5713234ea3\") " Jan 30 14:36:00 crc kubenswrapper[5039]: I0130 14:36:00.121781 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9da44e6e-4dc4-4e63-98f5-fc5713234ea3-catalog-content\") pod \"9da44e6e-4dc4-4e63-98f5-fc5713234ea3\" (UID: \"9da44e6e-4dc4-4e63-98f5-fc5713234ea3\") " Jan 30 14:36:00 crc kubenswrapper[5039]: I0130 14:36:00.122067 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9da44e6e-4dc4-4e63-98f5-fc5713234ea3-utilities\") pod \"9da44e6e-4dc4-4e63-98f5-fc5713234ea3\" (UID: \"9da44e6e-4dc4-4e63-98f5-fc5713234ea3\") " Jan 30 14:36:00 crc kubenswrapper[5039]: I0130 14:36:00.122729 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9da44e6e-4dc4-4e63-98f5-fc5713234ea3-utilities" (OuterVolumeSpecName: "utilities") pod "9da44e6e-4dc4-4e63-98f5-fc5713234ea3" (UID: "9da44e6e-4dc4-4e63-98f5-fc5713234ea3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:36:00 crc kubenswrapper[5039]: I0130 14:36:00.136751 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9da44e6e-4dc4-4e63-98f5-fc5713234ea3-kube-api-access-hfmdx" (OuterVolumeSpecName: "kube-api-access-hfmdx") pod "9da44e6e-4dc4-4e63-98f5-fc5713234ea3" (UID: "9da44e6e-4dc4-4e63-98f5-fc5713234ea3"). InnerVolumeSpecName "kube-api-access-hfmdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:36:00 crc kubenswrapper[5039]: I0130 14:36:00.176569 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9da44e6e-4dc4-4e63-98f5-fc5713234ea3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9da44e6e-4dc4-4e63-98f5-fc5713234ea3" (UID: "9da44e6e-4dc4-4e63-98f5-fc5713234ea3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:36:00 crc kubenswrapper[5039]: I0130 14:36:00.226591 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9da44e6e-4dc4-4e63-98f5-fc5713234ea3-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 14:36:00 crc kubenswrapper[5039]: I0130 14:36:00.226640 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfmdx\" (UniqueName: \"kubernetes.io/projected/9da44e6e-4dc4-4e63-98f5-fc5713234ea3-kube-api-access-hfmdx\") on node \"crc\" DevicePath \"\"" Jan 30 14:36:00 crc kubenswrapper[5039]: I0130 14:36:00.226655 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9da44e6e-4dc4-4e63-98f5-fc5713234ea3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 14:36:00 crc kubenswrapper[5039]: I0130 14:36:00.512738 5039 generic.go:334] "Generic (PLEG): container finished" podID="9da44e6e-4dc4-4e63-98f5-fc5713234ea3" containerID="4f95ec4ed1068c3746f908753adba6e31643f251f03792e9a359f51ed42917de" exitCode=0 Jan 30 14:36:00 crc kubenswrapper[5039]: I0130 14:36:00.512791 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s9hgl" event={"ID":"9da44e6e-4dc4-4e63-98f5-fc5713234ea3","Type":"ContainerDied","Data":"4f95ec4ed1068c3746f908753adba6e31643f251f03792e9a359f51ed42917de"} Jan 30 14:36:00 crc kubenswrapper[5039]: I0130 14:36:00.512819 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s9hgl" event={"ID":"9da44e6e-4dc4-4e63-98f5-fc5713234ea3","Type":"ContainerDied","Data":"db69cef6ae201da7065091778067dd429c823e60ca994042ac18e757bc8d4222"} Jan 30 14:36:00 crc kubenswrapper[5039]: I0130 14:36:00.512838 5039 scope.go:117] "RemoveContainer" containerID="4f95ec4ed1068c3746f908753adba6e31643f251f03792e9a359f51ed42917de" Jan 30 14:36:00 crc kubenswrapper[5039]: I0130 14:36:00.512855 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s9hgl" Jan 30 14:36:00 crc kubenswrapper[5039]: I0130 14:36:00.535412 5039 scope.go:117] "RemoveContainer" containerID="a4572ee24ad8a517bbb60f4b6c3421b722cd252eaf69d162c59bf35ea3bf1724" Jan 30 14:36:00 crc kubenswrapper[5039]: I0130 14:36:00.548713 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s9hgl"] Jan 30 14:36:00 crc kubenswrapper[5039]: I0130 14:36:00.565923 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-s9hgl"] Jan 30 14:36:00 crc kubenswrapper[5039]: I0130 14:36:00.566431 5039 scope.go:117] "RemoveContainer" containerID="2cf09bf5e9137604ff48844cdae9ac0f34eb465c070481ae46eb0d9c20f26c06" Jan 30 14:36:00 crc kubenswrapper[5039]: I0130 14:36:00.603145 5039 scope.go:117] "RemoveContainer" containerID="4f95ec4ed1068c3746f908753adba6e31643f251f03792e9a359f51ed42917de" Jan 30 14:36:00 crc kubenswrapper[5039]: E0130 14:36:00.603658 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f95ec4ed1068c3746f908753adba6e31643f251f03792e9a359f51ed42917de\": container with ID starting with 4f95ec4ed1068c3746f908753adba6e31643f251f03792e9a359f51ed42917de not found: ID does not exist" containerID="4f95ec4ed1068c3746f908753adba6e31643f251f03792e9a359f51ed42917de" Jan 30 14:36:00 crc kubenswrapper[5039]: I0130 14:36:00.603774 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f95ec4ed1068c3746f908753adba6e31643f251f03792e9a359f51ed42917de"} err="failed to get container status \"4f95ec4ed1068c3746f908753adba6e31643f251f03792e9a359f51ed42917de\": rpc error: code = NotFound desc = could not find container \"4f95ec4ed1068c3746f908753adba6e31643f251f03792e9a359f51ed42917de\": container with ID starting with 4f95ec4ed1068c3746f908753adba6e31643f251f03792e9a359f51ed42917de not found: ID does not exist" Jan 30 14:36:00 crc kubenswrapper[5039]: I0130 14:36:00.603856 5039 scope.go:117] "RemoveContainer" containerID="a4572ee24ad8a517bbb60f4b6c3421b722cd252eaf69d162c59bf35ea3bf1724" Jan 30 14:36:00 crc kubenswrapper[5039]: E0130 14:36:00.604312 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4572ee24ad8a517bbb60f4b6c3421b722cd252eaf69d162c59bf35ea3bf1724\": container with ID starting with a4572ee24ad8a517bbb60f4b6c3421b722cd252eaf69d162c59bf35ea3bf1724 not found: ID does not exist" containerID="a4572ee24ad8a517bbb60f4b6c3421b722cd252eaf69d162c59bf35ea3bf1724" Jan 30 14:36:00 crc kubenswrapper[5039]: I0130 14:36:00.604358 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4572ee24ad8a517bbb60f4b6c3421b722cd252eaf69d162c59bf35ea3bf1724"} err="failed to get container status \"a4572ee24ad8a517bbb60f4b6c3421b722cd252eaf69d162c59bf35ea3bf1724\": rpc error: code = NotFound desc = could not find container \"a4572ee24ad8a517bbb60f4b6c3421b722cd252eaf69d162c59bf35ea3bf1724\": container with ID starting with a4572ee24ad8a517bbb60f4b6c3421b722cd252eaf69d162c59bf35ea3bf1724 not found: ID does not exist" Jan 30 14:36:00 crc kubenswrapper[5039]: I0130 14:36:00.604390 5039 scope.go:117] "RemoveContainer" containerID="2cf09bf5e9137604ff48844cdae9ac0f34eb465c070481ae46eb0d9c20f26c06" Jan 30 14:36:00 crc kubenswrapper[5039]: E0130 14:36:00.604732 5039 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"2cf09bf5e9137604ff48844cdae9ac0f34eb465c070481ae46eb0d9c20f26c06\": container with ID starting with 2cf09bf5e9137604ff48844cdae9ac0f34eb465c070481ae46eb0d9c20f26c06 not found: ID does not exist" containerID="2cf09bf5e9137604ff48844cdae9ac0f34eb465c070481ae46eb0d9c20f26c06" Jan 30 14:36:00 crc kubenswrapper[5039]: I0130 14:36:00.604768 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cf09bf5e9137604ff48844cdae9ac0f34eb465c070481ae46eb0d9c20f26c06"} err="failed to get container status \"2cf09bf5e9137604ff48844cdae9ac0f34eb465c070481ae46eb0d9c20f26c06\": rpc error: code = NotFound desc = could not find container \"2cf09bf5e9137604ff48844cdae9ac0f34eb465c070481ae46eb0d9c20f26c06\": container with ID starting with 2cf09bf5e9137604ff48844cdae9ac0f34eb465c070481ae46eb0d9c20f26c06 not found: ID does not exist" Jan 30 14:36:02 crc kubenswrapper[5039]: I0130 14:36:02.104298 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9da44e6e-4dc4-4e63-98f5-fc5713234ea3" path="/var/lib/kubelet/pods/9da44e6e-4dc4-4e63-98f5-fc5713234ea3/volumes" Jan 30 14:36:09 crc kubenswrapper[5039]: I0130 14:36:09.434939 5039 scope.go:117] "RemoveContainer" containerID="c7525f286ced61acac6cb9f4db71533bcae2d083ff6237893318ae1a69940aae" Jan 30 14:36:09 crc kubenswrapper[5039]: I0130 14:36:09.467914 5039 scope.go:117] "RemoveContainer" containerID="6d139bd332131964580b1e3138992feb7c0966267055d10912d55a2d1fb39762" Jan 30 14:36:12 crc kubenswrapper[5039]: I0130 14:36:12.093863 5039 scope.go:117] "RemoveContainer" containerID="33707bf9f6c082f37a2c677d559a1772be55398c970c4d16a90343a477a0fad4" Jan 30 14:36:12 crc kubenswrapper[5039]: E0130 14:36:12.094542 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:36:22 crc kubenswrapper[5039]: I0130 14:36:22.130659 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5d5974d948-2v2hn" Jan 30 14:36:23 crc kubenswrapper[5039]: I0130 14:36:23.171030 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5d5974d948-2v2hn" Jan 30 14:36:26 crc kubenswrapper[5039]: I0130 14:36:26.106206 5039 scope.go:117] "RemoveContainer" containerID="33707bf9f6c082f37a2c677d559a1772be55398c970c4d16a90343a477a0fad4" Jan 30 14:36:26 crc kubenswrapper[5039]: E0130 14:36:26.107111 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:36:41 crc kubenswrapper[5039]: I0130 14:36:41.093943 5039 scope.go:117] "RemoveContainer" containerID="33707bf9f6c082f37a2c677d559a1772be55398c970c4d16a90343a477a0fad4" Jan 30 14:36:41 crc kubenswrapper[5039]: I0130 14:36:41.850609 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerStarted","Data":"0d114dadbe14f3b8f66cb4c1a192ea2be2c5b28f729a330aa23afe91758bdd3f"} Jan 30 14:37:09 crc kubenswrapper[5039]: I0130 14:37:09.626345 5039 scope.go:117] "RemoveContainer" containerID="8e7fba536a328a45f55b8ae822641c635aa4411c762219a26ab38d44700ef047" Jan 30 14:37:28 crc kubenswrapper[5039]: I0130 14:37:28.517981 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-bm2kn/must-gather-2252c"] Jan 30 14:37:28 crc kubenswrapper[5039]: E0130 14:37:28.519217 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48aca6bb-748d-4aca-acbf-77a53fe8bfa6" containerName="dnsmasq-dns" Jan 30 14:37:28 crc kubenswrapper[5039]: I0130 14:37:28.519237 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="48aca6bb-748d-4aca-acbf-77a53fe8bfa6" containerName="dnsmasq-dns" Jan 30 14:37:28 crc kubenswrapper[5039]: E0130 14:37:28.519266 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9da44e6e-4dc4-4e63-98f5-fc5713234ea3" containerName="registry-server" Jan 30 14:37:28 crc kubenswrapper[5039]: I0130 14:37:28.519275 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="9da44e6e-4dc4-4e63-98f5-fc5713234ea3" containerName="registry-server" Jan 30 14:37:28 crc kubenswrapper[5039]: E0130 14:37:28.519286 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9da44e6e-4dc4-4e63-98f5-fc5713234ea3" containerName="extract-utilities" Jan 30 14:37:28 crc kubenswrapper[5039]: I0130 14:37:28.519295 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="9da44e6e-4dc4-4e63-98f5-fc5713234ea3" containerName="extract-utilities" Jan 30 14:37:28 crc kubenswrapper[5039]: E0130 14:37:28.519321 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9da44e6e-4dc4-4e63-98f5-fc5713234ea3" containerName="extract-content" Jan 30 14:37:28 crc kubenswrapper[5039]: I0130 14:37:28.519344 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="9da44e6e-4dc4-4e63-98f5-fc5713234ea3" containerName="extract-content" Jan 30 14:37:28 crc kubenswrapper[5039]: E0130 14:37:28.519365 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48aca6bb-748d-4aca-acbf-77a53fe8bfa6" containerName="init" Jan 30 14:37:28 crc kubenswrapper[5039]: I0130 14:37:28.519374 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="48aca6bb-748d-4aca-acbf-77a53fe8bfa6" containerName="init" Jan 30 14:37:28 crc kubenswrapper[5039]: I0130 14:37:28.519623 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="48aca6bb-748d-4aca-acbf-77a53fe8bfa6" containerName="dnsmasq-dns" Jan 30 14:37:28 crc kubenswrapper[5039]: I0130 14:37:28.519646 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="9da44e6e-4dc4-4e63-98f5-fc5713234ea3" containerName="registry-server" Jan 30 14:37:28 crc kubenswrapper[5039]: I0130 14:37:28.520966 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bm2kn/must-gather-2252c" Jan 30 14:37:28 crc kubenswrapper[5039]: I0130 14:37:28.525392 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-bm2kn"/"openshift-service-ca.crt" Jan 30 14:37:28 crc kubenswrapper[5039]: I0130 14:37:28.525652 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-bm2kn"/"kube-root-ca.crt" Jan 30 14:37:28 crc kubenswrapper[5039]: I0130 14:37:28.525912 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-bm2kn"/"default-dockercfg-cqf5m" Jan 30 14:37:28 crc kubenswrapper[5039]: I0130 14:37:28.545950 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-bm2kn/must-gather-2252c"] Jan 30 14:37:28 crc kubenswrapper[5039]: I0130 14:37:28.579376 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/247caddf-72ba-458a-ad59-05b3ecd3c493-must-gather-output\") pod \"must-gather-2252c\" (UID: \"247caddf-72ba-458a-ad59-05b3ecd3c493\") " pod="openshift-must-gather-bm2kn/must-gather-2252c" Jan 30 14:37:28 crc kubenswrapper[5039]: I0130 14:37:28.579571 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn2nm\" (UniqueName: \"kubernetes.io/projected/247caddf-72ba-458a-ad59-05b3ecd3c493-kube-api-access-mn2nm\") pod \"must-gather-2252c\" (UID: \"247caddf-72ba-458a-ad59-05b3ecd3c493\") " pod="openshift-must-gather-bm2kn/must-gather-2252c" Jan 30 14:37:28 crc kubenswrapper[5039]: I0130 14:37:28.681337 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mn2nm\" (UniqueName: \"kubernetes.io/projected/247caddf-72ba-458a-ad59-05b3ecd3c493-kube-api-access-mn2nm\") pod \"must-gather-2252c\" (UID: \"247caddf-72ba-458a-ad59-05b3ecd3c493\") " pod="openshift-must-gather-bm2kn/must-gather-2252c" Jan 30 14:37:28 crc kubenswrapper[5039]: I0130 14:37:28.681528 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/247caddf-72ba-458a-ad59-05b3ecd3c493-must-gather-output\") pod \"must-gather-2252c\" (UID: \"247caddf-72ba-458a-ad59-05b3ecd3c493\") " pod="openshift-must-gather-bm2kn/must-gather-2252c" Jan 30 14:37:28 crc kubenswrapper[5039]: I0130 14:37:28.682005 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/247caddf-72ba-458a-ad59-05b3ecd3c493-must-gather-output\") pod \"must-gather-2252c\" (UID: \"247caddf-72ba-458a-ad59-05b3ecd3c493\") " pod="openshift-must-gather-bm2kn/must-gather-2252c" Jan 30 14:37:28 crc kubenswrapper[5039]: I0130 14:37:28.705080 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mn2nm\" (UniqueName: \"kubernetes.io/projected/247caddf-72ba-458a-ad59-05b3ecd3c493-kube-api-access-mn2nm\") pod \"must-gather-2252c\" (UID: \"247caddf-72ba-458a-ad59-05b3ecd3c493\") " pod="openshift-must-gather-bm2kn/must-gather-2252c" Jan 30 14:37:28 crc kubenswrapper[5039]: I0130 14:37:28.848767 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bm2kn/must-gather-2252c" Jan 30 14:37:29 crc kubenswrapper[5039]: I0130 14:37:29.363728 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-bm2kn/must-gather-2252c"] Jan 30 14:37:29 crc kubenswrapper[5039]: I0130 14:37:29.368154 5039 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 14:37:30 crc kubenswrapper[5039]: I0130 14:37:30.238627 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bm2kn/must-gather-2252c" event={"ID":"247caddf-72ba-458a-ad59-05b3ecd3c493","Type":"ContainerStarted","Data":"41922182bd0b0479eda4f292c214e5cf614b65589949cfc9e4cce97885916907"} Jan 30 14:37:36 crc kubenswrapper[5039]: I0130 14:37:36.292525 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bm2kn/must-gather-2252c" event={"ID":"247caddf-72ba-458a-ad59-05b3ecd3c493","Type":"ContainerStarted","Data":"5d3062e41a30bf7cb39ba417327ee36dcd6828b297e195b0abca77755b30d88a"} Jan 30 14:37:36 crc kubenswrapper[5039]: I0130 14:37:36.293097 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bm2kn/must-gather-2252c" event={"ID":"247caddf-72ba-458a-ad59-05b3ecd3c493","Type":"ContainerStarted","Data":"787b3b5969b21a01ac8fc638d5bb3721916a1423bc56577ab8da22e3814b0f5b"} Jan 30 14:37:36 crc kubenswrapper[5039]: I0130 14:37:36.314622 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-bm2kn/must-gather-2252c" podStartSLOduration=2.32600973 podStartE2EDuration="8.314600571s" podCreationTimestamp="2026-01-30 14:37:28 +0000 UTC" firstStartedPulling="2026-01-30 14:37:29.367820891 +0000 UTC m=+5614.028502118" lastFinishedPulling="2026-01-30 14:37:35.356411732 +0000 UTC m=+5620.017092959" observedRunningTime="2026-01-30 14:37:36.306850781 +0000 UTC m=+5620.967532028" watchObservedRunningTime="2026-01-30 14:37:36.314600571 +0000 UTC m=+5620.975281798" Jan 30 14:37:38 crc kubenswrapper[5039]: I0130 14:37:38.414555 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-bm2kn/crc-debug-lrbtv"] Jan 30 14:37:38 crc kubenswrapper[5039]: I0130 14:37:38.416372 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bm2kn/crc-debug-lrbtv" Jan 30 14:37:38 crc kubenswrapper[5039]: I0130 14:37:38.459971 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ce7e055f-0c54-49d6-aa3a-1f8a07abfd09-host\") pod \"crc-debug-lrbtv\" (UID: \"ce7e055f-0c54-49d6-aa3a-1f8a07abfd09\") " pod="openshift-must-gather-bm2kn/crc-debug-lrbtv" Jan 30 14:37:38 crc kubenswrapper[5039]: I0130 14:37:38.460281 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7g98\" (UniqueName: \"kubernetes.io/projected/ce7e055f-0c54-49d6-aa3a-1f8a07abfd09-kube-api-access-z7g98\") pod \"crc-debug-lrbtv\" (UID: \"ce7e055f-0c54-49d6-aa3a-1f8a07abfd09\") " pod="openshift-must-gather-bm2kn/crc-debug-lrbtv" Jan 30 14:37:38 crc kubenswrapper[5039]: I0130 14:37:38.562151 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ce7e055f-0c54-49d6-aa3a-1f8a07abfd09-host\") pod \"crc-debug-lrbtv\" (UID: \"ce7e055f-0c54-49d6-aa3a-1f8a07abfd09\") " pod="openshift-must-gather-bm2kn/crc-debug-lrbtv" Jan 30 14:37:38 crc kubenswrapper[5039]: I0130 14:37:38.562224 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7g98\" (UniqueName: \"kubernetes.io/projected/ce7e055f-0c54-49d6-aa3a-1f8a07abfd09-kube-api-access-z7g98\") pod \"crc-debug-lrbtv\" (UID: \"ce7e055f-0c54-49d6-aa3a-1f8a07abfd09\") " pod="openshift-must-gather-bm2kn/crc-debug-lrbtv" Jan 30 14:37:38 crc kubenswrapper[5039]: I0130 14:37:38.562337 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ce7e055f-0c54-49d6-aa3a-1f8a07abfd09-host\") pod \"crc-debug-lrbtv\" (UID: \"ce7e055f-0c54-49d6-aa3a-1f8a07abfd09\") " pod="openshift-must-gather-bm2kn/crc-debug-lrbtv" Jan 30 14:37:38 crc kubenswrapper[5039]: I0130 14:37:38.583282 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7g98\" (UniqueName: \"kubernetes.io/projected/ce7e055f-0c54-49d6-aa3a-1f8a07abfd09-kube-api-access-z7g98\") pod \"crc-debug-lrbtv\" (UID: \"ce7e055f-0c54-49d6-aa3a-1f8a07abfd09\") " pod="openshift-must-gather-bm2kn/crc-debug-lrbtv" Jan 30 14:37:38 crc kubenswrapper[5039]: I0130 14:37:38.737078 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bm2kn/crc-debug-lrbtv" Jan 30 14:37:38 crc kubenswrapper[5039]: W0130 14:37:38.763722 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce7e055f_0c54_49d6_aa3a_1f8a07abfd09.slice/crio-35c1faa3e082a205177363e7cba53af7e65db7892806090309e16df08bf62184 WatchSource:0}: Error finding container 35c1faa3e082a205177363e7cba53af7e65db7892806090309e16df08bf62184: Status 404 returned error can't find the container with id 35c1faa3e082a205177363e7cba53af7e65db7892806090309e16df08bf62184 Jan 30 14:37:39 crc kubenswrapper[5039]: I0130 14:37:39.348450 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bm2kn/crc-debug-lrbtv" event={"ID":"ce7e055f-0c54-49d6-aa3a-1f8a07abfd09","Type":"ContainerStarted","Data":"35c1faa3e082a205177363e7cba53af7e65db7892806090309e16df08bf62184"} Jan 30 14:37:51 crc kubenswrapper[5039]: I0130 14:37:51.480839 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bm2kn/crc-debug-lrbtv" event={"ID":"ce7e055f-0c54-49d6-aa3a-1f8a07abfd09","Type":"ContainerStarted","Data":"ed4cceef0d56527f71c135b165aedaf1b874e0274afaadf8d0ae4cde01c6250f"} Jan 30 14:37:51 crc kubenswrapper[5039]: I0130 14:37:51.504982 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-bm2kn/crc-debug-lrbtv" podStartSLOduration=1.9690816930000001 podStartE2EDuration="13.504957526s" podCreationTimestamp="2026-01-30 14:37:38 +0000 UTC" firstStartedPulling="2026-01-30 14:37:38.76595163 +0000 UTC m=+5623.426632857" lastFinishedPulling="2026-01-30 14:37:50.301827463 +0000 UTC m=+5634.962508690" observedRunningTime="2026-01-30 14:37:51.500646389 +0000 UTC m=+5636.161327626" watchObservedRunningTime="2026-01-30 14:37:51.504957526 +0000 UTC m=+5636.165638773" Jan 30 14:38:12 crc kubenswrapper[5039]: I0130 14:38:12.654919 5039 generic.go:334] "Generic (PLEG): container finished" podID="ce7e055f-0c54-49d6-aa3a-1f8a07abfd09" containerID="ed4cceef0d56527f71c135b165aedaf1b874e0274afaadf8d0ae4cde01c6250f" exitCode=0 Jan 30 14:38:12 crc kubenswrapper[5039]: I0130 14:38:12.655017 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bm2kn/crc-debug-lrbtv" event={"ID":"ce7e055f-0c54-49d6-aa3a-1f8a07abfd09","Type":"ContainerDied","Data":"ed4cceef0d56527f71c135b165aedaf1b874e0274afaadf8d0ae4cde01c6250f"} Jan 30 14:38:13 crc kubenswrapper[5039]: I0130 14:38:13.762272 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bm2kn/crc-debug-lrbtv" Jan 30 14:38:13 crc kubenswrapper[5039]: I0130 14:38:13.798169 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-bm2kn/crc-debug-lrbtv"] Jan 30 14:38:13 crc kubenswrapper[5039]: I0130 14:38:13.807633 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-bm2kn/crc-debug-lrbtv"] Jan 30 14:38:13 crc kubenswrapper[5039]: I0130 14:38:13.918885 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7g98\" (UniqueName: \"kubernetes.io/projected/ce7e055f-0c54-49d6-aa3a-1f8a07abfd09-kube-api-access-z7g98\") pod \"ce7e055f-0c54-49d6-aa3a-1f8a07abfd09\" (UID: \"ce7e055f-0c54-49d6-aa3a-1f8a07abfd09\") " Jan 30 14:38:13 crc kubenswrapper[5039]: I0130 14:38:13.919224 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ce7e055f-0c54-49d6-aa3a-1f8a07abfd09-host\") pod \"ce7e055f-0c54-49d6-aa3a-1f8a07abfd09\" (UID: \"ce7e055f-0c54-49d6-aa3a-1f8a07abfd09\") " Jan 30 14:38:13 crc kubenswrapper[5039]: I0130 14:38:13.919340 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce7e055f-0c54-49d6-aa3a-1f8a07abfd09-host" (OuterVolumeSpecName: "host") pod "ce7e055f-0c54-49d6-aa3a-1f8a07abfd09" (UID: "ce7e055f-0c54-49d6-aa3a-1f8a07abfd09"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:38:13 crc kubenswrapper[5039]: I0130 14:38:13.919670 5039 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ce7e055f-0c54-49d6-aa3a-1f8a07abfd09-host\") on node \"crc\" DevicePath \"\"" Jan 30 14:38:13 crc kubenswrapper[5039]: I0130 14:38:13.936293 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce7e055f-0c54-49d6-aa3a-1f8a07abfd09-kube-api-access-z7g98" (OuterVolumeSpecName: "kube-api-access-z7g98") pod "ce7e055f-0c54-49d6-aa3a-1f8a07abfd09" (UID: "ce7e055f-0c54-49d6-aa3a-1f8a07abfd09"). InnerVolumeSpecName "kube-api-access-z7g98". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:38:14 crc kubenswrapper[5039]: I0130 14:38:14.021589 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7g98\" (UniqueName: \"kubernetes.io/projected/ce7e055f-0c54-49d6-aa3a-1f8a07abfd09-kube-api-access-z7g98\") on node \"crc\" DevicePath \"\"" Jan 30 14:38:14 crc kubenswrapper[5039]: I0130 14:38:14.134924 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce7e055f-0c54-49d6-aa3a-1f8a07abfd09" path="/var/lib/kubelet/pods/ce7e055f-0c54-49d6-aa3a-1f8a07abfd09/volumes" Jan 30 14:38:14 crc kubenswrapper[5039]: I0130 14:38:14.674277 5039 scope.go:117] "RemoveContainer" containerID="ed4cceef0d56527f71c135b165aedaf1b874e0274afaadf8d0ae4cde01c6250f" Jan 30 14:38:14 crc kubenswrapper[5039]: I0130 14:38:14.674347 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bm2kn/crc-debug-lrbtv" Jan 30 14:38:15 crc kubenswrapper[5039]: I0130 14:38:15.135709 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-bm2kn/crc-debug-cf47b"] Jan 30 14:38:15 crc kubenswrapper[5039]: E0130 14:38:15.136506 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce7e055f-0c54-49d6-aa3a-1f8a07abfd09" containerName="container-00" Jan 30 14:38:15 crc kubenswrapper[5039]: I0130 14:38:15.136523 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce7e055f-0c54-49d6-aa3a-1f8a07abfd09" containerName="container-00" Jan 30 14:38:15 crc kubenswrapper[5039]: I0130 14:38:15.136741 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce7e055f-0c54-49d6-aa3a-1f8a07abfd09" containerName="container-00" Jan 30 14:38:15 crc kubenswrapper[5039]: I0130 14:38:15.137449 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bm2kn/crc-debug-cf47b" Jan 30 14:38:15 crc kubenswrapper[5039]: I0130 14:38:15.245935 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dpzj\" (UniqueName: \"kubernetes.io/projected/30be3428-492e-4dda-a45f-76ed707ea4c2-kube-api-access-4dpzj\") pod \"crc-debug-cf47b\" (UID: \"30be3428-492e-4dda-a45f-76ed707ea4c2\") " pod="openshift-must-gather-bm2kn/crc-debug-cf47b" Jan 30 14:38:15 crc kubenswrapper[5039]: I0130 14:38:15.247767 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/30be3428-492e-4dda-a45f-76ed707ea4c2-host\") pod \"crc-debug-cf47b\" (UID: \"30be3428-492e-4dda-a45f-76ed707ea4c2\") " pod="openshift-must-gather-bm2kn/crc-debug-cf47b" Jan 30 14:38:15 crc kubenswrapper[5039]: I0130 14:38:15.350970 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/30be3428-492e-4dda-a45f-76ed707ea4c2-host\") pod \"crc-debug-cf47b\" (UID: \"30be3428-492e-4dda-a45f-76ed707ea4c2\") " pod="openshift-must-gather-bm2kn/crc-debug-cf47b" Jan 30 14:38:15 crc kubenswrapper[5039]: I0130 14:38:15.351382 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dpzj\" (UniqueName: \"kubernetes.io/projected/30be3428-492e-4dda-a45f-76ed707ea4c2-kube-api-access-4dpzj\") pod \"crc-debug-cf47b\" (UID: \"30be3428-492e-4dda-a45f-76ed707ea4c2\") " pod="openshift-must-gather-bm2kn/crc-debug-cf47b" Jan 30 14:38:15 crc kubenswrapper[5039]: I0130 14:38:15.351216 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/30be3428-492e-4dda-a45f-76ed707ea4c2-host\") pod \"crc-debug-cf47b\" (UID: \"30be3428-492e-4dda-a45f-76ed707ea4c2\") " pod="openshift-must-gather-bm2kn/crc-debug-cf47b" Jan 30 14:38:15 crc kubenswrapper[5039]: I0130 14:38:15.369326 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dpzj\" (UniqueName: \"kubernetes.io/projected/30be3428-492e-4dda-a45f-76ed707ea4c2-kube-api-access-4dpzj\") pod \"crc-debug-cf47b\" (UID: \"30be3428-492e-4dda-a45f-76ed707ea4c2\") " pod="openshift-must-gather-bm2kn/crc-debug-cf47b" Jan 30 14:38:15 crc kubenswrapper[5039]: I0130 14:38:15.457478 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bm2kn/crc-debug-cf47b" Jan 30 14:38:15 crc kubenswrapper[5039]: I0130 14:38:15.686203 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bm2kn/crc-debug-cf47b" event={"ID":"30be3428-492e-4dda-a45f-76ed707ea4c2","Type":"ContainerStarted","Data":"e47695cd5bdffeefcb4bc43753068deef020a9c6edb13284b4afc890817d94a9"} Jan 30 14:38:16 crc kubenswrapper[5039]: I0130 14:38:16.696513 5039 generic.go:334] "Generic (PLEG): container finished" podID="30be3428-492e-4dda-a45f-76ed707ea4c2" containerID="98136f7b57b38bb05c135cd061a9f9df2bb22b049db749ac116b139c2dc2e5e5" exitCode=1 Jan 30 14:38:16 crc kubenswrapper[5039]: I0130 14:38:16.696562 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bm2kn/crc-debug-cf47b" event={"ID":"30be3428-492e-4dda-a45f-76ed707ea4c2","Type":"ContainerDied","Data":"98136f7b57b38bb05c135cd061a9f9df2bb22b049db749ac116b139c2dc2e5e5"} Jan 30 14:38:16 crc kubenswrapper[5039]: I0130 14:38:16.740743 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-bm2kn/crc-debug-cf47b"] Jan 30 14:38:16 crc kubenswrapper[5039]: I0130 14:38:16.751353 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-bm2kn/crc-debug-cf47b"] Jan 30 14:38:17 crc kubenswrapper[5039]: I0130 14:38:17.783370 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bm2kn/crc-debug-cf47b" Jan 30 14:38:17 crc kubenswrapper[5039]: I0130 14:38:17.893548 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/30be3428-492e-4dda-a45f-76ed707ea4c2-host\") pod \"30be3428-492e-4dda-a45f-76ed707ea4c2\" (UID: \"30be3428-492e-4dda-a45f-76ed707ea4c2\") " Jan 30 14:38:17 crc kubenswrapper[5039]: I0130 14:38:17.893662 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30be3428-492e-4dda-a45f-76ed707ea4c2-host" (OuterVolumeSpecName: "host") pod "30be3428-492e-4dda-a45f-76ed707ea4c2" (UID: "30be3428-492e-4dda-a45f-76ed707ea4c2"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:38:17 crc kubenswrapper[5039]: I0130 14:38:17.894074 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dpzj\" (UniqueName: \"kubernetes.io/projected/30be3428-492e-4dda-a45f-76ed707ea4c2-kube-api-access-4dpzj\") pod \"30be3428-492e-4dda-a45f-76ed707ea4c2\" (UID: \"30be3428-492e-4dda-a45f-76ed707ea4c2\") " Jan 30 14:38:17 crc kubenswrapper[5039]: I0130 14:38:17.894550 5039 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/30be3428-492e-4dda-a45f-76ed707ea4c2-host\") on node \"crc\" DevicePath \"\"" Jan 30 14:38:17 crc kubenswrapper[5039]: I0130 14:38:17.905205 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30be3428-492e-4dda-a45f-76ed707ea4c2-kube-api-access-4dpzj" (OuterVolumeSpecName: "kube-api-access-4dpzj") pod "30be3428-492e-4dda-a45f-76ed707ea4c2" (UID: "30be3428-492e-4dda-a45f-76ed707ea4c2"). InnerVolumeSpecName "kube-api-access-4dpzj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:38:17 crc kubenswrapper[5039]: I0130 14:38:17.997222 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dpzj\" (UniqueName: \"kubernetes.io/projected/30be3428-492e-4dda-a45f-76ed707ea4c2-kube-api-access-4dpzj\") on node \"crc\" DevicePath \"\"" Jan 30 14:38:18 crc kubenswrapper[5039]: I0130 14:38:18.107712 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30be3428-492e-4dda-a45f-76ed707ea4c2" path="/var/lib/kubelet/pods/30be3428-492e-4dda-a45f-76ed707ea4c2/volumes" Jan 30 14:38:18 crc kubenswrapper[5039]: I0130 14:38:18.711986 5039 scope.go:117] "RemoveContainer" containerID="98136f7b57b38bb05c135cd061a9f9df2bb22b049db749ac116b139c2dc2e5e5" Jan 30 14:38:18 crc kubenswrapper[5039]: I0130 14:38:18.712233 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bm2kn/crc-debug-cf47b" Jan 30 14:38:33 crc kubenswrapper[5039]: I0130 14:38:33.753487 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-bf9dd66-4rnjv_a6116ea0-1d69-4c2c-b3d1-20480d785187/barbican-api/0.log" Jan 30 14:38:33 crc kubenswrapper[5039]: I0130 14:38:33.889073 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-bf9dd66-4rnjv_a6116ea0-1d69-4c2c-b3d1-20480d785187/barbican-api-log/0.log" Jan 30 14:38:33 crc kubenswrapper[5039]: I0130 14:38:33.949870 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-c014-account-create-update-px7xb_f140476b-d9d4-4ca6-bac1-d4f91a64c18b/mariadb-account-create-update/0.log" Jan 30 14:38:34 crc kubenswrapper[5039]: I0130 14:38:34.082189 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-db-create-75gqg_c11ff9c9-2927-49d7-a52b-995f63c75e72/mariadb-database-create/0.log" Jan 30 14:38:34 crc kubenswrapper[5039]: I0130 14:38:34.143195 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-db-sync-ttzhq_5c1e26bd-8401-41c3-b195-93755cd10148/barbican-db-sync/0.log" Jan 30 14:38:34 crc kubenswrapper[5039]: I0130 14:38:34.306509 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-54c6556cc4-gwjwr_94903821-743c-4c2b-913c-27ef1467fe0a/barbican-keystone-listener/0.log" Jan 30 14:38:34 crc kubenswrapper[5039]: I0130 14:38:34.332369 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-54c6556cc4-gwjwr_94903821-743c-4c2b-913c-27ef1467fe0a/barbican-keystone-listener-log/0.log" Jan 30 14:38:34 crc kubenswrapper[5039]: I0130 14:38:34.487671 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5c47676b89-c2bdw_a2dedf26-e8a7-43d7-9113-844ed4ace24f/barbican-worker-log/0.log" Jan 30 14:38:34 crc kubenswrapper[5039]: I0130 14:38:34.520329 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5c47676b89-c2bdw_a2dedf26-e8a7-43d7-9113-844ed4ace24f/barbican-worker/0.log" Jan 30 14:38:34 crc kubenswrapper[5039]: I0130 14:38:34.679158 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-df5c4d669-gcsl9_fac94945-eac9-4837-ad5a-71d9931c547d/init/0.log" Jan 30 14:38:34 crc kubenswrapper[5039]: I0130 14:38:34.829645 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-df5c4d669-gcsl9_fac94945-eac9-4837-ad5a-71d9931c547d/dnsmasq-dns/0.log" Jan 30 14:38:34 crc 
kubenswrapper[5039]: I0130 14:38:34.857122 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-df5c4d669-gcsl9_fac94945-eac9-4837-ad5a-71d9931c547d/init/0.log" Jan 30 14:38:34 crc kubenswrapper[5039]: I0130 14:38:34.906184 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-200a-account-create-update-8xkrb_f58690d3-b736-4e20-973e-dc1a555592a1/mariadb-account-create-update/0.log" Jan 30 14:38:35 crc kubenswrapper[5039]: I0130 14:38:35.055471 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-db-create-5d2vz_de9c141b-39af-4717-91c7-32de6df6ca1d/mariadb-database-create/0.log" Jan 30 14:38:35 crc kubenswrapper[5039]: I0130 14:38:35.114223 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-db-sync-cl4vn_00da7584-6573-4dac-bfd1-ea7c53ad5b93/glance-db-sync/0.log" Jan 30 14:38:35 crc kubenswrapper[5039]: I0130 14:38:35.256599 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_0e03c189-6d6b-4b11-8de3-0802c037a207/glance-httpd/0.log" Jan 30 14:38:35 crc kubenswrapper[5039]: I0130 14:38:35.297961 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_0e03c189-6d6b-4b11-8de3-0802c037a207/glance-log/0.log" Jan 30 14:38:35 crc kubenswrapper[5039]: I0130 14:38:35.471788 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f/glance-log/0.log" Jan 30 14:38:35 crc kubenswrapper[5039]: I0130 14:38:35.517237 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f/glance-httpd/0.log" Jan 30 14:38:35 crc kubenswrapper[5039]: I0130 14:38:35.611178 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-5f95777885-dfppg_cf6c7271-2040-4fdf-9920-6842976f8ebc/keystone-api/0.log" Jan 30 14:38:35 crc kubenswrapper[5039]: I0130 14:38:35.719129 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-6c90-account-create-update-rcrpm_186c0ea5-7e75-40a9-8304-487243cd940f/mariadb-account-create-update/0.log" Jan 30 14:38:35 crc kubenswrapper[5039]: I0130 14:38:35.797724 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-bootstrap-rbkmw_7902ea8d-9313-4ce7-8813-9b758308b6e5/keystone-bootstrap/0.log" Jan 30 14:38:35 crc kubenswrapper[5039]: I0130 14:38:35.888242 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-db-create-lmw95_b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6/mariadb-database-create/0.log" Jan 30 14:38:36 crc kubenswrapper[5039]: I0130 14:38:36.003144 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-db-sync-qshch_dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec/keystone-db-sync/0.log" Jan 30 14:38:36 crc kubenswrapper[5039]: I0130 14:38:36.074958 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-copy-data_d0ef5c71-7162-4911-a514-7be99e7a5cc0/adoption/0.log" Jan 30 14:38:36 crc kubenswrapper[5039]: I0130 14:38:36.293611 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-55d685cc65-wskfp_03bff807-c195-4e08-8858-545f15d0b179/neutron-api/0.log" Jan 30 14:38:36 crc kubenswrapper[5039]: I0130 14:38:36.471761 5039 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-55d685cc65-wskfp_03bff807-c195-4e08-8858-545f15d0b179/neutron-httpd/0.log" Jan 30 14:38:36 crc kubenswrapper[5039]: I0130 14:38:36.566842 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-bb18-account-create-update-kkffq_9c46ecdf-d569-4ebc-8963-909b6e460e18/mariadb-account-create-update/0.log" Jan 30 14:38:36 crc kubenswrapper[5039]: I0130 14:38:36.748204 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-db-create-f8pgs_babc668e-cf9b-4d6c-8a45-f79e141cfc0e/mariadb-database-create/0.log" Jan 30 14:38:36 crc kubenswrapper[5039]: I0130 14:38:36.947801 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-db-sync-8bsx9_ca210a91-180c-4a6a-8334-1d294092b8a3/neutron-db-sync/0.log" Jan 30 14:38:37 crc kubenswrapper[5039]: I0130 14:38:37.117117 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_69580ad6-7c20-414c-8d6e-0aef5786bc7e/mysql-bootstrap/0.log" Jan 30 14:38:37 crc kubenswrapper[5039]: I0130 14:38:37.212738 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_54eb6d65-3d1f-4965-9438-a1c1c386747f/memcached/0.log" Jan 30 14:38:37 crc kubenswrapper[5039]: I0130 14:38:37.429629 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_69580ad6-7c20-414c-8d6e-0aef5786bc7e/mysql-bootstrap/0.log" Jan 30 14:38:37 crc kubenswrapper[5039]: I0130 14:38:37.448686 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_69580ad6-7c20-414c-8d6e-0aef5786bc7e/galera/0.log" Jan 30 14:38:37 crc kubenswrapper[5039]: I0130 14:38:37.575703 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_bf30efc1-9347-4142-91ce-e1d5cfdd6d4b/mysql-bootstrap/0.log" Jan 30 14:38:37 crc kubenswrapper[5039]: I0130 14:38:37.867914 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_bf30efc1-9347-4142-91ce-e1d5cfdd6d4b/galera/0.log" Jan 30 14:38:37 crc kubenswrapper[5039]: I0130 14:38:37.879340 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_5f9710bf-722a-4504-b0c6-3ea395807a75/openstackclient/0.log" Jan 30 14:38:37 crc kubenswrapper[5039]: I0130 14:38:37.882824 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_bf30efc1-9347-4142-91ce-e1d5cfdd6d4b/mysql-bootstrap/0.log" Jan 30 14:38:38 crc kubenswrapper[5039]: I0130 14:38:38.073885 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-copy-data_2fa144db-c324-4fc0-9076-a6704fc1b00b/adoption/0.log" Jan 30 14:38:38 crc kubenswrapper[5039]: I0130 14:38:38.132831 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_3b2601f1-8fcd-4cf8-8e60-9c95785f395b/openstack-network-exporter/0.log" Jan 30 14:38:38 crc kubenswrapper[5039]: I0130 14:38:38.166923 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_3b2601f1-8fcd-4cf8-8e60-9c95785f395b/ovn-northd/0.log" Jan 30 14:38:38 crc kubenswrapper[5039]: I0130 14:38:38.310854 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_8b5493e8-291c-4677-902a-89649a59dc48/openstack-network-exporter/0.log" Jan 30 14:38:38 crc kubenswrapper[5039]: I0130 14:38:38.328888 5039 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-nb-0_8b5493e8-291c-4677-902a-89649a59dc48/ovsdbserver-nb/0.log" Jan 30 14:38:38 crc kubenswrapper[5039]: I0130 14:38:38.370025 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_5db342ca-88a0-41e4-9cb8-407be8357dd0/openstack-network-exporter/0.log" Jan 30 14:38:38 crc kubenswrapper[5039]: I0130 14:38:38.475832 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_5db342ca-88a0-41e4-9cb8-407be8357dd0/ovsdbserver-nb/0.log" Jan 30 14:38:38 crc kubenswrapper[5039]: I0130 14:38:38.542394 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_1fc46623-afd6-4b9d-bf3d-79700d1ee972/ovsdbserver-nb/0.log" Jan 30 14:38:38 crc kubenswrapper[5039]: I0130 14:38:38.548000 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_1fc46623-afd6-4b9d-bf3d-79700d1ee972/openstack-network-exporter/0.log" Jan 30 14:38:38 crc kubenswrapper[5039]: I0130 14:38:38.653245 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_e7065704-60d1-44b1-a6a6-f23a25d20a3f/openstack-network-exporter/0.log" Jan 30 14:38:38 crc kubenswrapper[5039]: I0130 14:38:38.753841 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_e7065704-60d1-44b1-a6a6-f23a25d20a3f/ovsdbserver-sb/0.log" Jan 30 14:38:38 crc kubenswrapper[5039]: I0130 14:38:38.796140 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_286a05d9-3f8e-4942-ad66-0a674aa88114/ovsdbserver-sb/0.log" Jan 30 14:38:38 crc kubenswrapper[5039]: I0130 14:38:38.826508 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_286a05d9-3f8e-4942-ad66-0a674aa88114/openstack-network-exporter/0.log" Jan 30 14:38:38 crc kubenswrapper[5039]: I0130 14:38:38.964411 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_d163aa91-5efd-4b7a-94eb-c9b4f26fba7b/openstack-network-exporter/0.log" Jan 30 14:38:39 crc kubenswrapper[5039]: I0130 14:38:39.011863 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_d163aa91-5efd-4b7a-94eb-c9b4f26fba7b/ovsdbserver-sb/0.log" Jan 30 14:38:39 crc kubenswrapper[5039]: I0130 14:38:39.049332 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5d5974d948-2v2hn_b4fa210a-8256-4fb5-9985-3d09a3495072/placement-api/0.log" Jan 30 14:38:39 crc kubenswrapper[5039]: I0130 14:38:39.128412 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5d5974d948-2v2hn_b4fa210a-8256-4fb5-9985-3d09a3495072/placement-log/0.log" Jan 30 14:38:39 crc kubenswrapper[5039]: I0130 14:38:39.188846 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-db-create-665mk_37b01eba-76d8-483f-a005-d64c7ba4fdbf/mariadb-database-create/0.log" Jan 30 14:38:39 crc kubenswrapper[5039]: I0130 14:38:39.449976 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-db-sync-8zmlz_5bc0ac40-f14d-45cb-b7de-87599e7cce2c/placement-db-sync/0.log" Jan 30 14:38:39 crc kubenswrapper[5039]: I0130 14:38:39.570116 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-deef-account-create-update-pgfj6_ed011ca6-eae3-4be5-8f3c-49996a5c6d68/mariadb-account-create-update/0.log" Jan 30 14:38:39 crc kubenswrapper[5039]: I0130 14:38:39.586335 5039 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-cell1-server-0_6342982f-d092-4d6d-bb77-1ce4083bec47/setup-container/0.log" Jan 30 14:38:39 crc kubenswrapper[5039]: I0130 14:38:39.718076 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_6342982f-d092-4d6d-bb77-1ce4083bec47/setup-container/0.log" Jan 30 14:38:39 crc kubenswrapper[5039]: I0130 14:38:39.760484 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_6342982f-d092-4d6d-bb77-1ce4083bec47/rabbitmq/0.log" Jan 30 14:38:39 crc kubenswrapper[5039]: I0130 14:38:39.807002 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_d529e342-1b61-41e6-a1f7-a08a43d53dab/setup-container/0.log" Jan 30 14:38:39 crc kubenswrapper[5039]: I0130 14:38:39.960968 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_d529e342-1b61-41e6-a1f7-a08a43d53dab/setup-container/0.log" Jan 30 14:38:39 crc kubenswrapper[5039]: I0130 14:38:39.969673 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_d529e342-1b61-41e6-a1f7-a08a43d53dab/rabbitmq/0.log" Jan 30 14:38:55 crc kubenswrapper[5039]: I0130 14:38:55.387783 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-566c8844c5-7b7vn_e0e4cf6d-c270-4781-b68c-be66be87eda0/manager/0.log" Jan 30 14:38:55 crc kubenswrapper[5039]: I0130 14:38:55.472201 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c_bb4062e1-3451-42b4-aaed-3dee60006639/util/0.log" Jan 30 14:38:55 crc kubenswrapper[5039]: I0130 14:38:55.661573 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c_bb4062e1-3451-42b4-aaed-3dee60006639/util/0.log" Jan 30 14:38:55 crc kubenswrapper[5039]: I0130 14:38:55.670160 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c_bb4062e1-3451-42b4-aaed-3dee60006639/pull/0.log" Jan 30 14:38:55 crc kubenswrapper[5039]: I0130 14:38:55.683963 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c_bb4062e1-3451-42b4-aaed-3dee60006639/pull/0.log" Jan 30 14:38:55 crc kubenswrapper[5039]: I0130 14:38:55.847577 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c_bb4062e1-3451-42b4-aaed-3dee60006639/pull/0.log" Jan 30 14:38:55 crc kubenswrapper[5039]: I0130 14:38:55.869712 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c_bb4062e1-3451-42b4-aaed-3dee60006639/extract/0.log" Jan 30 14:38:55 crc kubenswrapper[5039]: I0130 14:38:55.897787 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c_bb4062e1-3451-42b4-aaed-3dee60006639/util/0.log" Jan 30 14:38:56 crc kubenswrapper[5039]: I0130 14:38:56.065161 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5f9bbdc844-hfv9l_46f5b983-ce89-42e5-8fc0-7145badf07df/manager/0.log" Jan 30 14:38:56 crc kubenswrapper[5039]: 
I0130 14:38:56.111622 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-8f4c5cb64-zc7fk_dfdf7ab1-0b00-4ec6-96e3-e0e0b7abfee5/manager/0.log" Jan 30 14:38:56 crc kubenswrapper[5039]: I0130 14:38:56.292314 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-54985f5875-tn8jh_8ad0072a-71a8-4fd8-9f4d-39ffd8a63530/manager/0.log" Jan 30 14:38:56 crc kubenswrapper[5039]: I0130 14:38:56.332496 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-784f59d4f4-mgfpl_119bb853-2462-447e-bedc-54a2d5e2ba7f/manager/0.log" Jan 30 14:38:56 crc kubenswrapper[5039]: I0130 14:38:56.441967 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-gb8b7_a7002b43-9266-4930-8baa-d60085738bbf/manager/0.log" Jan 30 14:38:56 crc kubenswrapper[5039]: I0130 14:38:56.638794 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6fd9bbb6f6-8vmk2_f88d8b4c-e64a-46de-8566-c17112f9379d/manager/0.log" Jan 30 14:38:56 crc kubenswrapper[5039]: I0130 14:38:56.871289 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-6c9d56f9bd-l7jpj_393972fe-41f4-41b3-b5e9-c2183a2a506c/manager/0.log" Jan 30 14:38:56 crc kubenswrapper[5039]: I0130 14:38:56.903030 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-xg48r_a0e32430-f729-40dc-a6a9-307f01744381/manager/0.log" Jan 30 14:38:56 crc kubenswrapper[5039]: I0130 14:38:56.955148 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-74954f9f78-2rz8j_be0f8b45-595e-434a-afd7-bc054252c589/manager/0.log" Jan 30 14:38:57 crc kubenswrapper[5039]: I0130 14:38:57.123908 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-ncf2p_a84f3cb3-ab4e-4780-bfac-295411bfca5f/manager/0.log" Jan 30 14:38:57 crc kubenswrapper[5039]: I0130 14:38:57.198620 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-6cfc4f6754-b4d54_5b341b5c-d0a9-4e32-bc5a-7e669840a358/manager/0.log" Jan 30 14:38:57 crc kubenswrapper[5039]: I0130 14:38:57.373554 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-67f5956bc9-k6k9g_d2b8a86d-d798-4591-8f13-70f20fbe944d/manager/0.log" Jan 30 14:38:57 crc kubenswrapper[5039]: I0130 14:38:57.414593 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-694c6dcf95-n5fbd_aea15f55-ce7e-4253-9a45-a6a9657ebf04/manager/0.log" Jan 30 14:38:57 crc kubenswrapper[5039]: I0130 14:38:57.560261 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57_bb900788-5fb4-4e83-8eec-f99dba093c60/manager/0.log" Jan 30 14:38:57 crc kubenswrapper[5039]: I0130 14:38:57.703977 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5bb4fb98bb-fglw8_da15d311-1be3-49c8-9283-5f4815b0a42d/operator/0.log" Jan 30 14:38:57 crc kubenswrapper[5039]: I0130 14:38:57.894101 
5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-np244_9fc67884-3169-4fc2-98e9-1a3a274f9f02/registry-server/0.log" Jan 30 14:38:58 crc kubenswrapper[5039]: I0130 14:38:58.089113 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-qf8zq_4240d443-bebd-4831-aaf2-0548c4d30a60/manager/0.log" Jan 30 14:38:58 crc kubenswrapper[5039]: I0130 14:38:58.259380 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-sg45v_7792d72c-9fec-4de1-aaff-90764148b8d1/manager/0.log" Jan 30 14:38:58 crc kubenswrapper[5039]: I0130 14:38:58.417968 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-78q8w_d523ce30-8e42-407b-bb30-2e8aedb76c0c/operator/0.log" Jan 30 14:38:58 crc kubenswrapper[5039]: I0130 14:38:58.545660 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-7d4f9d9c9b-j5l2r_4af84b30-6340-4e2a-b4fc-79268b9cb491/manager/0.log" Jan 30 14:38:58 crc kubenswrapper[5039]: I0130 14:38:58.762273 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cd99594-2gs8r_030095cc-213a-4228-a2d5-62e91816f44e/manager/0.log" Jan 30 14:38:58 crc kubenswrapper[5039]: I0130 14:38:58.875426 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-zxtd4_35170745-facc-414b-9c48-649af86aeeb6/manager/0.log" Jan 30 14:38:58 crc kubenswrapper[5039]: I0130 14:38:58.988462 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-557bcbc6d9-5qlfl_cc0a21f9-046e-450a-bed9-4de7483415f3/manager/0.log" Jan 30 14:38:59 crc kubenswrapper[5039]: I0130 14:38:59.006928 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5bf648c946-vwwqt_b74de1a1-6d53-416d-a626-3307e43fb1a9/manager/0.log" Jan 30 14:39:07 crc kubenswrapper[5039]: I0130 14:39:07.743315 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:39:07 crc kubenswrapper[5039]: I0130 14:39:07.743951 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:39:16 crc kubenswrapper[5039]: I0130 14:39:16.246770 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-gxpwf_a391a542-f6cf-4b97-b69b-aa27a4942896/control-plane-machine-set-operator/0.log" Jan 30 14:39:16 crc kubenswrapper[5039]: I0130 14:39:16.409938 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-sdf86_42cf1d0f-3c54-41ad-a9a7-1b9bc1829c21/kube-rbac-proxy/0.log" Jan 30 14:39:16 crc kubenswrapper[5039]: I0130 14:39:16.442967 5039 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-sdf86_42cf1d0f-3c54-41ad-a9a7-1b9bc1829c21/machine-api-operator/0.log" Jan 30 14:39:27 crc kubenswrapper[5039]: I0130 14:39:27.921639 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-545d4d4674-r4tn9_2ec608ca-f1e5-4db3-9c30-c4eda5016097/cert-manager-controller/0.log" Jan 30 14:39:28 crc kubenswrapper[5039]: I0130 14:39:28.116508 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-5545bd876-sthhd_99b483cf-ff93-4073-a80d-b5da5ebfd409/cert-manager-cainjector/0.log" Jan 30 14:39:28 crc kubenswrapper[5039]: I0130 14:39:28.219801 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-6888856db4-hcjvz_faf4f279-399b-4958-9a67-3a94b650bd98/cert-manager-webhook/0.log" Jan 30 14:39:37 crc kubenswrapper[5039]: I0130 14:39:37.742787 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:39:37 crc kubenswrapper[5039]: I0130 14:39:37.743367 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:39:40 crc kubenswrapper[5039]: I0130 14:39:40.580373 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-nb88j_5306d4b9-35eb-45b6-b2d5-3ab361b8bcb9/nmstate-console-plugin/0.log" Jan 30 14:39:40 crc kubenswrapper[5039]: I0130 14:39:40.746821 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-5ccgw_98342032-bce0-478a-b809-b9af50125cbf/nmstate-handler/0.log" Jan 30 14:39:40 crc kubenswrapper[5039]: I0130 14:39:40.802679 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-mj7zw_05349ae8-13b7-45d0-beb2-5a14eeae995f/kube-rbac-proxy/0.log" Jan 30 14:39:40 crc kubenswrapper[5039]: I0130 14:39:40.917540 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-mj7zw_05349ae8-13b7-45d0-beb2-5a14eeae995f/nmstate-metrics/0.log" Jan 30 14:39:41 crc kubenswrapper[5039]: I0130 14:39:41.005653 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-b8fk6_c4341387-fba2-41e9-a279-5c1071b11a2d/nmstate-operator/0.log" Jan 30 14:39:41 crc kubenswrapper[5039]: I0130 14:39:41.130702 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-8jq59_b8b725bf-ea88-45d2-a03b-94c281cc3842/nmstate-webhook/0.log" Jan 30 14:40:07 crc kubenswrapper[5039]: I0130 14:40:07.616818 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-msg56_18c97a9f-5ac7-4319-8909-600474d0aabc/kube-rbac-proxy/0.log" Jan 30 14:40:07 crc kubenswrapper[5039]: I0130 14:40:07.742091 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:40:07 crc kubenswrapper[5039]: I0130 14:40:07.742150 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:40:07 crc kubenswrapper[5039]: I0130 14:40:07.742192 5039 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" Jan 30 14:40:07 crc kubenswrapper[5039]: I0130 14:40:07.742833 5039 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0d114dadbe14f3b8f66cb4c1a192ea2be2c5b28f729a330aa23afe91758bdd3f"} pod="openshift-machine-config-operator/machine-config-daemon-t2btn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 14:40:07 crc kubenswrapper[5039]: I0130 14:40:07.742883 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" containerID="cri-o://0d114dadbe14f3b8f66cb4c1a192ea2be2c5b28f729a330aa23afe91758bdd3f" gracePeriod=600 Jan 30 14:40:07 crc kubenswrapper[5039]: I0130 14:40:07.972443 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sgnsl_efd80df6-f7ef-4379-b160-9a38ca228667/cp-frr-files/0.log" Jan 30 14:40:08 crc kubenswrapper[5039]: I0130 14:40:08.080798 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-msg56_18c97a9f-5ac7-4319-8909-600474d0aabc/controller/0.log" Jan 30 14:40:08 crc kubenswrapper[5039]: I0130 14:40:08.205479 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sgnsl_efd80df6-f7ef-4379-b160-9a38ca228667/cp-reloader/0.log" Jan 30 14:40:08 crc kubenswrapper[5039]: I0130 14:40:08.208534 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sgnsl_efd80df6-f7ef-4379-b160-9a38ca228667/cp-frr-files/0.log" Jan 30 14:40:08 crc kubenswrapper[5039]: I0130 14:40:08.214788 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sgnsl_efd80df6-f7ef-4379-b160-9a38ca228667/cp-metrics/0.log" Jan 30 14:40:08 crc kubenswrapper[5039]: I0130 14:40:08.330183 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sgnsl_efd80df6-f7ef-4379-b160-9a38ca228667/cp-reloader/0.log" Jan 30 14:40:08 crc kubenswrapper[5039]: I0130 14:40:08.568899 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sgnsl_efd80df6-f7ef-4379-b160-9a38ca228667/cp-frr-files/0.log" Jan 30 14:40:08 crc kubenswrapper[5039]: I0130 14:40:08.579060 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sgnsl_efd80df6-f7ef-4379-b160-9a38ca228667/cp-metrics/0.log" Jan 30 14:40:08 crc kubenswrapper[5039]: I0130 14:40:08.585686 5039 generic.go:334] "Generic (PLEG): container finished" podID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerID="0d114dadbe14f3b8f66cb4c1a192ea2be2c5b28f729a330aa23afe91758bdd3f" exitCode=0 Jan 30 14:40:08 crc kubenswrapper[5039]: I0130 
14:40:08.585726 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerDied","Data":"0d114dadbe14f3b8f66cb4c1a192ea2be2c5b28f729a330aa23afe91758bdd3f"} Jan 30 14:40:08 crc kubenswrapper[5039]: I0130 14:40:08.585791 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerStarted","Data":"9c892743700c544a60b6942fe1ed883d6034adbcc2dc0f323aa256572d1f1d19"} Jan 30 14:40:08 crc kubenswrapper[5039]: I0130 14:40:08.585814 5039 scope.go:117] "RemoveContainer" containerID="33707bf9f6c082f37a2c677d559a1772be55398c970c4d16a90343a477a0fad4" Jan 30 14:40:08 crc kubenswrapper[5039]: I0130 14:40:08.591088 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sgnsl_efd80df6-f7ef-4379-b160-9a38ca228667/cp-metrics/0.log" Jan 30 14:40:08 crc kubenswrapper[5039]: I0130 14:40:08.603977 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sgnsl_efd80df6-f7ef-4379-b160-9a38ca228667/cp-reloader/0.log" Jan 30 14:40:08 crc kubenswrapper[5039]: I0130 14:40:08.992539 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sgnsl_efd80df6-f7ef-4379-b160-9a38ca228667/cp-reloader/0.log" Jan 30 14:40:08 crc kubenswrapper[5039]: I0130 14:40:08.997635 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sgnsl_efd80df6-f7ef-4379-b160-9a38ca228667/controller/0.log" Jan 30 14:40:09 crc kubenswrapper[5039]: I0130 14:40:09.021977 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sgnsl_efd80df6-f7ef-4379-b160-9a38ca228667/cp-metrics/0.log" Jan 30 14:40:09 crc kubenswrapper[5039]: I0130 14:40:09.036164 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sgnsl_efd80df6-f7ef-4379-b160-9a38ca228667/cp-frr-files/0.log" Jan 30 14:40:09 crc kubenswrapper[5039]: I0130 14:40:09.236942 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sgnsl_efd80df6-f7ef-4379-b160-9a38ca228667/frr-metrics/0.log" Jan 30 14:40:09 crc kubenswrapper[5039]: I0130 14:40:09.247277 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sgnsl_efd80df6-f7ef-4379-b160-9a38ca228667/kube-rbac-proxy/0.log" Jan 30 14:40:09 crc kubenswrapper[5039]: I0130 14:40:09.277452 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sgnsl_efd80df6-f7ef-4379-b160-9a38ca228667/kube-rbac-proxy-frr/0.log" Jan 30 14:40:09 crc kubenswrapper[5039]: I0130 14:40:09.494793 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sgnsl_efd80df6-f7ef-4379-b160-9a38ca228667/reloader/0.log" Jan 30 14:40:09 crc kubenswrapper[5039]: I0130 14:40:09.581840 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-6n4dv_1fe909fe-e213-4165-83d5-c84a38f84047/frr-k8s-webhook-server/0.log" Jan 30 14:40:09 crc kubenswrapper[5039]: I0130 14:40:09.768583 5039 scope.go:117] "RemoveContainer" containerID="6ba7a48fc215713e4b35d302dadf32a9bf446fb0cb88a74da705a78b50d67793" Jan 30 14:40:09 crc kubenswrapper[5039]: I0130 14:40:09.776630 5039 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-controller-manager-775f575c6c-2krlm_34ada733-5dd5-4176-a550-55b719e60a27/manager/0.log" Jan 30 14:40:09 crc kubenswrapper[5039]: I0130 14:40:09.806063 5039 scope.go:117] "RemoveContainer" containerID="c7963b3b2e6687c3df67899f1a5772640bcbd9180d38f8e12ee9a8286dcafcb1" Jan 30 14:40:10 crc kubenswrapper[5039]: I0130 14:40:10.016369 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-59964d97f8-vdp6d_9615eef8-e393-477f-b76f-d8219f085358/webhook-server/0.log" Jan 30 14:40:10 crc kubenswrapper[5039]: I0130 14:40:10.078631 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-g8kqw_a2e6599e-bad5-4e41-a6ef-312131617cc8/kube-rbac-proxy/0.log" Jan 30 14:40:10 crc kubenswrapper[5039]: I0130 14:40:10.935739 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-g8kqw_a2e6599e-bad5-4e41-a6ef-312131617cc8/speaker/0.log" Jan 30 14:40:11 crc kubenswrapper[5039]: I0130 14:40:11.092331 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sgnsl_efd80df6-f7ef-4379-b160-9a38ca228667/frr/0.log" Jan 30 14:40:23 crc kubenswrapper[5039]: I0130 14:40:23.662631 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw_41d9f5fc-68a0-4b15-83ec-e6c186ac4714/util/0.log" Jan 30 14:40:23 crc kubenswrapper[5039]: I0130 14:40:23.842650 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw_41d9f5fc-68a0-4b15-83ec-e6c186ac4714/util/0.log" Jan 30 14:40:23 crc kubenswrapper[5039]: I0130 14:40:23.850213 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw_41d9f5fc-68a0-4b15-83ec-e6c186ac4714/pull/0.log" Jan 30 14:40:23 crc kubenswrapper[5039]: I0130 14:40:23.911153 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw_41d9f5fc-68a0-4b15-83ec-e6c186ac4714/pull/0.log" Jan 30 14:40:24 crc kubenswrapper[5039]: I0130 14:40:24.086897 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw_41d9f5fc-68a0-4b15-83ec-e6c186ac4714/extract/0.log" Jan 30 14:40:24 crc kubenswrapper[5039]: I0130 14:40:24.089237 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw_41d9f5fc-68a0-4b15-83ec-e6c186ac4714/util/0.log" Jan 30 14:40:24 crc kubenswrapper[5039]: I0130 14:40:24.113522 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw_41d9f5fc-68a0-4b15-83ec-e6c186ac4714/pull/0.log" Jan 30 14:40:24 crc kubenswrapper[5039]: I0130 14:40:24.248066 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px_952d4cac-58bb-4f90-a5d3-23b1504e3a65/util/0.log" Jan 30 14:40:24 crc kubenswrapper[5039]: I0130 14:40:24.409355 5039 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px_952d4cac-58bb-4f90-a5d3-23b1504e3a65/util/0.log" Jan 30 14:40:24 crc kubenswrapper[5039]: I0130 14:40:24.411065 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px_952d4cac-58bb-4f90-a5d3-23b1504e3a65/pull/0.log" Jan 30 14:40:24 crc kubenswrapper[5039]: I0130 14:40:24.428984 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px_952d4cac-58bb-4f90-a5d3-23b1504e3a65/pull/0.log" Jan 30 14:40:24 crc kubenswrapper[5039]: I0130 14:40:24.586930 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px_952d4cac-58bb-4f90-a5d3-23b1504e3a65/util/0.log" Jan 30 14:40:24 crc kubenswrapper[5039]: I0130 14:40:24.587549 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px_952d4cac-58bb-4f90-a5d3-23b1504e3a65/extract/0.log" Jan 30 14:40:24 crc kubenswrapper[5039]: I0130 14:40:24.661226 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px_952d4cac-58bb-4f90-a5d3-23b1504e3a65/pull/0.log" Jan 30 14:40:24 crc kubenswrapper[5039]: I0130 14:40:24.772944 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv_fefedf33-4c19-4945-b31f-75e19fea3dff/util/0.log" Jan 30 14:40:25 crc kubenswrapper[5039]: I0130 14:40:25.002088 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv_fefedf33-4c19-4945-b31f-75e19fea3dff/util/0.log" Jan 30 14:40:25 crc kubenswrapper[5039]: I0130 14:40:25.005394 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv_fefedf33-4c19-4945-b31f-75e19fea3dff/pull/0.log" Jan 30 14:40:25 crc kubenswrapper[5039]: I0130 14:40:25.039969 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv_fefedf33-4c19-4945-b31f-75e19fea3dff/pull/0.log" Jan 30 14:40:25 crc kubenswrapper[5039]: I0130 14:40:25.172134 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv_fefedf33-4c19-4945-b31f-75e19fea3dff/util/0.log" Jan 30 14:40:25 crc kubenswrapper[5039]: I0130 14:40:25.200657 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv_fefedf33-4c19-4945-b31f-75e19fea3dff/extract/0.log" Jan 30 14:40:25 crc kubenswrapper[5039]: I0130 14:40:25.226810 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv_fefedf33-4c19-4945-b31f-75e19fea3dff/pull/0.log" Jan 30 14:40:25 crc kubenswrapper[5039]: I0130 14:40:25.373958 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-n4bnc_abd8b28f-4df7-479c-9c89-80afd3be6ed3/extract-utilities/0.log" Jan 30 
14:40:25 crc kubenswrapper[5039]: I0130 14:40:25.537781 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-n4bnc_abd8b28f-4df7-479c-9c89-80afd3be6ed3/extract-content/0.log" Jan 30 14:40:25 crc kubenswrapper[5039]: I0130 14:40:25.566107 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-n4bnc_abd8b28f-4df7-479c-9c89-80afd3be6ed3/extract-utilities/0.log" Jan 30 14:40:25 crc kubenswrapper[5039]: I0130 14:40:25.601507 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-n4bnc_abd8b28f-4df7-479c-9c89-80afd3be6ed3/extract-content/0.log" Jan 30 14:40:25 crc kubenswrapper[5039]: I0130 14:40:25.769000 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-n4bnc_abd8b28f-4df7-479c-9c89-80afd3be6ed3/extract-utilities/0.log" Jan 30 14:40:25 crc kubenswrapper[5039]: I0130 14:40:25.770252 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-n4bnc_abd8b28f-4df7-479c-9c89-80afd3be6ed3/extract-content/0.log" Jan 30 14:40:25 crc kubenswrapper[5039]: I0130 14:40:25.978788 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dskxq_9e68432d-e4f4-4e67-94e4-7e5f89144655/extract-utilities/0.log" Jan 30 14:40:26 crc kubenswrapper[5039]: I0130 14:40:26.226925 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dskxq_9e68432d-e4f4-4e67-94e4-7e5f89144655/extract-content/0.log" Jan 30 14:40:26 crc kubenswrapper[5039]: I0130 14:40:26.268053 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dskxq_9e68432d-e4f4-4e67-94e4-7e5f89144655/extract-content/0.log" Jan 30 14:40:26 crc kubenswrapper[5039]: I0130 14:40:26.351540 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dskxq_9e68432d-e4f4-4e67-94e4-7e5f89144655/extract-utilities/0.log" Jan 30 14:40:26 crc kubenswrapper[5039]: I0130 14:40:26.514703 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dskxq_9e68432d-e4f4-4e67-94e4-7e5f89144655/extract-content/0.log" Jan 30 14:40:26 crc kubenswrapper[5039]: I0130 14:40:26.517177 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dskxq_9e68432d-e4f4-4e67-94e4-7e5f89144655/extract-utilities/0.log" Jan 30 14:40:26 crc kubenswrapper[5039]: I0130 14:40:26.547391 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-n4bnc_abd8b28f-4df7-479c-9c89-80afd3be6ed3/registry-server/0.log" Jan 30 14:40:26 crc kubenswrapper[5039]: I0130 14:40:26.747150 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-jfw2h_76c852b6-fbf0-493f-b157-06882e5f306f/marketplace-operator/0.log" Jan 30 14:40:26 crc kubenswrapper[5039]: I0130 14:40:26.960954 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-s4gcp_50a6fe8f-91d2-44d3-83c2-57f292eeaa38/extract-utilities/0.log" Jan 30 14:40:27 crc kubenswrapper[5039]: I0130 14:40:27.152589 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-s4gcp_50a6fe8f-91d2-44d3-83c2-57f292eeaa38/extract-utilities/0.log" Jan 30 
14:40:27 crc kubenswrapper[5039]: I0130 14:40:27.198829 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-s4gcp_50a6fe8f-91d2-44d3-83c2-57f292eeaa38/extract-content/0.log" Jan 30 14:40:27 crc kubenswrapper[5039]: I0130 14:40:27.201983 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-s4gcp_50a6fe8f-91d2-44d3-83c2-57f292eeaa38/extract-content/0.log" Jan 30 14:40:27 crc kubenswrapper[5039]: I0130 14:40:27.531123 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dskxq_9e68432d-e4f4-4e67-94e4-7e5f89144655/registry-server/0.log" Jan 30 14:40:27 crc kubenswrapper[5039]: I0130 14:40:27.558318 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-s4gcp_50a6fe8f-91d2-44d3-83c2-57f292eeaa38/extract-content/0.log" Jan 30 14:40:27 crc kubenswrapper[5039]: I0130 14:40:27.558346 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-s4gcp_50a6fe8f-91d2-44d3-83c2-57f292eeaa38/extract-utilities/0.log" Jan 30 14:40:27 crc kubenswrapper[5039]: I0130 14:40:27.753738 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-s4gcp_50a6fe8f-91d2-44d3-83c2-57f292eeaa38/registry-server/0.log" Jan 30 14:40:27 crc kubenswrapper[5039]: I0130 14:40:27.811339 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-szn5d_9bdd3549-b206-404b-80e0-dad7eccbea2a/extract-utilities/0.log" Jan 30 14:40:27 crc kubenswrapper[5039]: I0130 14:40:27.912189 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-szn5d_9bdd3549-b206-404b-80e0-dad7eccbea2a/extract-utilities/0.log" Jan 30 14:40:27 crc kubenswrapper[5039]: I0130 14:40:27.938584 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-szn5d_9bdd3549-b206-404b-80e0-dad7eccbea2a/extract-content/0.log" Jan 30 14:40:27 crc kubenswrapper[5039]: I0130 14:40:27.953558 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-szn5d_9bdd3549-b206-404b-80e0-dad7eccbea2a/extract-content/0.log" Jan 30 14:40:28 crc kubenswrapper[5039]: I0130 14:40:28.149989 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-szn5d_9bdd3549-b206-404b-80e0-dad7eccbea2a/extract-utilities/0.log" Jan 30 14:40:28 crc kubenswrapper[5039]: I0130 14:40:28.177813 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-szn5d_9bdd3549-b206-404b-80e0-dad7eccbea2a/extract-content/0.log" Jan 30 14:40:28 crc kubenswrapper[5039]: I0130 14:40:28.920913 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-szn5d_9bdd3549-b206-404b-80e0-dad7eccbea2a/registry-server/0.log" Jan 30 14:40:49 crc kubenswrapper[5039]: E0130 14:40:49.190811 5039 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.188:34260->38.102.83.188:34017: write tcp 38.102.83.188:34260->38.102.83.188:34017: write: broken pipe Jan 30 14:40:55 crc kubenswrapper[5039]: I0130 14:40:55.223633 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2gf2d"] Jan 30 14:40:55 crc kubenswrapper[5039]: E0130 14:40:55.224670 5039 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="30be3428-492e-4dda-a45f-76ed707ea4c2" containerName="container-00" Jan 30 14:40:55 crc kubenswrapper[5039]: I0130 14:40:55.224688 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="30be3428-492e-4dda-a45f-76ed707ea4c2" containerName="container-00" Jan 30 14:40:55 crc kubenswrapper[5039]: I0130 14:40:55.224880 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="30be3428-492e-4dda-a45f-76ed707ea4c2" containerName="container-00" Jan 30 14:40:55 crc kubenswrapper[5039]: I0130 14:40:55.227309 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2gf2d" Jan 30 14:40:55 crc kubenswrapper[5039]: I0130 14:40:55.239573 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2gf2d"] Jan 30 14:40:55 crc kubenswrapper[5039]: I0130 14:40:55.339162 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ba11ed5-1df7-48ca-9d03-87b973c6f32a-utilities\") pod \"redhat-marketplace-2gf2d\" (UID: \"4ba11ed5-1df7-48ca-9d03-87b973c6f32a\") " pod="openshift-marketplace/redhat-marketplace-2gf2d" Jan 30 14:40:55 crc kubenswrapper[5039]: I0130 14:40:55.339557 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ba11ed5-1df7-48ca-9d03-87b973c6f32a-catalog-content\") pod \"redhat-marketplace-2gf2d\" (UID: \"4ba11ed5-1df7-48ca-9d03-87b973c6f32a\") " pod="openshift-marketplace/redhat-marketplace-2gf2d" Jan 30 14:40:55 crc kubenswrapper[5039]: I0130 14:40:55.339616 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjfdf\" (UniqueName: \"kubernetes.io/projected/4ba11ed5-1df7-48ca-9d03-87b973c6f32a-kube-api-access-rjfdf\") pod \"redhat-marketplace-2gf2d\" (UID: \"4ba11ed5-1df7-48ca-9d03-87b973c6f32a\") " pod="openshift-marketplace/redhat-marketplace-2gf2d" Jan 30 14:40:55 crc kubenswrapper[5039]: I0130 14:40:55.441899 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ba11ed5-1df7-48ca-9d03-87b973c6f32a-catalog-content\") pod \"redhat-marketplace-2gf2d\" (UID: \"4ba11ed5-1df7-48ca-9d03-87b973c6f32a\") " pod="openshift-marketplace/redhat-marketplace-2gf2d" Jan 30 14:40:55 crc kubenswrapper[5039]: I0130 14:40:55.441952 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjfdf\" (UniqueName: \"kubernetes.io/projected/4ba11ed5-1df7-48ca-9d03-87b973c6f32a-kube-api-access-rjfdf\") pod \"redhat-marketplace-2gf2d\" (UID: \"4ba11ed5-1df7-48ca-9d03-87b973c6f32a\") " pod="openshift-marketplace/redhat-marketplace-2gf2d" Jan 30 14:40:55 crc kubenswrapper[5039]: I0130 14:40:55.442073 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ba11ed5-1df7-48ca-9d03-87b973c6f32a-utilities\") pod \"redhat-marketplace-2gf2d\" (UID: \"4ba11ed5-1df7-48ca-9d03-87b973c6f32a\") " pod="openshift-marketplace/redhat-marketplace-2gf2d" Jan 30 14:40:55 crc kubenswrapper[5039]: I0130 14:40:55.442660 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ba11ed5-1df7-48ca-9d03-87b973c6f32a-utilities\") pod 
\"redhat-marketplace-2gf2d\" (UID: \"4ba11ed5-1df7-48ca-9d03-87b973c6f32a\") " pod="openshift-marketplace/redhat-marketplace-2gf2d" Jan 30 14:40:55 crc kubenswrapper[5039]: I0130 14:40:55.443237 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ba11ed5-1df7-48ca-9d03-87b973c6f32a-catalog-content\") pod \"redhat-marketplace-2gf2d\" (UID: \"4ba11ed5-1df7-48ca-9d03-87b973c6f32a\") " pod="openshift-marketplace/redhat-marketplace-2gf2d" Jan 30 14:40:55 crc kubenswrapper[5039]: I0130 14:40:55.472341 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjfdf\" (UniqueName: \"kubernetes.io/projected/4ba11ed5-1df7-48ca-9d03-87b973c6f32a-kube-api-access-rjfdf\") pod \"redhat-marketplace-2gf2d\" (UID: \"4ba11ed5-1df7-48ca-9d03-87b973c6f32a\") " pod="openshift-marketplace/redhat-marketplace-2gf2d" Jan 30 14:40:55 crc kubenswrapper[5039]: I0130 14:40:55.555702 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2gf2d" Jan 30 14:40:56 crc kubenswrapper[5039]: I0130 14:40:56.042312 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2gf2d"] Jan 30 14:40:56 crc kubenswrapper[5039]: I0130 14:40:56.977081 5039 generic.go:334] "Generic (PLEG): container finished" podID="4ba11ed5-1df7-48ca-9d03-87b973c6f32a" containerID="25d3764d36ce058db9238d418d7d2b69eb29040333c4ff315648cd3f69f074b8" exitCode=0 Jan 30 14:40:56 crc kubenswrapper[5039]: I0130 14:40:56.977127 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2gf2d" event={"ID":"4ba11ed5-1df7-48ca-9d03-87b973c6f32a","Type":"ContainerDied","Data":"25d3764d36ce058db9238d418d7d2b69eb29040333c4ff315648cd3f69f074b8"} Jan 30 14:40:56 crc kubenswrapper[5039]: I0130 14:40:56.977157 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2gf2d" event={"ID":"4ba11ed5-1df7-48ca-9d03-87b973c6f32a","Type":"ContainerStarted","Data":"22746cc401c1c7b9639ee679571037c34d1b895f5fd5bc5f322571380bbd38f2"} Jan 30 14:40:57 crc kubenswrapper[5039]: I0130 14:40:57.986950 5039 generic.go:334] "Generic (PLEG): container finished" podID="4ba11ed5-1df7-48ca-9d03-87b973c6f32a" containerID="d26da4228c5c6195846e14c4f269c716b0b921440169a4d2fcbac10c28f63f2e" exitCode=0 Jan 30 14:40:57 crc kubenswrapper[5039]: I0130 14:40:57.987044 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2gf2d" event={"ID":"4ba11ed5-1df7-48ca-9d03-87b973c6f32a","Type":"ContainerDied","Data":"d26da4228c5c6195846e14c4f269c716b0b921440169a4d2fcbac10c28f63f2e"} Jan 30 14:40:58 crc kubenswrapper[5039]: I0130 14:40:58.056581 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-lmw95"] Jan 30 14:40:58 crc kubenswrapper[5039]: I0130 14:40:58.075091 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-lmw95"] Jan 30 14:40:58 crc kubenswrapper[5039]: I0130 14:40:58.090779 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-6c90-account-create-update-rcrpm"] Jan 30 14:40:58 crc kubenswrapper[5039]: I0130 14:40:58.106180 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6" path="/var/lib/kubelet/pods/b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6/volumes" Jan 30 14:40:58 crc kubenswrapper[5039]: I0130 
14:40:58.107083 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-6c90-account-create-update-rcrpm"] Jan 30 14:40:58 crc kubenswrapper[5039]: I0130 14:40:58.997097 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2gf2d" event={"ID":"4ba11ed5-1df7-48ca-9d03-87b973c6f32a","Type":"ContainerStarted","Data":"c889ec197970a6647810a464841f2773b7679341b17b6b6fb9fabaa1f885729b"} Jan 30 14:40:59 crc kubenswrapper[5039]: I0130 14:40:59.023748 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2gf2d" podStartSLOduration=2.608104554 podStartE2EDuration="4.023722502s" podCreationTimestamp="2026-01-30 14:40:55 +0000 UTC" firstStartedPulling="2026-01-30 14:40:56.979186573 +0000 UTC m=+5821.639867800" lastFinishedPulling="2026-01-30 14:40:58.394804521 +0000 UTC m=+5823.055485748" observedRunningTime="2026-01-30 14:40:59.020000761 +0000 UTC m=+5823.680681988" watchObservedRunningTime="2026-01-30 14:40:59.023722502 +0000 UTC m=+5823.684403729" Jan 30 14:41:00 crc kubenswrapper[5039]: I0130 14:41:00.103407 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="186c0ea5-7e75-40a9-8304-487243cd940f" path="/var/lib/kubelet/pods/186c0ea5-7e75-40a9-8304-487243cd940f/volumes" Jan 30 14:41:05 crc kubenswrapper[5039]: I0130 14:41:05.041433 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-qshch"] Jan 30 14:41:05 crc kubenswrapper[5039]: I0130 14:41:05.055501 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-qshch"] Jan 30 14:41:05 crc kubenswrapper[5039]: I0130 14:41:05.556986 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2gf2d" Jan 30 14:41:05 crc kubenswrapper[5039]: I0130 14:41:05.557355 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2gf2d" Jan 30 14:41:05 crc kubenswrapper[5039]: I0130 14:41:05.605493 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2gf2d" Jan 30 14:41:06 crc kubenswrapper[5039]: I0130 14:41:06.103702 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec" path="/var/lib/kubelet/pods/dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec/volumes" Jan 30 14:41:06 crc kubenswrapper[5039]: I0130 14:41:06.104273 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2gf2d" Jan 30 14:41:06 crc kubenswrapper[5039]: I0130 14:41:06.158714 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2gf2d"] Jan 30 14:41:08 crc kubenswrapper[5039]: I0130 14:41:08.078825 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2gf2d" podUID="4ba11ed5-1df7-48ca-9d03-87b973c6f32a" containerName="registry-server" containerID="cri-o://c889ec197970a6647810a464841f2773b7679341b17b6b6fb9fabaa1f885729b" gracePeriod=2 Jan 30 14:41:09 crc kubenswrapper[5039]: I0130 14:41:09.055288 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2gf2d" Jan 30 14:41:09 crc kubenswrapper[5039]: I0130 14:41:09.100212 5039 generic.go:334] "Generic (PLEG): container finished" podID="4ba11ed5-1df7-48ca-9d03-87b973c6f32a" containerID="c889ec197970a6647810a464841f2773b7679341b17b6b6fb9fabaa1f885729b" exitCode=0 Jan 30 14:41:09 crc kubenswrapper[5039]: I0130 14:41:09.100349 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2gf2d" event={"ID":"4ba11ed5-1df7-48ca-9d03-87b973c6f32a","Type":"ContainerDied","Data":"c889ec197970a6647810a464841f2773b7679341b17b6b6fb9fabaa1f885729b"} Jan 30 14:41:09 crc kubenswrapper[5039]: I0130 14:41:09.101527 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2gf2d" event={"ID":"4ba11ed5-1df7-48ca-9d03-87b973c6f32a","Type":"ContainerDied","Data":"22746cc401c1c7b9639ee679571037c34d1b895f5fd5bc5f322571380bbd38f2"} Jan 30 14:41:09 crc kubenswrapper[5039]: I0130 14:41:09.101604 5039 scope.go:117] "RemoveContainer" containerID="c889ec197970a6647810a464841f2773b7679341b17b6b6fb9fabaa1f885729b" Jan 30 14:41:09 crc kubenswrapper[5039]: I0130 14:41:09.100640 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2gf2d" Jan 30 14:41:09 crc kubenswrapper[5039]: I0130 14:41:09.132245 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ba11ed5-1df7-48ca-9d03-87b973c6f32a-catalog-content\") pod \"4ba11ed5-1df7-48ca-9d03-87b973c6f32a\" (UID: \"4ba11ed5-1df7-48ca-9d03-87b973c6f32a\") " Jan 30 14:41:09 crc kubenswrapper[5039]: I0130 14:41:09.132377 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ba11ed5-1df7-48ca-9d03-87b973c6f32a-utilities\") pod \"4ba11ed5-1df7-48ca-9d03-87b973c6f32a\" (UID: \"4ba11ed5-1df7-48ca-9d03-87b973c6f32a\") " Jan 30 14:41:09 crc kubenswrapper[5039]: I0130 14:41:09.132418 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjfdf\" (UniqueName: \"kubernetes.io/projected/4ba11ed5-1df7-48ca-9d03-87b973c6f32a-kube-api-access-rjfdf\") pod \"4ba11ed5-1df7-48ca-9d03-87b973c6f32a\" (UID: \"4ba11ed5-1df7-48ca-9d03-87b973c6f32a\") " Jan 30 14:41:09 crc kubenswrapper[5039]: I0130 14:41:09.134989 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ba11ed5-1df7-48ca-9d03-87b973c6f32a-utilities" (OuterVolumeSpecName: "utilities") pod "4ba11ed5-1df7-48ca-9d03-87b973c6f32a" (UID: "4ba11ed5-1df7-48ca-9d03-87b973c6f32a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:41:09 crc kubenswrapper[5039]: I0130 14:41:09.142780 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ba11ed5-1df7-48ca-9d03-87b973c6f32a-kube-api-access-rjfdf" (OuterVolumeSpecName: "kube-api-access-rjfdf") pod "4ba11ed5-1df7-48ca-9d03-87b973c6f32a" (UID: "4ba11ed5-1df7-48ca-9d03-87b973c6f32a"). InnerVolumeSpecName "kube-api-access-rjfdf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:41:09 crc kubenswrapper[5039]: I0130 14:41:09.146909 5039 scope.go:117] "RemoveContainer" containerID="d26da4228c5c6195846e14c4f269c716b0b921440169a4d2fcbac10c28f63f2e" Jan 30 14:41:09 crc kubenswrapper[5039]: I0130 14:41:09.157148 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ba11ed5-1df7-48ca-9d03-87b973c6f32a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4ba11ed5-1df7-48ca-9d03-87b973c6f32a" (UID: "4ba11ed5-1df7-48ca-9d03-87b973c6f32a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:41:09 crc kubenswrapper[5039]: I0130 14:41:09.183252 5039 scope.go:117] "RemoveContainer" containerID="25d3764d36ce058db9238d418d7d2b69eb29040333c4ff315648cd3f69f074b8" Jan 30 14:41:09 crc kubenswrapper[5039]: I0130 14:41:09.215839 5039 scope.go:117] "RemoveContainer" containerID="c889ec197970a6647810a464841f2773b7679341b17b6b6fb9fabaa1f885729b" Jan 30 14:41:09 crc kubenswrapper[5039]: E0130 14:41:09.216453 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c889ec197970a6647810a464841f2773b7679341b17b6b6fb9fabaa1f885729b\": container with ID starting with c889ec197970a6647810a464841f2773b7679341b17b6b6fb9fabaa1f885729b not found: ID does not exist" containerID="c889ec197970a6647810a464841f2773b7679341b17b6b6fb9fabaa1f885729b" Jan 30 14:41:09 crc kubenswrapper[5039]: I0130 14:41:09.216501 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c889ec197970a6647810a464841f2773b7679341b17b6b6fb9fabaa1f885729b"} err="failed to get container status \"c889ec197970a6647810a464841f2773b7679341b17b6b6fb9fabaa1f885729b\": rpc error: code = NotFound desc = could not find container \"c889ec197970a6647810a464841f2773b7679341b17b6b6fb9fabaa1f885729b\": container with ID starting with c889ec197970a6647810a464841f2773b7679341b17b6b6fb9fabaa1f885729b not found: ID does not exist" Jan 30 14:41:09 crc kubenswrapper[5039]: I0130 14:41:09.216527 5039 scope.go:117] "RemoveContainer" containerID="d26da4228c5c6195846e14c4f269c716b0b921440169a4d2fcbac10c28f63f2e" Jan 30 14:41:09 crc kubenswrapper[5039]: E0130 14:41:09.217004 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d26da4228c5c6195846e14c4f269c716b0b921440169a4d2fcbac10c28f63f2e\": container with ID starting with d26da4228c5c6195846e14c4f269c716b0b921440169a4d2fcbac10c28f63f2e not found: ID does not exist" containerID="d26da4228c5c6195846e14c4f269c716b0b921440169a4d2fcbac10c28f63f2e" Jan 30 14:41:09 crc kubenswrapper[5039]: I0130 14:41:09.217059 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d26da4228c5c6195846e14c4f269c716b0b921440169a4d2fcbac10c28f63f2e"} err="failed to get container status \"d26da4228c5c6195846e14c4f269c716b0b921440169a4d2fcbac10c28f63f2e\": rpc error: code = NotFound desc = could not find container \"d26da4228c5c6195846e14c4f269c716b0b921440169a4d2fcbac10c28f63f2e\": container with ID starting with d26da4228c5c6195846e14c4f269c716b0b921440169a4d2fcbac10c28f63f2e not found: ID does not exist" Jan 30 14:41:09 crc kubenswrapper[5039]: I0130 14:41:09.217083 5039 scope.go:117] "RemoveContainer" containerID="25d3764d36ce058db9238d418d7d2b69eb29040333c4ff315648cd3f69f074b8" Jan 30 14:41:09 crc kubenswrapper[5039]: 
E0130 14:41:09.217458 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25d3764d36ce058db9238d418d7d2b69eb29040333c4ff315648cd3f69f074b8\": container with ID starting with 25d3764d36ce058db9238d418d7d2b69eb29040333c4ff315648cd3f69f074b8 not found: ID does not exist" containerID="25d3764d36ce058db9238d418d7d2b69eb29040333c4ff315648cd3f69f074b8" Jan 30 14:41:09 crc kubenswrapper[5039]: I0130 14:41:09.217482 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25d3764d36ce058db9238d418d7d2b69eb29040333c4ff315648cd3f69f074b8"} err="failed to get container status \"25d3764d36ce058db9238d418d7d2b69eb29040333c4ff315648cd3f69f074b8\": rpc error: code = NotFound desc = could not find container \"25d3764d36ce058db9238d418d7d2b69eb29040333c4ff315648cd3f69f074b8\": container with ID starting with 25d3764d36ce058db9238d418d7d2b69eb29040333c4ff315648cd3f69f074b8 not found: ID does not exist" Jan 30 14:41:09 crc kubenswrapper[5039]: I0130 14:41:09.235028 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ba11ed5-1df7-48ca-9d03-87b973c6f32a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 14:41:09 crc kubenswrapper[5039]: I0130 14:41:09.235069 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ba11ed5-1df7-48ca-9d03-87b973c6f32a-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 14:41:09 crc kubenswrapper[5039]: I0130 14:41:09.235078 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjfdf\" (UniqueName: \"kubernetes.io/projected/4ba11ed5-1df7-48ca-9d03-87b973c6f32a-kube-api-access-rjfdf\") on node \"crc\" DevicePath \"\"" Jan 30 14:41:09 crc kubenswrapper[5039]: I0130 14:41:09.447074 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2gf2d"] Jan 30 14:41:09 crc kubenswrapper[5039]: I0130 14:41:09.455889 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2gf2d"] Jan 30 14:41:09 crc kubenswrapper[5039]: I0130 14:41:09.882312 5039 scope.go:117] "RemoveContainer" containerID="7b84dcdf5fbb8eb09f51094df81a56c5323af98da35d34c6575b7ddac424cbc8" Jan 30 14:41:09 crc kubenswrapper[5039]: I0130 14:41:09.909267 5039 scope.go:117] "RemoveContainer" containerID="1f6d1eee9c278ff894f6e696f772fd3c9336d635aefc396e499299a72eea423b" Jan 30 14:41:09 crc kubenswrapper[5039]: I0130 14:41:09.950745 5039 scope.go:117] "RemoveContainer" containerID="53538287f79b4734c8a51217b374a1cc47068403db5da97d6e71ccf3200f3c50" Jan 30 14:41:10 crc kubenswrapper[5039]: I0130 14:41:10.106279 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ba11ed5-1df7-48ca-9d03-87b973c6f32a" path="/var/lib/kubelet/pods/4ba11ed5-1df7-48ca-9d03-87b973c6f32a/volumes" Jan 30 14:41:18 crc kubenswrapper[5039]: I0130 14:41:18.041581 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-rbkmw"] Jan 30 14:41:18 crc kubenswrapper[5039]: I0130 14:41:18.051516 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-rbkmw"] Jan 30 14:41:18 crc kubenswrapper[5039]: I0130 14:41:18.104583 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7902ea8d-9313-4ce7-8813-9b758308b6e5" path="/var/lib/kubelet/pods/7902ea8d-9313-4ce7-8813-9b758308b6e5/volumes" Jan 30 14:41:26 crc 
kubenswrapper[5039]: I0130 14:41:26.989208 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2sd2r"] Jan 30 14:41:26 crc kubenswrapper[5039]: E0130 14:41:26.990249 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ba11ed5-1df7-48ca-9d03-87b973c6f32a" containerName="registry-server" Jan 30 14:41:26 crc kubenswrapper[5039]: I0130 14:41:26.990266 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ba11ed5-1df7-48ca-9d03-87b973c6f32a" containerName="registry-server" Jan 30 14:41:26 crc kubenswrapper[5039]: E0130 14:41:26.990293 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ba11ed5-1df7-48ca-9d03-87b973c6f32a" containerName="extract-content" Jan 30 14:41:26 crc kubenswrapper[5039]: I0130 14:41:26.990301 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ba11ed5-1df7-48ca-9d03-87b973c6f32a" containerName="extract-content" Jan 30 14:41:26 crc kubenswrapper[5039]: E0130 14:41:26.990322 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ba11ed5-1df7-48ca-9d03-87b973c6f32a" containerName="extract-utilities" Jan 30 14:41:26 crc kubenswrapper[5039]: I0130 14:41:26.990330 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ba11ed5-1df7-48ca-9d03-87b973c6f32a" containerName="extract-utilities" Jan 30 14:41:26 crc kubenswrapper[5039]: I0130 14:41:26.990531 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ba11ed5-1df7-48ca-9d03-87b973c6f32a" containerName="registry-server" Jan 30 14:41:26 crc kubenswrapper[5039]: I0130 14:41:26.992026 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2sd2r" Jan 30 14:41:27 crc kubenswrapper[5039]: I0130 14:41:27.000382 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2sd2r"] Jan 30 14:41:27 crc kubenswrapper[5039]: I0130 14:41:27.017502 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97796c73-e813-4e98-9b09-d4165fc8cad8-utilities\") pod \"redhat-operators-2sd2r\" (UID: \"97796c73-e813-4e98-9b09-d4165fc8cad8\") " pod="openshift-marketplace/redhat-operators-2sd2r" Jan 30 14:41:27 crc kubenswrapper[5039]: I0130 14:41:27.017611 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkt4r\" (UniqueName: \"kubernetes.io/projected/97796c73-e813-4e98-9b09-d4165fc8cad8-kube-api-access-vkt4r\") pod \"redhat-operators-2sd2r\" (UID: \"97796c73-e813-4e98-9b09-d4165fc8cad8\") " pod="openshift-marketplace/redhat-operators-2sd2r" Jan 30 14:41:27 crc kubenswrapper[5039]: I0130 14:41:27.017737 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97796c73-e813-4e98-9b09-d4165fc8cad8-catalog-content\") pod \"redhat-operators-2sd2r\" (UID: \"97796c73-e813-4e98-9b09-d4165fc8cad8\") " pod="openshift-marketplace/redhat-operators-2sd2r" Jan 30 14:41:27 crc kubenswrapper[5039]: I0130 14:41:27.119895 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkt4r\" (UniqueName: \"kubernetes.io/projected/97796c73-e813-4e98-9b09-d4165fc8cad8-kube-api-access-vkt4r\") pod \"redhat-operators-2sd2r\" (UID: \"97796c73-e813-4e98-9b09-d4165fc8cad8\") " pod="openshift-marketplace/redhat-operators-2sd2r" Jan 30 14:41:27 
crc kubenswrapper[5039]: I0130 14:41:27.120039 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97796c73-e813-4e98-9b09-d4165fc8cad8-catalog-content\") pod \"redhat-operators-2sd2r\" (UID: \"97796c73-e813-4e98-9b09-d4165fc8cad8\") " pod="openshift-marketplace/redhat-operators-2sd2r" Jan 30 14:41:27 crc kubenswrapper[5039]: I0130 14:41:27.120111 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97796c73-e813-4e98-9b09-d4165fc8cad8-utilities\") pod \"redhat-operators-2sd2r\" (UID: \"97796c73-e813-4e98-9b09-d4165fc8cad8\") " pod="openshift-marketplace/redhat-operators-2sd2r" Jan 30 14:41:27 crc kubenswrapper[5039]: I0130 14:41:27.120700 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97796c73-e813-4e98-9b09-d4165fc8cad8-utilities\") pod \"redhat-operators-2sd2r\" (UID: \"97796c73-e813-4e98-9b09-d4165fc8cad8\") " pod="openshift-marketplace/redhat-operators-2sd2r" Jan 30 14:41:27 crc kubenswrapper[5039]: I0130 14:41:27.120838 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97796c73-e813-4e98-9b09-d4165fc8cad8-catalog-content\") pod \"redhat-operators-2sd2r\" (UID: \"97796c73-e813-4e98-9b09-d4165fc8cad8\") " pod="openshift-marketplace/redhat-operators-2sd2r" Jan 30 14:41:27 crc kubenswrapper[5039]: I0130 14:41:27.142612 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkt4r\" (UniqueName: \"kubernetes.io/projected/97796c73-e813-4e98-9b09-d4165fc8cad8-kube-api-access-vkt4r\") pod \"redhat-operators-2sd2r\" (UID: \"97796c73-e813-4e98-9b09-d4165fc8cad8\") " pod="openshift-marketplace/redhat-operators-2sd2r" Jan 30 14:41:27 crc kubenswrapper[5039]: I0130 14:41:27.311948 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2sd2r" Jan 30 14:41:27 crc kubenswrapper[5039]: I0130 14:41:27.814001 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2sd2r"] Jan 30 14:41:28 crc kubenswrapper[5039]: I0130 14:41:28.266814 5039 generic.go:334] "Generic (PLEG): container finished" podID="97796c73-e813-4e98-9b09-d4165fc8cad8" containerID="d1184142ace6d48eb8d6f36d59d1a35a761bc098ad86474ce10f394d146e1674" exitCode=0 Jan 30 14:41:28 crc kubenswrapper[5039]: I0130 14:41:28.266910 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2sd2r" event={"ID":"97796c73-e813-4e98-9b09-d4165fc8cad8","Type":"ContainerDied","Data":"d1184142ace6d48eb8d6f36d59d1a35a761bc098ad86474ce10f394d146e1674"} Jan 30 14:41:28 crc kubenswrapper[5039]: I0130 14:41:28.267157 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2sd2r" event={"ID":"97796c73-e813-4e98-9b09-d4165fc8cad8","Type":"ContainerStarted","Data":"062f3669d93fa14898011b95a78b7045ce73ddf1ba1da076a273bedce4e48cef"} Jan 30 14:41:29 crc kubenswrapper[5039]: I0130 14:41:29.286133 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2sd2r" event={"ID":"97796c73-e813-4e98-9b09-d4165fc8cad8","Type":"ContainerStarted","Data":"238ebc50e61d4b24d94aed1af85943709620f754b1d76b7afddba1bfe61cda35"} Jan 30 14:41:30 crc kubenswrapper[5039]: I0130 14:41:30.296496 5039 generic.go:334] "Generic (PLEG): container finished" podID="97796c73-e813-4e98-9b09-d4165fc8cad8" containerID="238ebc50e61d4b24d94aed1af85943709620f754b1d76b7afddba1bfe61cda35" exitCode=0 Jan 30 14:41:30 crc kubenswrapper[5039]: I0130 14:41:30.296546 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2sd2r" event={"ID":"97796c73-e813-4e98-9b09-d4165fc8cad8","Type":"ContainerDied","Data":"238ebc50e61d4b24d94aed1af85943709620f754b1d76b7afddba1bfe61cda35"} Jan 30 14:41:31 crc kubenswrapper[5039]: I0130 14:41:31.306216 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2sd2r" event={"ID":"97796c73-e813-4e98-9b09-d4165fc8cad8","Type":"ContainerStarted","Data":"8b15f7ce7a1b08093bf6aca91ec1a0087827b9212d360833889ffbe17971f9d9"} Jan 30 14:41:31 crc kubenswrapper[5039]: I0130 14:41:31.328856 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2sd2r" podStartSLOduration=2.90767398 podStartE2EDuration="5.328838294s" podCreationTimestamp="2026-01-30 14:41:26 +0000 UTC" firstStartedPulling="2026-01-30 14:41:28.268967524 +0000 UTC m=+5852.929648751" lastFinishedPulling="2026-01-30 14:41:30.690131838 +0000 UTC m=+5855.350813065" observedRunningTime="2026-01-30 14:41:31.324694982 +0000 UTC m=+5855.985376219" watchObservedRunningTime="2026-01-30 14:41:31.328838294 +0000 UTC m=+5855.989519521" Jan 30 14:41:37 crc kubenswrapper[5039]: I0130 14:41:37.312994 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2sd2r" Jan 30 14:41:37 crc kubenswrapper[5039]: I0130 14:41:37.313764 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2sd2r" Jan 30 14:41:37 crc kubenswrapper[5039]: I0130 14:41:37.382219 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2sd2r" Jan 
30 14:41:37 crc kubenswrapper[5039]: I0130 14:41:37.534462 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2sd2r" Jan 30 14:41:37 crc kubenswrapper[5039]: I0130 14:41:37.648150 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2sd2r"] Jan 30 14:41:39 crc kubenswrapper[5039]: I0130 14:41:39.367265 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2sd2r" podUID="97796c73-e813-4e98-9b09-d4165fc8cad8" containerName="registry-server" containerID="cri-o://8b15f7ce7a1b08093bf6aca91ec1a0087827b9212d360833889ffbe17971f9d9" gracePeriod=2 Jan 30 14:41:40 crc kubenswrapper[5039]: I0130 14:41:40.377251 5039 generic.go:334] "Generic (PLEG): container finished" podID="97796c73-e813-4e98-9b09-d4165fc8cad8" containerID="8b15f7ce7a1b08093bf6aca91ec1a0087827b9212d360833889ffbe17971f9d9" exitCode=0 Jan 30 14:41:40 crc kubenswrapper[5039]: I0130 14:41:40.377353 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2sd2r" event={"ID":"97796c73-e813-4e98-9b09-d4165fc8cad8","Type":"ContainerDied","Data":"8b15f7ce7a1b08093bf6aca91ec1a0087827b9212d360833889ffbe17971f9d9"} Jan 30 14:41:40 crc kubenswrapper[5039]: I0130 14:41:40.943300 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2sd2r" Jan 30 14:41:41 crc kubenswrapper[5039]: I0130 14:41:41.074027 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97796c73-e813-4e98-9b09-d4165fc8cad8-utilities\") pod \"97796c73-e813-4e98-9b09-d4165fc8cad8\" (UID: \"97796c73-e813-4e98-9b09-d4165fc8cad8\") " Jan 30 14:41:41 crc kubenswrapper[5039]: I0130 14:41:41.074138 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97796c73-e813-4e98-9b09-d4165fc8cad8-catalog-content\") pod \"97796c73-e813-4e98-9b09-d4165fc8cad8\" (UID: \"97796c73-e813-4e98-9b09-d4165fc8cad8\") " Jan 30 14:41:41 crc kubenswrapper[5039]: I0130 14:41:41.074229 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkt4r\" (UniqueName: \"kubernetes.io/projected/97796c73-e813-4e98-9b09-d4165fc8cad8-kube-api-access-vkt4r\") pod \"97796c73-e813-4e98-9b09-d4165fc8cad8\" (UID: \"97796c73-e813-4e98-9b09-d4165fc8cad8\") " Jan 30 14:41:41 crc kubenswrapper[5039]: I0130 14:41:41.074993 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97796c73-e813-4e98-9b09-d4165fc8cad8-utilities" (OuterVolumeSpecName: "utilities") pod "97796c73-e813-4e98-9b09-d4165fc8cad8" (UID: "97796c73-e813-4e98-9b09-d4165fc8cad8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:41:41 crc kubenswrapper[5039]: I0130 14:41:41.078904 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97796c73-e813-4e98-9b09-d4165fc8cad8-kube-api-access-vkt4r" (OuterVolumeSpecName: "kube-api-access-vkt4r") pod "97796c73-e813-4e98-9b09-d4165fc8cad8" (UID: "97796c73-e813-4e98-9b09-d4165fc8cad8"). InnerVolumeSpecName "kube-api-access-vkt4r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:41:41 crc kubenswrapper[5039]: I0130 14:41:41.176211 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkt4r\" (UniqueName: \"kubernetes.io/projected/97796c73-e813-4e98-9b09-d4165fc8cad8-kube-api-access-vkt4r\") on node \"crc\" DevicePath \"\"" Jan 30 14:41:41 crc kubenswrapper[5039]: I0130 14:41:41.176255 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97796c73-e813-4e98-9b09-d4165fc8cad8-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 14:41:41 crc kubenswrapper[5039]: I0130 14:41:41.198880 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97796c73-e813-4e98-9b09-d4165fc8cad8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "97796c73-e813-4e98-9b09-d4165fc8cad8" (UID: "97796c73-e813-4e98-9b09-d4165fc8cad8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:41:41 crc kubenswrapper[5039]: I0130 14:41:41.277746 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97796c73-e813-4e98-9b09-d4165fc8cad8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 14:41:41 crc kubenswrapper[5039]: I0130 14:41:41.389567 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2sd2r" event={"ID":"97796c73-e813-4e98-9b09-d4165fc8cad8","Type":"ContainerDied","Data":"062f3669d93fa14898011b95a78b7045ce73ddf1ba1da076a273bedce4e48cef"} Jan 30 14:41:41 crc kubenswrapper[5039]: I0130 14:41:41.389622 5039 scope.go:117] "RemoveContainer" containerID="8b15f7ce7a1b08093bf6aca91ec1a0087827b9212d360833889ffbe17971f9d9" Jan 30 14:41:41 crc kubenswrapper[5039]: I0130 14:41:41.389750 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2sd2r" Jan 30 14:41:41 crc kubenswrapper[5039]: I0130 14:41:41.416434 5039 scope.go:117] "RemoveContainer" containerID="238ebc50e61d4b24d94aed1af85943709620f754b1d76b7afddba1bfe61cda35" Jan 30 14:41:41 crc kubenswrapper[5039]: I0130 14:41:41.473365 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2sd2r"] Jan 30 14:41:41 crc kubenswrapper[5039]: I0130 14:41:41.482381 5039 scope.go:117] "RemoveContainer" containerID="d1184142ace6d48eb8d6f36d59d1a35a761bc098ad86474ce10f394d146e1674" Jan 30 14:41:41 crc kubenswrapper[5039]: I0130 14:41:41.485862 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2sd2r"] Jan 30 14:41:42 crc kubenswrapper[5039]: I0130 14:41:42.106079 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97796c73-e813-4e98-9b09-d4165fc8cad8" path="/var/lib/kubelet/pods/97796c73-e813-4e98-9b09-d4165fc8cad8/volumes" Jan 30 14:42:10 crc kubenswrapper[5039]: I0130 14:42:10.054163 5039 scope.go:117] "RemoveContainer" containerID="c5a6f003da5b64bc202ed5fc2f77d8577435c82d698e50cf4d55831de9d7d517" Jan 30 14:42:15 crc kubenswrapper[5039]: I0130 14:42:15.677918 5039 generic.go:334] "Generic (PLEG): container finished" podID="247caddf-72ba-458a-ad59-05b3ecd3c493" containerID="787b3b5969b21a01ac8fc638d5bb3721916a1423bc56577ab8da22e3814b0f5b" exitCode=0 Jan 30 14:42:15 crc kubenswrapper[5039]: I0130 14:42:15.678477 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bm2kn/must-gather-2252c" event={"ID":"247caddf-72ba-458a-ad59-05b3ecd3c493","Type":"ContainerDied","Data":"787b3b5969b21a01ac8fc638d5bb3721916a1423bc56577ab8da22e3814b0f5b"} Jan 30 14:42:15 crc kubenswrapper[5039]: I0130 14:42:15.679169 5039 scope.go:117] "RemoveContainer" containerID="787b3b5969b21a01ac8fc638d5bb3721916a1423bc56577ab8da22e3814b0f5b" Jan 30 14:42:15 crc kubenswrapper[5039]: I0130 14:42:15.866081 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-bm2kn_must-gather-2252c_247caddf-72ba-458a-ad59-05b3ecd3c493/gather/0.log" Jan 30 14:42:23 crc kubenswrapper[5039]: I0130 14:42:23.535567 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-bm2kn/must-gather-2252c"] Jan 30 14:42:23 crc kubenswrapper[5039]: I0130 14:42:23.536458 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-bm2kn/must-gather-2252c" podUID="247caddf-72ba-458a-ad59-05b3ecd3c493" containerName="copy" containerID="cri-o://5d3062e41a30bf7cb39ba417327ee36dcd6828b297e195b0abca77755b30d88a" gracePeriod=2 Jan 30 14:42:23 crc kubenswrapper[5039]: I0130 14:42:23.548199 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-bm2kn/must-gather-2252c"] Jan 30 14:42:23 crc kubenswrapper[5039]: I0130 14:42:23.754321 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-bm2kn_must-gather-2252c_247caddf-72ba-458a-ad59-05b3ecd3c493/copy/0.log" Jan 30 14:42:23 crc kubenswrapper[5039]: I0130 14:42:23.754771 5039 generic.go:334] "Generic (PLEG): container finished" podID="247caddf-72ba-458a-ad59-05b3ecd3c493" containerID="5d3062e41a30bf7cb39ba417327ee36dcd6828b297e195b0abca77755b30d88a" exitCode=143 Jan 30 14:42:23 crc kubenswrapper[5039]: I0130 14:42:23.974841 5039 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-must-gather-bm2kn_must-gather-2252c_247caddf-72ba-458a-ad59-05b3ecd3c493/copy/0.log" Jan 30 14:42:23 crc kubenswrapper[5039]: I0130 14:42:23.975209 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bm2kn/must-gather-2252c" Jan 30 14:42:24 crc kubenswrapper[5039]: I0130 14:42:24.042202 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/247caddf-72ba-458a-ad59-05b3ecd3c493-must-gather-output\") pod \"247caddf-72ba-458a-ad59-05b3ecd3c493\" (UID: \"247caddf-72ba-458a-ad59-05b3ecd3c493\") " Jan 30 14:42:24 crc kubenswrapper[5039]: I0130 14:42:24.042487 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mn2nm\" (UniqueName: \"kubernetes.io/projected/247caddf-72ba-458a-ad59-05b3ecd3c493-kube-api-access-mn2nm\") pod \"247caddf-72ba-458a-ad59-05b3ecd3c493\" (UID: \"247caddf-72ba-458a-ad59-05b3ecd3c493\") " Jan 30 14:42:24 crc kubenswrapper[5039]: I0130 14:42:24.050032 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/247caddf-72ba-458a-ad59-05b3ecd3c493-kube-api-access-mn2nm" (OuterVolumeSpecName: "kube-api-access-mn2nm") pod "247caddf-72ba-458a-ad59-05b3ecd3c493" (UID: "247caddf-72ba-458a-ad59-05b3ecd3c493"). InnerVolumeSpecName "kube-api-access-mn2nm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:42:24 crc kubenswrapper[5039]: I0130 14:42:24.144302 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mn2nm\" (UniqueName: \"kubernetes.io/projected/247caddf-72ba-458a-ad59-05b3ecd3c493-kube-api-access-mn2nm\") on node \"crc\" DevicePath \"\"" Jan 30 14:42:24 crc kubenswrapper[5039]: I0130 14:42:24.211674 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/247caddf-72ba-458a-ad59-05b3ecd3c493-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "247caddf-72ba-458a-ad59-05b3ecd3c493" (UID: "247caddf-72ba-458a-ad59-05b3ecd3c493"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:42:24 crc kubenswrapper[5039]: I0130 14:42:24.249346 5039 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/247caddf-72ba-458a-ad59-05b3ecd3c493-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 30 14:42:24 crc kubenswrapper[5039]: I0130 14:42:24.766153 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-bm2kn_must-gather-2252c_247caddf-72ba-458a-ad59-05b3ecd3c493/copy/0.log" Jan 30 14:42:24 crc kubenswrapper[5039]: I0130 14:42:24.766787 5039 scope.go:117] "RemoveContainer" containerID="5d3062e41a30bf7cb39ba417327ee36dcd6828b297e195b0abca77755b30d88a" Jan 30 14:42:24 crc kubenswrapper[5039]: I0130 14:42:24.766837 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bm2kn/must-gather-2252c" Jan 30 14:42:24 crc kubenswrapper[5039]: I0130 14:42:24.801655 5039 scope.go:117] "RemoveContainer" containerID="787b3b5969b21a01ac8fc638d5bb3721916a1423bc56577ab8da22e3814b0f5b" Jan 30 14:42:26 crc kubenswrapper[5039]: I0130 14:42:26.103116 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="247caddf-72ba-458a-ad59-05b3ecd3c493" path="/var/lib/kubelet/pods/247caddf-72ba-458a-ad59-05b3ecd3c493/volumes" Jan 30 14:42:37 crc kubenswrapper[5039]: I0130 14:42:37.742666 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:42:37 crc kubenswrapper[5039]: I0130 14:42:37.743419 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:43:01 crc kubenswrapper[5039]: I0130 14:43:01.629644 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-chf4d"] Jan 30 14:43:01 crc kubenswrapper[5039]: E0130 14:43:01.630508 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97796c73-e813-4e98-9b09-d4165fc8cad8" containerName="extract-utilities" Jan 30 14:43:01 crc kubenswrapper[5039]: I0130 14:43:01.630524 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="97796c73-e813-4e98-9b09-d4165fc8cad8" containerName="extract-utilities" Jan 30 14:43:01 crc kubenswrapper[5039]: E0130 14:43:01.630542 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="247caddf-72ba-458a-ad59-05b3ecd3c493" containerName="copy" Jan 30 14:43:01 crc kubenswrapper[5039]: I0130 14:43:01.630548 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="247caddf-72ba-458a-ad59-05b3ecd3c493" containerName="copy" Jan 30 14:43:01 crc kubenswrapper[5039]: E0130 14:43:01.630556 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97796c73-e813-4e98-9b09-d4165fc8cad8" containerName="extract-content" Jan 30 14:43:01 crc kubenswrapper[5039]: I0130 14:43:01.630563 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="97796c73-e813-4e98-9b09-d4165fc8cad8" containerName="extract-content" Jan 30 14:43:01 crc kubenswrapper[5039]: E0130 14:43:01.630577 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97796c73-e813-4e98-9b09-d4165fc8cad8" containerName="registry-server" Jan 30 14:43:01 crc kubenswrapper[5039]: I0130 14:43:01.630582 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="97796c73-e813-4e98-9b09-d4165fc8cad8" containerName="registry-server" Jan 30 14:43:01 crc kubenswrapper[5039]: E0130 14:43:01.630606 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="247caddf-72ba-458a-ad59-05b3ecd3c493" containerName="gather" Jan 30 14:43:01 crc kubenswrapper[5039]: I0130 14:43:01.630611 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="247caddf-72ba-458a-ad59-05b3ecd3c493" containerName="gather" Jan 30 14:43:01 crc kubenswrapper[5039]: I0130 14:43:01.630757 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="97796c73-e813-4e98-9b09-d4165fc8cad8" 
containerName="registry-server" Jan 30 14:43:01 crc kubenswrapper[5039]: I0130 14:43:01.630773 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="247caddf-72ba-458a-ad59-05b3ecd3c493" containerName="gather" Jan 30 14:43:01 crc kubenswrapper[5039]: I0130 14:43:01.630786 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="247caddf-72ba-458a-ad59-05b3ecd3c493" containerName="copy" Jan 30 14:43:01 crc kubenswrapper[5039]: I0130 14:43:01.631935 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-chf4d" Jan 30 14:43:01 crc kubenswrapper[5039]: I0130 14:43:01.674615 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-chf4d"] Jan 30 14:43:01 crc kubenswrapper[5039]: I0130 14:43:01.803668 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vntsx\" (UniqueName: \"kubernetes.io/projected/6f61109b-b039-4b86-a4c1-b2a89dbb7736-kube-api-access-vntsx\") pod \"community-operators-chf4d\" (UID: \"6f61109b-b039-4b86-a4c1-b2a89dbb7736\") " pod="openshift-marketplace/community-operators-chf4d" Jan 30 14:43:01 crc kubenswrapper[5039]: I0130 14:43:01.803849 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f61109b-b039-4b86-a4c1-b2a89dbb7736-utilities\") pod \"community-operators-chf4d\" (UID: \"6f61109b-b039-4b86-a4c1-b2a89dbb7736\") " pod="openshift-marketplace/community-operators-chf4d" Jan 30 14:43:01 crc kubenswrapper[5039]: I0130 14:43:01.803895 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f61109b-b039-4b86-a4c1-b2a89dbb7736-catalog-content\") pod \"community-operators-chf4d\" (UID: \"6f61109b-b039-4b86-a4c1-b2a89dbb7736\") " pod="openshift-marketplace/community-operators-chf4d" Jan 30 14:43:01 crc kubenswrapper[5039]: I0130 14:43:01.907694 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f61109b-b039-4b86-a4c1-b2a89dbb7736-utilities\") pod \"community-operators-chf4d\" (UID: \"6f61109b-b039-4b86-a4c1-b2a89dbb7736\") " pod="openshift-marketplace/community-operators-chf4d" Jan 30 14:43:01 crc kubenswrapper[5039]: I0130 14:43:01.907740 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f61109b-b039-4b86-a4c1-b2a89dbb7736-catalog-content\") pod \"community-operators-chf4d\" (UID: \"6f61109b-b039-4b86-a4c1-b2a89dbb7736\") " pod="openshift-marketplace/community-operators-chf4d" Jan 30 14:43:01 crc kubenswrapper[5039]: I0130 14:43:01.907850 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vntsx\" (UniqueName: \"kubernetes.io/projected/6f61109b-b039-4b86-a4c1-b2a89dbb7736-kube-api-access-vntsx\") pod \"community-operators-chf4d\" (UID: \"6f61109b-b039-4b86-a4c1-b2a89dbb7736\") " pod="openshift-marketplace/community-operators-chf4d" Jan 30 14:43:01 crc kubenswrapper[5039]: I0130 14:43:01.908306 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f61109b-b039-4b86-a4c1-b2a89dbb7736-utilities\") pod \"community-operators-chf4d\" (UID: \"6f61109b-b039-4b86-a4c1-b2a89dbb7736\") " 
pod="openshift-marketplace/community-operators-chf4d" Jan 30 14:43:01 crc kubenswrapper[5039]: I0130 14:43:01.908345 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f61109b-b039-4b86-a4c1-b2a89dbb7736-catalog-content\") pod \"community-operators-chf4d\" (UID: \"6f61109b-b039-4b86-a4c1-b2a89dbb7736\") " pod="openshift-marketplace/community-operators-chf4d" Jan 30 14:43:01 crc kubenswrapper[5039]: I0130 14:43:01.927427 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vntsx\" (UniqueName: \"kubernetes.io/projected/6f61109b-b039-4b86-a4c1-b2a89dbb7736-kube-api-access-vntsx\") pod \"community-operators-chf4d\" (UID: \"6f61109b-b039-4b86-a4c1-b2a89dbb7736\") " pod="openshift-marketplace/community-operators-chf4d" Jan 30 14:43:02 crc kubenswrapper[5039]: I0130 14:43:02.016653 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-chf4d" Jan 30 14:43:02 crc kubenswrapper[5039]: I0130 14:43:02.310211 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-chf4d"] Jan 30 14:43:03 crc kubenswrapper[5039]: I0130 14:43:03.094270 5039 generic.go:334] "Generic (PLEG): container finished" podID="6f61109b-b039-4b86-a4c1-b2a89dbb7736" containerID="7258242001fa93cc4b032bd744c8abe562e8273a0b00e6894bc4d44349ee2439" exitCode=0 Jan 30 14:43:03 crc kubenswrapper[5039]: I0130 14:43:03.094560 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-chf4d" event={"ID":"6f61109b-b039-4b86-a4c1-b2a89dbb7736","Type":"ContainerDied","Data":"7258242001fa93cc4b032bd744c8abe562e8273a0b00e6894bc4d44349ee2439"} Jan 30 14:43:03 crc kubenswrapper[5039]: I0130 14:43:03.094618 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-chf4d" event={"ID":"6f61109b-b039-4b86-a4c1-b2a89dbb7736","Type":"ContainerStarted","Data":"c6159c660333230adc448945ed5d2a8033b055bddb4fd1228cd05a1bf547f1d5"} Jan 30 14:43:03 crc kubenswrapper[5039]: I0130 14:43:03.096354 5039 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 14:43:04 crc kubenswrapper[5039]: I0130 14:43:04.107538 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-chf4d" event={"ID":"6f61109b-b039-4b86-a4c1-b2a89dbb7736","Type":"ContainerStarted","Data":"86e54b6090d174f7ce1ee2f6b507ad1f766197e2bd1d683f5e6e7ec244a0b747"} Jan 30 14:43:05 crc kubenswrapper[5039]: I0130 14:43:05.121743 5039 generic.go:334] "Generic (PLEG): container finished" podID="6f61109b-b039-4b86-a4c1-b2a89dbb7736" containerID="86e54b6090d174f7ce1ee2f6b507ad1f766197e2bd1d683f5e6e7ec244a0b747" exitCode=0 Jan 30 14:43:05 crc kubenswrapper[5039]: I0130 14:43:05.121814 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-chf4d" event={"ID":"6f61109b-b039-4b86-a4c1-b2a89dbb7736","Type":"ContainerDied","Data":"86e54b6090d174f7ce1ee2f6b507ad1f766197e2bd1d683f5e6e7ec244a0b747"} Jan 30 14:43:06 crc kubenswrapper[5039]: I0130 14:43:06.134131 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-chf4d" event={"ID":"6f61109b-b039-4b86-a4c1-b2a89dbb7736","Type":"ContainerStarted","Data":"0d289a98e1716dd306d81ee208cfbfd8498c61a7b2c3e7567c63b7a2003594f8"} Jan 30 14:43:06 crc kubenswrapper[5039]: I0130 
Jan 30 14:43:07 crc kubenswrapper[5039]: I0130 14:43:07.742900 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 14:43:07 crc kubenswrapper[5039]: I0130 14:43:07.742986 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 14:43:12 crc kubenswrapper[5039]: I0130 14:43:12.017126 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-chf4d"
Jan 30 14:43:12 crc kubenswrapper[5039]: I0130 14:43:12.017698 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-chf4d"
Jan 30 14:43:12 crc kubenswrapper[5039]: I0130 14:43:12.078942 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-chf4d"
Jan 30 14:43:12 crc kubenswrapper[5039]: I0130 14:43:12.218635 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-chf4d"
Jan 30 14:43:12 crc kubenswrapper[5039]: I0130 14:43:12.313599 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-chf4d"]
Jan 30 14:43:14 crc kubenswrapper[5039]: I0130 14:43:14.200824 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-chf4d" podUID="6f61109b-b039-4b86-a4c1-b2a89dbb7736" containerName="registry-server" containerID="cri-o://0d289a98e1716dd306d81ee208cfbfd8498c61a7b2c3e7567c63b7a2003594f8" gracePeriod=2
Jan 30 14:43:14 crc kubenswrapper[5039]: I0130 14:43:14.628285 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-chf4d"
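"Killing container with a grace period" with gracePeriod=2 reflects this pod's short termination grace period: the runtime delivers SIGTERM, waits up to the grace period, then escalates to SIGKILL. A process-level sketch of that contract, with os/exec standing in for the container runtime (illustrative, not the kubelet's code path):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"syscall"
    	"time"
    )

    // killWithGrace sends SIGTERM, waits up to grace, then escalates to SIGKILL.
    func killWithGrace(cmd *exec.Cmd, grace time.Duration) {
    	done := make(chan error, 1)
    	go func() { done <- cmd.Wait() }()
    	_ = cmd.Process.Signal(syscall.SIGTERM)
    	select {
    	case <-done:
    		fmt.Println("exited within grace period")
    	case <-time.After(grace):
    		_ = cmd.Process.Kill() // SIGKILL after the grace period expires
    		<-done
    		fmt.Println("killed after grace period expired")
    	}
    }

    func main() {
    	cmd := exec.Command("sleep", "60") // sleep exits promptly on SIGTERM
    	if err := cmd.Start(); err != nil {
    		panic(err)
    	}
    	killWithGrace(cmd, 2*time.Second) // gracePeriod=2, as in the entry above
    }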
Need to start a new one" pod="openshift-marketplace/community-operators-chf4d" Jan 30 14:43:14 crc kubenswrapper[5039]: I0130 14:43:14.823674 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f61109b-b039-4b86-a4c1-b2a89dbb7736-catalog-content\") pod \"6f61109b-b039-4b86-a4c1-b2a89dbb7736\" (UID: \"6f61109b-b039-4b86-a4c1-b2a89dbb7736\") " Jan 30 14:43:14 crc kubenswrapper[5039]: I0130 14:43:14.823810 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f61109b-b039-4b86-a4c1-b2a89dbb7736-utilities\") pod \"6f61109b-b039-4b86-a4c1-b2a89dbb7736\" (UID: \"6f61109b-b039-4b86-a4c1-b2a89dbb7736\") " Jan 30 14:43:14 crc kubenswrapper[5039]: I0130 14:43:14.823880 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vntsx\" (UniqueName: \"kubernetes.io/projected/6f61109b-b039-4b86-a4c1-b2a89dbb7736-kube-api-access-vntsx\") pod \"6f61109b-b039-4b86-a4c1-b2a89dbb7736\" (UID: \"6f61109b-b039-4b86-a4c1-b2a89dbb7736\") " Jan 30 14:43:14 crc kubenswrapper[5039]: I0130 14:43:14.825005 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f61109b-b039-4b86-a4c1-b2a89dbb7736-utilities" (OuterVolumeSpecName: "utilities") pod "6f61109b-b039-4b86-a4c1-b2a89dbb7736" (UID: "6f61109b-b039-4b86-a4c1-b2a89dbb7736"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:43:14 crc kubenswrapper[5039]: I0130 14:43:14.832210 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f61109b-b039-4b86-a4c1-b2a89dbb7736-kube-api-access-vntsx" (OuterVolumeSpecName: "kube-api-access-vntsx") pod "6f61109b-b039-4b86-a4c1-b2a89dbb7736" (UID: "6f61109b-b039-4b86-a4c1-b2a89dbb7736"). InnerVolumeSpecName "kube-api-access-vntsx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:43:14 crc kubenswrapper[5039]: I0130 14:43:14.887845 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f61109b-b039-4b86-a4c1-b2a89dbb7736-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6f61109b-b039-4b86-a4c1-b2a89dbb7736" (UID: "6f61109b-b039-4b86-a4c1-b2a89dbb7736"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:43:14 crc kubenswrapper[5039]: I0130 14:43:14.925604 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f61109b-b039-4b86-a4c1-b2a89dbb7736-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 14:43:14 crc kubenswrapper[5039]: I0130 14:43:14.925642 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f61109b-b039-4b86-a4c1-b2a89dbb7736-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 14:43:14 crc kubenswrapper[5039]: I0130 14:43:14.925658 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vntsx\" (UniqueName: \"kubernetes.io/projected/6f61109b-b039-4b86-a4c1-b2a89dbb7736-kube-api-access-vntsx\") on node \"crc\" DevicePath \"\"" Jan 30 14:43:15 crc kubenswrapper[5039]: I0130 14:43:15.209087 5039 generic.go:334] "Generic (PLEG): container finished" podID="6f61109b-b039-4b86-a4c1-b2a89dbb7736" containerID="0d289a98e1716dd306d81ee208cfbfd8498c61a7b2c3e7567c63b7a2003594f8" exitCode=0 Jan 30 14:43:15 crc kubenswrapper[5039]: I0130 14:43:15.209195 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-chf4d" event={"ID":"6f61109b-b039-4b86-a4c1-b2a89dbb7736","Type":"ContainerDied","Data":"0d289a98e1716dd306d81ee208cfbfd8498c61a7b2c3e7567c63b7a2003594f8"} Jan 30 14:43:15 crc kubenswrapper[5039]: I0130 14:43:15.209351 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-chf4d" event={"ID":"6f61109b-b039-4b86-a4c1-b2a89dbb7736","Type":"ContainerDied","Data":"c6159c660333230adc448945ed5d2a8033b055bddb4fd1228cd05a1bf547f1d5"} Jan 30 14:43:15 crc kubenswrapper[5039]: I0130 14:43:15.209371 5039 scope.go:117] "RemoveContainer" containerID="0d289a98e1716dd306d81ee208cfbfd8498c61a7b2c3e7567c63b7a2003594f8" Jan 30 14:43:15 crc kubenswrapper[5039]: I0130 14:43:15.209228 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-chf4d" Jan 30 14:43:15 crc kubenswrapper[5039]: I0130 14:43:15.228114 5039 scope.go:117] "RemoveContainer" containerID="86e54b6090d174f7ce1ee2f6b507ad1f766197e2bd1d683f5e6e7ec244a0b747" Jan 30 14:43:15 crc kubenswrapper[5039]: I0130 14:43:15.247661 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-chf4d"] Jan 30 14:43:15 crc kubenswrapper[5039]: I0130 14:43:15.254859 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-chf4d"] Jan 30 14:43:15 crc kubenswrapper[5039]: I0130 14:43:15.265284 5039 scope.go:117] "RemoveContainer" containerID="7258242001fa93cc4b032bd744c8abe562e8273a0b00e6894bc4d44349ee2439" Jan 30 14:43:15 crc kubenswrapper[5039]: I0130 14:43:15.304367 5039 scope.go:117] "RemoveContainer" containerID="0d289a98e1716dd306d81ee208cfbfd8498c61a7b2c3e7567c63b7a2003594f8" Jan 30 14:43:15 crc kubenswrapper[5039]: E0130 14:43:15.304915 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d289a98e1716dd306d81ee208cfbfd8498c61a7b2c3e7567c63b7a2003594f8\": container with ID starting with 0d289a98e1716dd306d81ee208cfbfd8498c61a7b2c3e7567c63b7a2003594f8 not found: ID does not exist" containerID="0d289a98e1716dd306d81ee208cfbfd8498c61a7b2c3e7567c63b7a2003594f8" Jan 30 14:43:15 crc kubenswrapper[5039]: I0130 14:43:15.304969 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d289a98e1716dd306d81ee208cfbfd8498c61a7b2c3e7567c63b7a2003594f8"} err="failed to get container status \"0d289a98e1716dd306d81ee208cfbfd8498c61a7b2c3e7567c63b7a2003594f8\": rpc error: code = NotFound desc = could not find container \"0d289a98e1716dd306d81ee208cfbfd8498c61a7b2c3e7567c63b7a2003594f8\": container with ID starting with 0d289a98e1716dd306d81ee208cfbfd8498c61a7b2c3e7567c63b7a2003594f8 not found: ID does not exist" Jan 30 14:43:15 crc kubenswrapper[5039]: I0130 14:43:15.305002 5039 scope.go:117] "RemoveContainer" containerID="86e54b6090d174f7ce1ee2f6b507ad1f766197e2bd1d683f5e6e7ec244a0b747" Jan 30 14:43:15 crc kubenswrapper[5039]: E0130 14:43:15.305357 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86e54b6090d174f7ce1ee2f6b507ad1f766197e2bd1d683f5e6e7ec244a0b747\": container with ID starting with 86e54b6090d174f7ce1ee2f6b507ad1f766197e2bd1d683f5e6e7ec244a0b747 not found: ID does not exist" containerID="86e54b6090d174f7ce1ee2f6b507ad1f766197e2bd1d683f5e6e7ec244a0b747" Jan 30 14:43:15 crc kubenswrapper[5039]: I0130 14:43:15.305388 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86e54b6090d174f7ce1ee2f6b507ad1f766197e2bd1d683f5e6e7ec244a0b747"} err="failed to get container status \"86e54b6090d174f7ce1ee2f6b507ad1f766197e2bd1d683f5e6e7ec244a0b747\": rpc error: code = NotFound desc = could not find container \"86e54b6090d174f7ce1ee2f6b507ad1f766197e2bd1d683f5e6e7ec244a0b747\": container with ID starting with 86e54b6090d174f7ce1ee2f6b507ad1f766197e2bd1d683f5e6e7ec244a0b747 not found: ID does not exist" Jan 30 14:43:15 crc kubenswrapper[5039]: I0130 14:43:15.305405 5039 scope.go:117] "RemoveContainer" containerID="7258242001fa93cc4b032bd744c8abe562e8273a0b00e6894bc4d44349ee2439" Jan 30 14:43:15 crc kubenswrapper[5039]: E0130 14:43:15.305687 5039 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"7258242001fa93cc4b032bd744c8abe562e8273a0b00e6894bc4d44349ee2439\": container with ID starting with 7258242001fa93cc4b032bd744c8abe562e8273a0b00e6894bc4d44349ee2439 not found: ID does not exist" containerID="7258242001fa93cc4b032bd744c8abe562e8273a0b00e6894bc4d44349ee2439" Jan 30 14:43:15 crc kubenswrapper[5039]: I0130 14:43:15.305721 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7258242001fa93cc4b032bd744c8abe562e8273a0b00e6894bc4d44349ee2439"} err="failed to get container status \"7258242001fa93cc4b032bd744c8abe562e8273a0b00e6894bc4d44349ee2439\": rpc error: code = NotFound desc = could not find container \"7258242001fa93cc4b032bd744c8abe562e8273a0b00e6894bc4d44349ee2439\": container with ID starting with 7258242001fa93cc4b032bd744c8abe562e8273a0b00e6894bc4d44349ee2439 not found: ID does not exist" Jan 30 14:43:16 crc kubenswrapper[5039]: I0130 14:43:16.103505 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f61109b-b039-4b86-a4c1-b2a89dbb7736" path="/var/lib/kubelet/pods/6f61109b-b039-4b86-a4c1-b2a89dbb7736/volumes" Jan 30 14:43:37 crc kubenswrapper[5039]: I0130 14:43:37.742742 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:43:37 crc kubenswrapper[5039]: I0130 14:43:37.743238 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:43:37 crc kubenswrapper[5039]: I0130 14:43:37.743280 5039 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" Jan 30 14:43:37 crc kubenswrapper[5039]: I0130 14:43:37.744397 5039 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9c892743700c544a60b6942fe1ed883d6034adbcc2dc0f323aa256572d1f1d19"} pod="openshift-machine-config-operator/machine-config-daemon-t2btn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 14:43:37 crc kubenswrapper[5039]: I0130 14:43:37.744455 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" containerID="cri-o://9c892743700c544a60b6942fe1ed883d6034adbcc2dc0f323aa256572d1f1d19" gracePeriod=600 Jan 30 14:43:37 crc kubenswrapper[5039]: E0130 14:43:37.865641 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:43:38 crc kubenswrapper[5039]: I0130 14:43:38.380598 5039 generic.go:334] 
"Generic (PLEG): container finished" podID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerID="9c892743700c544a60b6942fe1ed883d6034adbcc2dc0f323aa256572d1f1d19" exitCode=0 Jan 30 14:43:38 crc kubenswrapper[5039]: I0130 14:43:38.380684 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerDied","Data":"9c892743700c544a60b6942fe1ed883d6034adbcc2dc0f323aa256572d1f1d19"} Jan 30 14:43:38 crc kubenswrapper[5039]: I0130 14:43:38.380937 5039 scope.go:117] "RemoveContainer" containerID="0d114dadbe14f3b8f66cb4c1a192ea2be2c5b28f729a330aa23afe91758bdd3f" Jan 30 14:43:38 crc kubenswrapper[5039]: I0130 14:43:38.381620 5039 scope.go:117] "RemoveContainer" containerID="9c892743700c544a60b6942fe1ed883d6034adbcc2dc0f323aa256572d1f1d19" Jan 30 14:43:38 crc kubenswrapper[5039]: E0130 14:43:38.381928 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:43:42 crc kubenswrapper[5039]: I0130 14:43:42.037756 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-c014-account-create-update-px7xb"] Jan 30 14:43:42 crc kubenswrapper[5039]: I0130 14:43:42.045253 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-75gqg"] Jan 30 14:43:42 crc kubenswrapper[5039]: I0130 14:43:42.053315 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-c014-account-create-update-px7xb"] Jan 30 14:43:42 crc kubenswrapper[5039]: I0130 14:43:42.059904 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-75gqg"] Jan 30 14:43:42 crc kubenswrapper[5039]: I0130 14:43:42.108189 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c11ff9c9-2927-49d7-a52b-995f63c75e72" path="/var/lib/kubelet/pods/c11ff9c9-2927-49d7-a52b-995f63c75e72/volumes" Jan 30 14:43:42 crc kubenswrapper[5039]: I0130 14:43:42.109184 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f140476b-d9d4-4ca6-bac1-d4f91a64c18b" path="/var/lib/kubelet/pods/f140476b-d9d4-4ca6-bac1-d4f91a64c18b/volumes" Jan 30 14:43:48 crc kubenswrapper[5039]: I0130 14:43:48.029758 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-ttzhq"] Jan 30 14:43:48 crc kubenswrapper[5039]: I0130 14:43:48.037193 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-ttzhq"] Jan 30 14:43:48 crc kubenswrapper[5039]: I0130 14:43:48.104169 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c1e26bd-8401-41c3-b195-93755cd10148" path="/var/lib/kubelet/pods/5c1e26bd-8401-41c3-b195-93755cd10148/volumes" Jan 30 14:43:51 crc kubenswrapper[5039]: I0130 14:43:51.093747 5039 scope.go:117] "RemoveContainer" containerID="9c892743700c544a60b6942fe1ed883d6034adbcc2dc0f323aa256572d1f1d19" Jan 30 14:43:51 crc kubenswrapper[5039]: E0130 14:43:51.094335 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Jan 30 14:44:05 crc kubenswrapper[5039]: I0130 14:44:05.094298 5039 scope.go:117] "RemoveContainer" containerID="9c892743700c544a60b6942fe1ed883d6034adbcc2dc0f323aa256572d1f1d19"
Jan 30 14:44:05 crc kubenswrapper[5039]: E0130 14:44:05.095157 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5"
Jan 30 14:44:10 crc kubenswrapper[5039]: I0130 14:44:10.149813 5039 scope.go:117] "RemoveContainer" containerID="d2ae020157c6d76d091694156bd9e3731918a6526fde77dcc110792ce89d7146"
Jan 30 14:44:10 crc kubenswrapper[5039]: I0130 14:44:10.173331 5039 scope.go:117] "RemoveContainer" containerID="ea49546d44b145c763faeeddfb01cf8df4833ffe3252d6c03b7553114b8c8f24"
Jan 30 14:44:10 crc kubenswrapper[5039]: I0130 14:44:10.212049 5039 scope.go:117] "RemoveContainer" containerID="b2f95c5353afb0887ba5fd142de58ab88a98901e563ec6f4ecd99afa5c18a28c"
Jan 30 14:44:14 crc kubenswrapper[5039]: I0130 14:44:14.042086 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-f8pgs"]
Jan 30 14:44:14 crc kubenswrapper[5039]: I0130 14:44:14.052743 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-f8pgs"]
Jan 30 14:44:14 crc kubenswrapper[5039]: I0130 14:44:14.110185 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="babc668e-cf9b-4d6c-8a45-f79e141cfc0e" path="/var/lib/kubelet/pods/babc668e-cf9b-4d6c-8a45-f79e141cfc0e/volumes"
Jan 30 14:44:15 crc kubenswrapper[5039]: I0130 14:44:15.023195 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-bb18-account-create-update-kkffq"]
Jan 30 14:44:15 crc kubenswrapper[5039]: I0130 14:44:15.030290 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-bb18-account-create-update-kkffq"]
Jan 30 14:44:16 crc kubenswrapper[5039]: I0130 14:44:16.105356 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c46ecdf-d569-4ebc-8963-909b6e460e18" path="/var/lib/kubelet/pods/9c46ecdf-d569-4ebc-8963-909b6e460e18/volumes"
Jan 30 14:44:18 crc kubenswrapper[5039]: I0130 14:44:18.093876 5039 scope.go:117] "RemoveContainer" containerID="9c892743700c544a60b6942fe1ed883d6034adbcc2dc0f323aa256572d1f1d19"
Jan 30 14:44:18 crc kubenswrapper[5039]: E0130 14:44:18.094763 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5"
Jan 30 14:44:24 crc kubenswrapper[5039]: I0130 14:44:24.033468 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-8bsx9"]
Jan 30 14:44:24 crc kubenswrapper[5039]: I0130 14:44:24.040367 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-8bsx9"]
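The burst of RemoveContainer calls at 14:44:10, with no pod deletion around them, looks like periodic container garbage collection: the kubelet prunes exited containers beyond its retention limits, oldest first. A sketch of such a policy, assuming a simple list of dead containers and a per-container retention count (the real containerGC policy is richer than this):

    package main

    import (
    	"fmt"
    	"sort"
    	"time"
    )

    type deadContainer struct {
    	id       string
    	finished time.Time
    }

    // gcDeadContainers keeps at most keep recent corpses and removes the
    // rest, oldest first.
    func gcDeadContainers(dead []deadContainer, keep int) {
    	sort.Slice(dead, func(i, j int) bool { return dead[i].finished.After(dead[j].finished) })
    	if keep > len(dead) {
    		keep = len(dead)
    	}
    	for _, c := range dead[keep:] {
    		fmt.Printf("RemoveContainer containerID=%q\n", c.id)
    	}
    }

    func main() {
    	now := time.Now()
    	gcDeadContainers([]deadContainer{
    		{"d2ae020157c6", now.Add(-3 * time.Hour)},
    		{"ea49546d44b1", now.Add(-2 * time.Hour)},
    		{"b2f95c5353af", now.Add(-1 * time.Hour)},
    	}, 1) // removes the two oldest, keeps the newest
    }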
pods=["openstack/neutron-db-sync-8bsx9"] Jan 30 14:44:24 crc kubenswrapper[5039]: I0130 14:44:24.103585 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca210a91-180c-4a6a-8334-1d294092b8a3" path="/var/lib/kubelet/pods/ca210a91-180c-4a6a-8334-1d294092b8a3/volumes" Jan 30 14:44:31 crc kubenswrapper[5039]: I0130 14:44:31.093941 5039 scope.go:117] "RemoveContainer" containerID="9c892743700c544a60b6942fe1ed883d6034adbcc2dc0f323aa256572d1f1d19" Jan 30 14:44:31 crc kubenswrapper[5039]: E0130 14:44:31.094763 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:44:42 crc kubenswrapper[5039]: I0130 14:44:42.094002 5039 scope.go:117] "RemoveContainer" containerID="9c892743700c544a60b6942fe1ed883d6034adbcc2dc0f323aa256572d1f1d19" Jan 30 14:44:42 crc kubenswrapper[5039]: E0130 14:44:42.094860 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:44:57 crc kubenswrapper[5039]: I0130 14:44:57.093375 5039 scope.go:117] "RemoveContainer" containerID="9c892743700c544a60b6942fe1ed883d6034adbcc2dc0f323aa256572d1f1d19" Jan 30 14:44:57 crc kubenswrapper[5039]: E0130 14:44:57.094447 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:45:00 crc kubenswrapper[5039]: I0130 14:45:00.141702 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496405-wgjwh"] Jan 30 14:45:00 crc kubenswrapper[5039]: E0130 14:45:00.142382 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f61109b-b039-4b86-a4c1-b2a89dbb7736" containerName="extract-utilities" Jan 30 14:45:00 crc kubenswrapper[5039]: I0130 14:45:00.142398 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f61109b-b039-4b86-a4c1-b2a89dbb7736" containerName="extract-utilities" Jan 30 14:45:00 crc kubenswrapper[5039]: E0130 14:45:00.142413 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f61109b-b039-4b86-a4c1-b2a89dbb7736" containerName="registry-server" Jan 30 14:45:00 crc kubenswrapper[5039]: I0130 14:45:00.142419 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f61109b-b039-4b86-a4c1-b2a89dbb7736" containerName="registry-server" Jan 30 14:45:00 crc kubenswrapper[5039]: E0130 14:45:00.142438 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f61109b-b039-4b86-a4c1-b2a89dbb7736" containerName="extract-content" Jan 30 14:45:00 crc kubenswrapper[5039]: I0130 
Jan 30 14:45:00 crc kubenswrapper[5039]: I0130 14:45:00.142616 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f61109b-b039-4b86-a4c1-b2a89dbb7736" containerName="registry-server"
Jan 30 14:45:00 crc kubenswrapper[5039]: I0130 14:45:00.143237 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-wgjwh"
Jan 30 14:45:00 crc kubenswrapper[5039]: I0130 14:45:00.146306 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 30 14:45:00 crc kubenswrapper[5039]: I0130 14:45:00.146399 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 30 14:45:00 crc kubenswrapper[5039]: I0130 14:45:00.155422 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496405-wgjwh"]
Jan 30 14:45:00 crc kubenswrapper[5039]: I0130 14:45:00.304286 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwlpg\" (UniqueName: \"kubernetes.io/projected/b187d998-888c-405b-8275-67442b5f0b57-kube-api-access-rwlpg\") pod \"collect-profiles-29496405-wgjwh\" (UID: \"b187d998-888c-405b-8275-67442b5f0b57\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-wgjwh"
Jan 30 14:45:00 crc kubenswrapper[5039]: I0130 14:45:00.304712 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b187d998-888c-405b-8275-67442b5f0b57-secret-volume\") pod \"collect-profiles-29496405-wgjwh\" (UID: \"b187d998-888c-405b-8275-67442b5f0b57\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-wgjwh"
Jan 30 14:45:00 crc kubenswrapper[5039]: I0130 14:45:00.304894 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b187d998-888c-405b-8275-67442b5f0b57-config-volume\") pod \"collect-profiles-29496405-wgjwh\" (UID: \"b187d998-888c-405b-8275-67442b5f0b57\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-wgjwh"
Jan 30 14:45:00 crc kubenswrapper[5039]: I0130 14:45:00.406929 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b187d998-888c-405b-8275-67442b5f0b57-config-volume\") pod \"collect-profiles-29496405-wgjwh\" (UID: \"b187d998-888c-405b-8275-67442b5f0b57\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-wgjwh"
Jan 30 14:45:00 crc kubenswrapper[5039]: I0130 14:45:00.407041 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwlpg\" (UniqueName: \"kubernetes.io/projected/b187d998-888c-405b-8275-67442b5f0b57-kube-api-access-rwlpg\") pod \"collect-profiles-29496405-wgjwh\" (UID: \"b187d998-888c-405b-8275-67442b5f0b57\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-wgjwh"
Jan 30 14:45:00 crc kubenswrapper[5039]: I0130 14:45:00.407157 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b187d998-888c-405b-8275-67442b5f0b57-secret-volume\") pod \"collect-profiles-29496405-wgjwh\" (UID: \"b187d998-888c-405b-8275-67442b5f0b57\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-wgjwh"
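The "Caches populated for *v1.ConfigMap / *v1.Secret" lines show that the kubelet does not watch all ConfigMaps and Secrets; it starts a dedicated reflector per object its pods actually reference, scoped with a field selector on the object's name. A sketch of that scoping with client-go, assuming an in-cluster config; the informer wiring here is illustrative, not the kubelet's exact code:

    package main

    import (
    	"time"

    	v1 "k8s.io/api/core/v1"
    	"k8s.io/apimachinery/pkg/fields"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    	"k8s.io/client-go/tools/cache"
    )

    func main() {
    	cfg, err := rest.InClusterConfig()
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	// Watch exactly one ConfigMap by name, the way the kubelet scopes its
    	// per-object reflectors ("collect-profiles-config" in the log above).
    	lw := cache.NewListWatchFromClient(
    		client.CoreV1().RESTClient(),
    		"configmaps",
    		"openshift-operator-lifecycle-manager",
    		fields.OneTermEqualSelector("metadata.name", "collect-profiles-config"),
    	)
    	_, controller := cache.NewInformer(lw, &v1.ConfigMap{}, 30*time.Second, cache.ResourceEventHandlerFuncs{})
    	stop := make(chan struct{})
    	go controller.Run(stop)
    	cache.WaitForCacheSync(stop, controller.HasSynced) // "Caches populated"
    	close(stop)
    }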
\"kubernetes.io/secret/b187d998-888c-405b-8275-67442b5f0b57-secret-volume\") pod \"collect-profiles-29496405-wgjwh\" (UID: \"b187d998-888c-405b-8275-67442b5f0b57\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-wgjwh" Jan 30 14:45:00 crc kubenswrapper[5039]: I0130 14:45:00.408474 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b187d998-888c-405b-8275-67442b5f0b57-config-volume\") pod \"collect-profiles-29496405-wgjwh\" (UID: \"b187d998-888c-405b-8275-67442b5f0b57\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-wgjwh" Jan 30 14:45:00 crc kubenswrapper[5039]: I0130 14:45:00.413390 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b187d998-888c-405b-8275-67442b5f0b57-secret-volume\") pod \"collect-profiles-29496405-wgjwh\" (UID: \"b187d998-888c-405b-8275-67442b5f0b57\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-wgjwh" Jan 30 14:45:00 crc kubenswrapper[5039]: I0130 14:45:00.428970 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwlpg\" (UniqueName: \"kubernetes.io/projected/b187d998-888c-405b-8275-67442b5f0b57-kube-api-access-rwlpg\") pod \"collect-profiles-29496405-wgjwh\" (UID: \"b187d998-888c-405b-8275-67442b5f0b57\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-wgjwh" Jan 30 14:45:00 crc kubenswrapper[5039]: I0130 14:45:00.517667 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-wgjwh" Jan 30 14:45:00 crc kubenswrapper[5039]: I0130 14:45:00.941681 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496405-wgjwh"] Jan 30 14:45:01 crc kubenswrapper[5039]: I0130 14:45:01.036379 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-wgjwh" event={"ID":"b187d998-888c-405b-8275-67442b5f0b57","Type":"ContainerStarted","Data":"7f04a39e45bf6eeb28f4ba1f5df57a400049f24389a1c60f2f5d85640f3d0618"} Jan 30 14:45:02 crc kubenswrapper[5039]: I0130 14:45:02.046460 5039 generic.go:334] "Generic (PLEG): container finished" podID="b187d998-888c-405b-8275-67442b5f0b57" containerID="e27a2a02c7fa919ea78ee0900ad1b8deed5013e12bfefd255a9c0dfce4dd99ae" exitCode=0 Jan 30 14:45:02 crc kubenswrapper[5039]: I0130 14:45:02.046513 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-wgjwh" event={"ID":"b187d998-888c-405b-8275-67442b5f0b57","Type":"ContainerDied","Data":"e27a2a02c7fa919ea78ee0900ad1b8deed5013e12bfefd255a9c0dfce4dd99ae"} Jan 30 14:45:03 crc kubenswrapper[5039]: I0130 14:45:03.343946 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-wgjwh" Jan 30 14:45:03 crc kubenswrapper[5039]: I0130 14:45:03.457211 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b187d998-888c-405b-8275-67442b5f0b57-secret-volume\") pod \"b187d998-888c-405b-8275-67442b5f0b57\" (UID: \"b187d998-888c-405b-8275-67442b5f0b57\") " Jan 30 14:45:03 crc kubenswrapper[5039]: I0130 14:45:03.457291 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwlpg\" (UniqueName: \"kubernetes.io/projected/b187d998-888c-405b-8275-67442b5f0b57-kube-api-access-rwlpg\") pod \"b187d998-888c-405b-8275-67442b5f0b57\" (UID: \"b187d998-888c-405b-8275-67442b5f0b57\") " Jan 30 14:45:03 crc kubenswrapper[5039]: I0130 14:45:03.457333 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b187d998-888c-405b-8275-67442b5f0b57-config-volume\") pod \"b187d998-888c-405b-8275-67442b5f0b57\" (UID: \"b187d998-888c-405b-8275-67442b5f0b57\") " Jan 30 14:45:03 crc kubenswrapper[5039]: I0130 14:45:03.458353 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b187d998-888c-405b-8275-67442b5f0b57-config-volume" (OuterVolumeSpecName: "config-volume") pod "b187d998-888c-405b-8275-67442b5f0b57" (UID: "b187d998-888c-405b-8275-67442b5f0b57"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:45:03 crc kubenswrapper[5039]: I0130 14:45:03.463944 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b187d998-888c-405b-8275-67442b5f0b57-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b187d998-888c-405b-8275-67442b5f0b57" (UID: "b187d998-888c-405b-8275-67442b5f0b57"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:45:03 crc kubenswrapper[5039]: I0130 14:45:03.464935 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b187d998-888c-405b-8275-67442b5f0b57-kube-api-access-rwlpg" (OuterVolumeSpecName: "kube-api-access-rwlpg") pod "b187d998-888c-405b-8275-67442b5f0b57" (UID: "b187d998-888c-405b-8275-67442b5f0b57"). InnerVolumeSpecName "kube-api-access-rwlpg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:45:03 crc kubenswrapper[5039]: I0130 14:45:03.559990 5039 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b187d998-888c-405b-8275-67442b5f0b57-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 14:45:03 crc kubenswrapper[5039]: I0130 14:45:03.560059 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwlpg\" (UniqueName: \"kubernetes.io/projected/b187d998-888c-405b-8275-67442b5f0b57-kube-api-access-rwlpg\") on node \"crc\" DevicePath \"\"" Jan 30 14:45:03 crc kubenswrapper[5039]: I0130 14:45:03.560073 5039 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b187d998-888c-405b-8275-67442b5f0b57-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 14:45:04 crc kubenswrapper[5039]: I0130 14:45:04.062166 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-wgjwh" event={"ID":"b187d998-888c-405b-8275-67442b5f0b57","Type":"ContainerDied","Data":"7f04a39e45bf6eeb28f4ba1f5df57a400049f24389a1c60f2f5d85640f3d0618"} Jan 30 14:45:04 crc kubenswrapper[5039]: I0130 14:45:04.062212 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f04a39e45bf6eeb28f4ba1f5df57a400049f24389a1c60f2f5d85640f3d0618" Jan 30 14:45:04 crc kubenswrapper[5039]: I0130 14:45:04.062269 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-wgjwh" Jan 30 14:45:04 crc kubenswrapper[5039]: I0130 14:45:04.433742 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496360-jxlw8"] Jan 30 14:45:04 crc kubenswrapper[5039]: I0130 14:45:04.443278 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496360-jxlw8"] Jan 30 14:45:06 crc kubenswrapper[5039]: I0130 14:45:06.024825 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-5d2vz"] Jan 30 14:45:06 crc kubenswrapper[5039]: I0130 14:45:06.049495 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-200a-account-create-update-8xkrb"] Jan 30 14:45:06 crc kubenswrapper[5039]: I0130 14:45:06.056840 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-5d2vz"] Jan 30 14:45:06 crc kubenswrapper[5039]: I0130 14:45:06.062861 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-200a-account-create-update-8xkrb"] Jan 30 14:45:06 crc kubenswrapper[5039]: I0130 14:45:06.103245 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b2639f2-7fe0-4d37-9604-9c0260ea09d5" path="/var/lib/kubelet/pods/3b2639f2-7fe0-4d37-9604-9c0260ea09d5/volumes" Jan 30 14:45:06 crc kubenswrapper[5039]: I0130 14:45:06.103833 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de9c141b-39af-4717-91c7-32de6df6ca1d" path="/var/lib/kubelet/pods/de9c141b-39af-4717-91c7-32de6df6ca1d/volumes" Jan 30 14:45:06 crc kubenswrapper[5039]: I0130 14:45:06.104472 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f58690d3-b736-4e20-973e-dc1a555592a1" path="/var/lib/kubelet/pods/f58690d3-b736-4e20-973e-dc1a555592a1/volumes" Jan 30 14:45:10 crc kubenswrapper[5039]: I0130 14:45:10.319573 5039 scope.go:117] "RemoveContainer" 
containerID="7945a5bed6462dd67a2c3f80669fd6928f7d90566b57cf2e307de071698b9515" Jan 30 14:45:10 crc kubenswrapper[5039]: I0130 14:45:10.343840 5039 scope.go:117] "RemoveContainer" containerID="31b575644d8ccaf89bfc5f1a6ba6542847798cbe608c2683dd18ed6afb21a53e" Jan 30 14:45:10 crc kubenswrapper[5039]: I0130 14:45:10.380990 5039 scope.go:117] "RemoveContainer" containerID="d1a497c3b511f76b25c88413e6d36d8eb9fbe8073ea778c8eb39f21b2d9bf8a4" Jan 30 14:45:10 crc kubenswrapper[5039]: I0130 14:45:10.414958 5039 scope.go:117] "RemoveContainer" containerID="1be0d119a9975ed6d81568161c282acbfd97aa3e9d513fcb6bd6d1e8567b126b" Jan 30 14:45:10 crc kubenswrapper[5039]: I0130 14:45:10.474432 5039 scope.go:117] "RemoveContainer" containerID="e6aa64a45910300b400b2b42ea5a2a8fe6a9aa53a2806fee64d57f71479788a5" Jan 30 14:45:10 crc kubenswrapper[5039]: I0130 14:45:10.495556 5039 scope.go:117] "RemoveContainer" containerID="f6c851267b6f51bd46dd6cb1323b4f96452480323d26b2a25fe0a136b252f695" Jan 30 14:45:12 crc kubenswrapper[5039]: I0130 14:45:12.094173 5039 scope.go:117] "RemoveContainer" containerID="9c892743700c544a60b6942fe1ed883d6034adbcc2dc0f323aa256572d1f1d19" Jan 30 14:45:12 crc kubenswrapper[5039]: E0130 14:45:12.094677 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:45:15 crc kubenswrapper[5039]: I0130 14:45:15.033564 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-cl4vn"] Jan 30 14:45:15 crc kubenswrapper[5039]: I0130 14:45:15.041442 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-cl4vn"] Jan 30 14:45:16 crc kubenswrapper[5039]: I0130 14:45:16.104499 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00da7584-6573-4dac-bfd1-ea7c53ad5b93" path="/var/lib/kubelet/pods/00da7584-6573-4dac-bfd1-ea7c53ad5b93/volumes" var/home/core/zuul-output/logs/crc-cloud-workdir-crc-all-logs.tar.gz0000644000175000000000000000005515137142011024440 0ustar coreroot  Om77'(var/home/core/zuul-output/logs/crc-cloud/0000755000175000000000000000000015137142012017356 5ustar corerootvar/home/core/zuul-output/artifacts/0000755000175000017500000000000015137125535016514 5ustar corecorevar/home/core/zuul-output/docs/0000755000175000017500000000000015137125535015464 5ustar corecore